Re: [openstack-dev] [nova] contextlib.nested and Python3 failing
On Sun, 23 Aug 2015 18:32:33 -0400 Davanum Srinivas dava...@gmail.com wrote:

> Josh, the test.nested() in Nova uses exactly that:
> http://git.openstack.org/cgit/openstack/nova/tree/nova/test.py#n75
>
> -- dims

Oh, discard everything I say then :) My brain must still be partially functioning due to vacation, haha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
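For context: contextlib.nested was removed in Python 3, and a helper like Nova's test.nested() can be rebuilt on top of contextlib.ExitStack. A minimal sketch of that pattern (illustrative only, not Nova's exact code):

```python
import contextlib


@contextlib.contextmanager
def nested(*contexts):
    # Enter each context manager in order; ExitStack unwinds them
    # in reverse order on exit, like contextlib.nested used to.
    with contextlib.ExitStack() as stack:
        yield tuple(stack.enter_context(c) for c in contexts)


# Demo: two trivial context managers entered together.
@contextlib.contextmanager
def tag(name, log):
    log.append("enter " + name)
    yield name
    log.append("exit " + name)


log = []
with nested(tag("a", log), tag("b", log)) as (a, b):
    assert (a, b) == ("a", "b")
assert log == ["enter a", "enter b", "exit b", "exit a"]
```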
[openstack-dev] [wsme] potential issues with WSME 0.8.0
WSME version 0.8.0 was released today with several fixes to error handling and error messages. These fixes make WSME behave more in the way it says it would like to (and should) behave with regard to input validation and HTTP handling. You want these changes.

Unfortunately we've discovered since the release that it causes test failures in Ceilometer, Aodh and Ironic, so it may also cause some issues in other services. The two main issues are:

* More detailed input validation can result in the body of a 4xx response changing to reflect the increased detail of the problem. If you have tests which check this response body, they may now break.

* Formerly, input validation would allow unused fields to pass through and be dropped. As a result of stricter processing throughout the validation handling, this is now considered a client-side error.

There may also be situations where a 500 had been returned in the past but a more correct status code in the 4xx range is now returned.

Fixes for Ceilometer and Ironic are pending and may provide some guidance on fixes other projects might need to make:

* Ironic: https://review.openstack.org/216802
* Ceilometer: https://review.openstack.org/#/c/208467/

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
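One low-churn way to adapt affected tests to the first issue above is to assert on the status code and a stable fragment of the message rather than the full error body. A hypothetical sketch (the response class and message text here are stand-ins, not WSME or project code):

```python
# Stand-in for a WSGI test response object.
class FakeResponse(object):
    def __init__(self, status_int, body):
        self.status_int = status_int
        self.body = body


resp = FakeResponse(
    400, "Invalid input for field 'name': unexpected extra value")

# Brittle (breaks when WSME rewords its errors):
#   assert resp.body == "Invalid input for field 'name'"

# Robust: check the error class and a key fragment only.
assert 400 <= resp.status_int < 500
assert "Invalid input" in resp.body
```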
[openstack-dev] [Tacker][NFV] Tacker Liberty Midcycle meetup - Meeting Minutes
The Tacker team held a Liberty Midcycle meetup last week in San Jose. Thanks to everyone who took the time to participate!

The meeting minutes are captured here:
https://wiki.openstack.org/wiki/Meetings/Tacker/Liberty-Midcycle-Meeting-Minutes

The slides used in the event are available here:
http://www.slideshare.net/SridharRamaswamy/openstack-tacker-liberty-midcycle
[openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI
Hello everyone,

We have been investigating the cause behind the Jenkins check gate-tempest-dsvm-networking-ovn failures (non-voting at the moment). The failures have been happening pretty consistently with every commit. I wanted to start a conversation to get some input as to why these errors may be happening.

One kind of error is related to the following (from the q-svc logs):

  2015-08-04 05:40:28.313 ERROR neutron.agent.ovsdb.impl_idl [req-c189268a-1e1d-462f-a81e-62f0a34ff490 tempest-FloatingIPAdminTestJSON-1706130555 tempest-FloatingIPAdminTestJSON-1943105894]
  Traceback (most recent call last):
    File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/connection.py", line 84, in run
      txn.results.put(txn.do_commit())
    File "/opt/stack/new/neutron/neutron/agent/ovsdb/impl_idl.py", line 99, in do_commit
      seqno)
    File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/idlutils.py", line 125, in wait_for_change
      raise Exception("Timeout")
  Exception: Timeout

When this error happens, in a separate thread there is a DB deadlock. Note that it's not always create_port (65%); it could be delete_port (30%) or other calls (5%). There are many more of these errors (shown below) than the above error. But it is always:

  SQL: u'UPDATE ipavailabilityranges SET first_ip=%s WHERE ipavailabilityranges.allocation_pool_id = %s AND ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s'

  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api     return f(*args, **kwargs)
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api   File "/opt/stack/new/networking-ovn/networking_ovn/plugin.py", line 275, in create_port
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api     db_port = super(OVNPlugin, self).create_port(context, port)
  ...
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api     self.errorhandler(self, exc, value)
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api     raise errorclass, errorvalue
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api DBDeadlock: (_mysql_exceptions.OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction') [SQL: u'UPDATE ipavailabilityranges SET first_ip=%s WHERE ipavailabilityranges.allocation_pool_id = %s AND ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s'] [parameters: ('10.100.0.3', '851466c3-8d6b-4629-bf65-86be2f403e67', '10.100.0.2', '10.100.0.14')]
  2015-08-04 05:39:37.303 9407 ERROR oslo_db.api

Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration from local.conf (https://review.openstack.org/#/c/216413/), which results in PyMySQL as the default. With that change the above DB errors are no longer seen in my local setup. However, the CI setup is having trouble with the gate-networking-ovn-python27 test now, so gate-tempest-dsvm-networking-ovn never runs.

So there are two questions:

1. Is there any impact of using PyMySQL for the Jenkins check and gate jobs?
2. Why is gate-networking-ovn-python27 failing (the past couple of commits) in {0} networking_ovn.tests.unit.test_ovn_plugin.TestOvnPlugin.test_create_port_security [0.194020s] ... FAILED? Do we need another conversation to track this?

Amitabha
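Independent of which driver is in use, transient deadlocks like the one in the log above are normally handled by retrying the whole transaction (oslo.db ships a retry decorator for this purpose). A generic, self-contained sketch of the retry pattern, with a stand-in exception class rather than the real oslo.db/MySQLdb types:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the database driver's deadlock error."""


def retry_on_deadlock(max_retries=3, delay=0.0):
    # Retry the wrapped callable when the DB reports a deadlock,
    # re-raising once the retry budget is exhausted.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


calls = []


@retry_on_deadlock(max_retries=2)
def update_ip_range():
    # Fails once with a deadlock, then succeeds on the retry.
    calls.append(1)
    if len(calls) < 2:
        raise DBDeadlock()
    return "ok"


assert update_ip_range() == "ok"
assert len(calls) == 2
```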
Re: [openstack-dev] [neutron][api] - attaching arbitrary key/value pairs to resources
On 08/25/2015 03:01 AM, Miguel Angel Ajo wrote:
> Doug Wiegley wrote:
>> > In general, the fight in Neutron *has* to be about common definitions of networking primitives that can be potentially implemented by multiple backends whenever possible. That's the entire point of Neutron. I get that it's hard, but that's the value Neutron brings to the table.
>>
>> I think that everyone agrees with you on this point. Including me. The tricky part comes when the speed of neutron adding to the api bottlenecks other things, or when the abstractions just aren't there yet, because the technology in question isn't mature enough. Do we provide relief valves, knowing they will be abused as much as help, or do we hold a hard line? These tags are a relief valve. I'm in favor of them, and I'm in favor of holding to the abstraction. It seems there has to be a middle ground.
>>
>> Thanks,
>> doug
>
> Just thinking out loud: probably trying to stem the tide, would it make sense to block API calls outside neutron core/api from grabbing such tags, with a big warning: if you try to circumvent this, you will harm interoperability of OpenStack, and your plugin will be blocked in the next neutron releases? They could go directly via SQL, but at least they'd know they're doing the wrong thing, and risking a plugin ban, if that's a reasonable measure from our side.

I don't think it's worth the effort or complexity to work too hard at actively preventing it, but anything that helps make it clear to people that it's considered private data (to anything but the API and DB) would be nice. We should be thinking of the people that are intending to play nice, and make it so they don't accidentally use something we don't intend to be used. That's something we can hash out during code review or follow-up patches.

-- Russell Bryant
Re: [openstack-dev] [all] PTL/TC candidate workflow proposal for next elections
On Aug 24, 2015, at 8:21 AM, Anne Gentle | Just Write Click annegen...@justwriteclick.com wrote:

> I understand the workflow to be necessary due to the scale at which we're governing now. With over 40 PTL positions plus the six TC spots rotating, I sense we need to adopt tooling that ensures every project gets equivalent, trackable, audit-able, process-oriented support.

+1

-- Ed Leafe
Re: [openstack-dev] [hacking] [style] multi-line imports PEP 0328
On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote:

> So, I know that hacking has H301 (one import per line) - but say maybe you wanted to import *more* than one thing on a line (there's some exceptions, right? sqlalchemy migrations or something?)

There's never a need to import more than one thing per line given the rule to only import modules, not objects. While that is not currently enforced by hacking, it is a strong style guideline. (Exceptions for things like sqlalchemy do exist, of course.)

-- Kevin L. Mitchell kevin.mitch...@rackspace.com Rackspace
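The module-only rule Kevin describes is what keeps each import on a single line naturally. A small before/after sketch of the style:

```python
# Discouraged (imports objects, which is what invites multi-line
# import lists in the first place):
#   from os.path import join, dirname, basename

# Preferred OpenStack style: import the module and reference names
# through it, so call sites stay explicit about where names come from.
import os.path

# One module per line, one name lookup per call site.
path = os.path.join("etc", "neutron")
assert path == "etc" + os.sep + "neutron"
```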
Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI
On 2015-08-25 10:26:38 -0700 (-0700), Amitabha Biswas wrote:
> [...]
> Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration from local.conf https://review.openstack.org/#/c/216413/ which results in PyMySQL as the default. With the above change the above DB errors are no longer seen in my local setup, the CI setup is having trouble with the gate-networking-ovn-python27 test now therefore the gate-tempest-dsvm-networking-ovn never runs. So there are 2 questions: Is there any impact of using PyMySQL for the Jenkins check and gates.
> [...]

See the many dozens of changes switching from MySQL-python to PyMySQL as our upstream default: https://review.openstack.org/#/q/topic:pymysql-switch,n,z

This direction was decided at the Liberty summit in Vancouver, and upstream DevStack-based jobs have defaulted to PyMySQL for more than a couple of months now (since https://review.openstack.org/191113 merged). This both solves some locking-related performance issues and gets us closer to Python 3.x support.

-- Jeremy Stanley
Re: [openstack-dev] [third-party] Timeout waiting for ssh access
Hi Ramy,

Can you mention the steps for glance scripts?

On Tue, Aug 25, 2015 at 7:49 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

> Hi Tang,
>
> I haven't seen this issue. Which approach are you using to build the image? DIB or via glance scripts? Do you get the same result when using both approaches? If using DIB, what is the OS used to build the image?
>
> Ramy
>
> From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
> Sent: Tuesday, August 25, 2015 5:02 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [third-party] Timeout waiting for ssh access
>
> Hi all,
>
> Does anybody have any idea about this problem? Since Ubuntu does not have /etc/sysconfig/network-scripts/ifcfg-* (obviously that is a Fedora-like filesystem structure), we have tried to use CentOS, but we still got the same error.
>
> Thanks.
>
> On 08/24/2015 09:19 PM, Xie, Xianshan wrote:
>> Hi, all,
>>
>> I'm still struggling to set up a nodepool env, and I got the following error messages:
>>
>>   ERROR nodepool.NodeLauncher: Exception launching node id: 13 in provider: local_01 error:
>>   Traceback (most recent call last):
>>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 405, in _run
>>       dt = self.launchNode(session)
>>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 503, in launchNode
>>       timeout=self.timeout):
>>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 50, in ssh_connect
>>       for count in iterate_timeout(timeout, "ssh access"):
>>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 42, in iterate_timeout
>>       raise Exception("Timeout waiting for %s" % purpose)
>>   Exception: Timeout waiting for ssh access
>>   WARNING nodepool.NodePool: Deleting leaked instance d-p-c-local_01-12 (aa6f58d9-f691-4a72-98db-6add9d0edc1f) in local_01 for node id: 12
>>
>> Meanwhile, in the console.log which records the info for launching this instance, there is also an error as follows:
>>
>>   + sed -i -e s/^\(DNS[0-9]*=[.0-9]\+\)/#\1/g /etc/sysconfig/network-scripts/ifcfg-*
>>   sed: can't read /etc/sysconfig/network-scripts/ifcfg-*: No such file or directory
>>   ...
>>   cloud-init-nonet[26.16]: waiting 120 seconds for network device
>>
>> I have tried to figure out what's causing this error:
>>
>> 1. I mounted image.qcow2 and then checked the network configuration for this instance:
>>
>>      $ cat etc/network/interfaces.d/eth0.cfg
>>      auto eth0
>>      iface eth0 inet dhcp
>>      $ cat etc/network/interfaces
>>      auto lo
>>      iface lo inet loopback
>>      source /etc/network/interfaces.d/*.cfg
>>
>>    It seems good.
>>
>> 2. But indeed, the path /etc/sysconfig/network-scripts/ifcfg-* does not exist. And I don't understand why it attempts to check this configuration file, because my instance is specified to be Ubuntu, not RHEL.
>>
>> So, could you give me some tips to work this out? Thanks in advance.
>>
>> Xiexs

-- Thanks & Regards,
Abhishek
Cloudbyte Inc. http://www.cloudbyte.com
Re: [openstack-dev] [oslo] incubator move to private modules
That would work too! I am not opposed to this either :)

--Morgan
Sent via mobile

On Aug 25, 2015, at 03:01, Davanum Srinivas dava...@gmail.com wrote:

> Morgan,
>
> Bit more radical :) I am inclined to just yank all code from oslo-incubator and let the projects modify/move what they have left into their own package/module structure (and change the contracts however they see fit).
>
> -- Dims
>
> On Tue, Aug 25, 2015 at 1:48 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:
>> Over time oslo incubator has become less important as most things are simply becoming libraries from the get-go. However, there is still code in incubator, and Keystone client in particular has seen an issue where the incubator code is considered a public API by consuming projects.
>>
>> I would like to start the conversation of moving all incubator modules to be prefixed by _, indicating they are not meant for public consumption. I expect that if there is not a large uproar here on the mailing list, I will propose a spec to oslo shortly to make this change possible. What I am looking for before the spec happens is the view from the community on making this type of change and bringing modules private (and associated concerns).
>>
>> Cheers,
>> --Morgan
>> Sent via mobile
>
> -- Davanum Srinivas :: https://twitter.com/dims
[openstack-dev] [nova] testing for setting the admin password via the libvirt driver
Support to change the admin password on an instance via the libvirt driver landed in liberty [1], but the hypervisor support matrix wasn't updated [2]. There is a version restriction in the driver such that it won't work unless you're using at least libvirt 1.2.16. We should at least update the hypervisor support matrix to note that this is supported for libvirt with the version restriction. markus_z actually pointed that out in the review of the change that added the support, but it was ignored.

The other thing I was wondering about was testing. The check/gate queue jobs with Ubuntu 14.04 only have libvirt 1.2.2. There is the Fedora 21 job that runs on the experimental queue, and I've traditionally considered this a place to test out libvirt driver features that need something newer than 1.2.2, but that only goes up to libvirt 1.2.9.3 [3]. It looks like you have to get up to Fedora 23 to be able to test this set-admin-password function [4]. In fact it looks like the only major distro out there right now that ships a new enough version of libvirt is fc23 [5].

Does anyone fancy getting an f23 job set up in the experimental queue for nova? It would be nice to actually be able to test the bleeding edge features that we put into the driver code.

[1] https://review.openstack.org/#/c/185910/
[2] http://docs.openstack.org/developer/nova/support-matrix.html#operation_set_admin_password
[3] http://logs.openstack.org/28/215328/3/check/gate-tempest-dsvm-f21/8e9eae5/logs/rpm-qa.txt.gz
[4] http://rpmfind.net/linux/rpm2html/search.php?query=libvirt
[5] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

-- Thanks, Matt Riedemann
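The version gate described above boils down to a tuple comparison against a minimum libvirt version. A hedged sketch (constant and helper names are illustrative, not Nova's actual code), using the distro versions cited in the mail:

```python
# Minimum libvirt version for the set-admin-password operation,
# per the driver restriction described above.
MIN_LIBVIRT_SET_ADMIN_PASSWD = (1, 2, 16)


def has_min_version(current, minimum):
    # Python tuple comparison is lexicographic, which matches how
    # dotted version components compare element by element.
    return tuple(current) >= tuple(minimum)


assert not has_min_version((1, 2, 2), MIN_LIBVIRT_SET_ADMIN_PASSWD)     # Ubuntu 14.04
assert not has_min_version((1, 2, 9, 3), MIN_LIBVIRT_SET_ADMIN_PASSWD)  # Fedora 21 job
assert has_min_version((1, 2, 18), MIN_LIBVIRT_SET_ADMIN_PASSWD)        # new enough
```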
Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes
On 2015-08-25 10:54:38 +0100 (+0100), Dave Walker wrote:
> On 25 August 2015 at 10:28, Alexis Lee lx...@hpe.com wrote:
>> [...] Without offering an opinion either way, I'm just wondering how tag-every-commit is superior to never tagging? The git SHAs already uniquely identify every commit; if you want only those on master, simply `git log master`. [...]
>
> The issue with this is deterministic version counting between commits, allowing distributed additional commits but still keeping the version counting centralised.
> [...]

I guess to take this one step further, it should be pointed out that deterministic version counting is really another way of saying human-readable branch state serialization. The point is that Git already has a branch state serialization, where every commit includes a pointer reference to the identifiers of its parent commit(s). So really, tagging every state change within the branch is simply an assignment of a memorable name for that new state, mimicking the ordering implicit in Git, and tying that name (via a cryptographic attestation) to the corresponding commit.

The commit ID of any point in the history on a branch is immutable, so aside from being able to discuss memorable names for these rather than a (typically abbreviated) hex representation of a SHA-1 hash, and allowing the participants to fairly intuitively know their sequence without looking it up, I'm unconvinced there's much difference between tagging every commit and tagging no commits.

-- Jeremy Stanley
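Jeremy's point, that a tag is only a memorable alias for an immutable commit ID, can be demonstrated in a throwaway repository (this sketch drives git via subprocess and assumes git is installed; repo contents and tag name are arbitrary):

```python
import subprocess
import tempfile

repo = tempfile.mkdtemp()


def git(*args):
    # Run a git subcommand inside the throwaway repo and return stdout.
    return subprocess.check_output(
        ("git", "-C", repo) + args, universal_newlines=True).strip()


git("init", "-q", ".")
git("-c", "user.email=a@b", "-c", "user.name=test",
    "commit", "-q", "--allow-empty", "-m", "first")
git("tag", "0.0.1")  # a human-memorable name for this branch state

# The tag resolves to exactly the SHA-1 git already used for that state.
assert git("rev-parse", "0.0.1") == git("rev-parse", "HEAD")
```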
Re: [openstack-dev] [openstack-deb] Devstack stable/juno fails to install
On 8/20/2015 6:12 AM, Eduard Matei wrote:
> Hi,
>
> ATM our workaround is to manually pip install futures==2.2.0 before running stack.sh. Any idea when an official fix will be available?
>
> Thanks,
> Eduard

It's being worked on here: https://bugs.launchpad.net/python-swiftclient/+bug/1486576

-- Thanks, Matt Riedemann
Re: [openstack-dev] [third-party] Timeout waiting for ssh access
Hi Tang,

I haven't seen this issue. Which approach are you using to build the image? DIB or via glance scripts? Do you get the same result when using both approaches? If using DIB, what is the OS used to build the image?

Ramy

From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Tuesday, August 25, 2015 5:02 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [third-party] Timeout waiting for ssh access

Hi all,

Does anybody have any idea about this problem? Since Ubuntu does not have /etc/sysconfig/network-scripts/ifcfg-* (obviously that is a Fedora-like filesystem structure), we have tried to use CentOS, but we still got the same error.

Thanks.

On 08/24/2015 09:19 PM, Xie, Xianshan wrote:
> Hi, all,
>
> I'm still struggling to set up a nodepool env, and I got the following error messages:
>
>   ERROR nodepool.NodeLauncher: Exception launching node id: 13 in provider: local_01 error:
>   Traceback (most recent call last):
>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 405, in _run
>       dt = self.launchNode(session)
>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodepool.py", line 503, in launchNode
>       timeout=self.timeout):
>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 50, in ssh_connect
>       for count in iterate_timeout(timeout, "ssh access"):
>     File "/home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py", line 42, in iterate_timeout
>       raise Exception("Timeout waiting for %s" % purpose)
>   Exception: Timeout waiting for ssh access
>   WARNING nodepool.NodePool: Deleting leaked instance d-p-c-local_01-12 (aa6f58d9-f691-4a72-98db-6add9d0edc1f) in local_01 for node id: 12
>
> Meanwhile, in the console.log which records the info for launching this instance, there is also an error as follows:
>
>   + sed -i -e s/^\(DNS[0-9]*=[.0-9]\+\)/#\1/g /etc/sysconfig/network-scripts/ifcfg-*
>   sed: can't read /etc/sysconfig/network-scripts/ifcfg-*: No such file or directory
>   ...
>   cloud-init-nonet[26.16]: waiting 120 seconds for network device
>
> I have tried to figure out what's causing this error:
>
> 1. I mounted image.qcow2 and then checked the network configuration for this instance:
>
>      $ cat etc/network/interfaces.d/eth0.cfg
>      auto eth0
>      iface eth0 inet dhcp
>      $ cat etc/network/interfaces
>      auto lo
>      iface lo inet loopback
>      source /etc/network/interfaces.d/*.cfg
>
>    It seems good.
>
> 2. But indeed, the path /etc/sysconfig/network-scripts/ifcfg-* does not exist. And I don't understand why it attempts to check this configuration file, because my instance is specified to be Ubuntu, not RHEL.
>
> So, could you give me some tips to work this out? Thanks in advance.
>
> Xiexs
Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool
Hi Skyler,

Very nice tool! When do you plan to open source it? Are you considering adding it to the OpenStack big tent [1]? There are a few tools being worked on that provide different information [2][3][4]. It would be nice to consolidate and invest collective effort into one tool. It would be great to meet and discuss in the third-party meeting [5], as Anita suggested. Are you available next Monday or Tuesday?

Thanks!
Ramy

[1] http://docs.openstack.org/infra/manual/creators.html
[2] http://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/monitoring/lastcomment-scoreboard
[3] http://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/monitoring/scoreboard
[4] http://git.openstack.org/cgit/stackforge/radar/tree/
[5] https://wiki.openstack.org/wiki/Meetings/ThirdParty#Weekly_Third_Party_meetings

-----Original Message-----
From: Anita Kuno [mailto:ante...@anteaya.info]
Sent: Monday, August 24, 2015 5:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool

On 08/24/2015 07:59 PM, Skyler Berg wrote:
> Hi all,
>
> I am pleased to announce CI Watch [1], a CI monitoring tool developed at Tintri. For each OpenStack project with third-party CIs, CI Watch shows the status of all CI systems for all recent patch sets on a single dashboard. CI maintainers can use this tool to pinpoint when errors began and to find other CIs affected by similar issues. Core team members can find which vendor CI systems are failing and determine when breaking changes hit their projects. The project dashboards provide access to all relevant logs and reviews, simplifying the process of investigating failures.
>
> CI Watch should also create more transparency within the third-party CI ecosystem. The health of all CIs is now visible to everyone in the community. We hope that by giving everyone this visibility we will make it easier for anyone to find and address issues on CI systems.
>
> Any feedback would be appreciated. We plan to open source this project soon and welcome contributions from anyone interested. For the moment, any bugs, concerns, or ideas can be sent to openstack-...@tintri.com.
>
> [1] ci-watch.tintri.com
>
> Best,
> Skyler Berg

Hi Skyler:

Thanks for your interest in participating in the third-party segment of the OpenStack community. We have a number of people working on dashboards for CI systems. We are working on having infra host one, https://review.openstack.org/#/c/194437/, a tool currently hosted by one of our CI operators, Patrick East, which is open source.

Can I suggest you attend a third-party meeting and perhaps meet some of the other operators and collaborate with them? We don't have any lack of people starting tools; what we lack is a tool which will be maintained.

Thanks for your interest Skyler,
Anita.
Re: [openstack-dev] [third-party] Timeout waiting for ssh access
On Tue, Aug 25, 2015, at 05:01 AM, Tang Chen wrote:
> Hi all,
>
> Does anybody have any idea about this problem? Since Ubuntu does not have /etc/sysconfig/network-scripts/ifcfg-* (obviously that is a Fedora-like filesystem structure), we have tried to use CentOS, but we still got the same error.

This code is a workaround for Rackspace-specific networking details that will fail on Ubuntu or in HPCloud, and it comes with a comment that failures are expected, so we don't run with errexit set. Basically, this should not be related to any of the problems with SSH timeouts.

For ssh timeouts you should be checking that the user set in the nodepool.yaml config file for the image is able to ssh into the VMs booted with your new images using the ssh key also specified in nodepool.yaml. You can test this manually by booting the VM with nova boot, then ssh'ing in by hand as the nodepool user using `ssh -i /path/to/key yourusernamehere@ipaddress`. If this does work then it is possible the default timeout for SSHing is simply too low and you can increase the boot-timeout value [0].

[0] http://docs.openstack.org/infra/nodepool/configuration.html

Hope this helps,
Clark
Re: [openstack-dev] [release] Liberty release branches / [non] capping process
On Tue, Aug 25, 2015, at 05:32 AM, Thierry Carrez wrote:
> Thierry Carrez wrote:
>> [...]
>> 1. Enable master-stable cross-check
>> 2. Release Oslo, make stable branches for Oslo
>> 2.1 Converge constraints
>> 3. liberty-3 / FF / soft requirements freeze
>> 4. hard requirements freeze
>> 5. RC1 / make stable branches for services
>> 6. Branch requirements, disable cross-check
>> 7. Unfreeze requirements
>
> I discussed this with Robert this morning on #openstack-relmgr-office and it appears the plan is still valid. It was also confirmed in the spec at http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html It still feels reasonable on the face of it, but we need to double-check it and expand on the details. In particular I'm wondering:
>
> * is there anything in the implemented constraints system that changes the deal here?

No. Some jobs are not running under the constrained dep system yet, but that doesn't make the plan invalid.

> * Is the new constraints system already set up to work on the upcoming stable/liberty branches?

We need to double-check, but by default the stable/liberty code branches should test with master requirements until stable/liberty requirements are branched out, at which point it should test stable/liberty code with stable/liberty requirements. devstack-gate (which wraps most of this) has a default behavior of attempting to check out the change-specific ref, falling back to the change-specific branch, and finally using master if everything else fails. This means that changes to stable/liberty should use master requirements until we cut a stable/liberty branch on requirements. The details are actually slightly more complicated than that and you can read through them at https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n393

> * what did we exactly mean by "master-stable cross-check"?
>
> For completeness, during the transition period (when we use master requirements for both master and stable/liberty branches) we need a job to gate proposed master requirements changes on stable/liberty test jobs in addition to master test jobs. Otherwise we may introduce a change in master requirements that breaks stable code branches. Once we have stable/liberty requirements branched out, we don't need that job anymore.

This however will need to be added to JJB and zuul. If someone can give me a rough idea of what jobs need to be run in this way (e.g. what is sufficient) I can go ahead and work on getting a change to do that pushed. Note that this would end up being a special case, as we haven't tested changes to master of projects without stable branches against the stable branches of projects with the new stable branches in place. This is one reason I have been a proponent for branching stable early across the board: to ensure stable works and master works and we don't have to worry about syncing the two.

> * what did we exactly mean by "Converge constraints"?
>
> We need to merge any lingering requirements bumps in projects before we create stable/liberty branches for them. With liberty-3 / FF being Thursday next week, we need to start implementing steps 1, 2 and 2.1 in the next 10 days, so we need to urgently check that this is still a valid plan (and any implementation detail). The most urgent is to figure out if we can have a master-stable cross-check enabled before we start cutting stable/liberty branches. Plan B (if we can't) is to apply extra caution to any master requirements change during the overlap period, without the gate safety net.

Clark
Re: [openstack-dev] [nova] periodic task
On 8/24/2015 9:32 PM, Gary Kotton wrote: In item #2 below the reboot is done via the guest and not the nova api's :) From: Gary Kotton gkot...@vmware.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Monday, August 24, 2015 at 7:18 PM To: OpenStack List openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] periodic task Hi, A couple of months ago I posted a patch for bug https://launchpad.net/bugs/1463688. The issue is as follows: the periodic task detects that the instance state does not match the state on the hypervisor and it shuts down the running VM. There are a number of ways that this may happen and I will try to explain: 1. Vmware driver example: a host where the instances are running goes down. This could be a power outage, host failure, etc. The first iteration of the periodic task will determine that the actual instance is down. This will update the state of the instance to DOWN. The VC has the ability to do HA and it will start the instance up and running again. The next iteration of the periodic task will determine that the instance is up and the compute manager will stop the instance. 2. All drivers. The tenant decides to do a reboot of the instance and that coincides with the periodic task state validation. At this point in time the instance will not be up and the compute node will update the state of the instance as DOWN. Next iteration the states will differ and the instance will be shut down. Basically the issue hit us with our CI and there was no CI running for a couple of hours due to the fact that the compute node decided to shut down the running instances. The hypervisor should be the source of truth and it should not be the compute node that decides to shut down instances. I posted a patch to deal with this: https://review.openstack.org/#/c/190047/, which is the reason for this mail.
The patch is backwards compatible, so existing deployments and the random shutdowns continue to work as they do today, and the admin now has the ability to just log if there is an inconsistency. We do not want to disable the periodic task, as knowing the current state of the instance is very important and has a ton of value; we just do not want the periodic task to shut down a running instance. Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev In #2 the guest shouldn't be rebooted by the user (tenant) outside of the nova-api. I'm not sure if it's actually formally documented in the nova documentation, but from what I've always heard/known, nova is the control plane and you should be doing everything with your instances via the nova-api. If the user rebooted via nova-api, the task_state would be set and the periodic task would ignore the instance. -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
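A rough sketch of the behavior the review proposes, as I read the description — the option name and code shape here are invented for illustration, and the actual patch should be consulted for the real config knob and code path:

```python
import logging

LOG = logging.getLogger("sync_power_states_sketch")

def handle_power_state_mismatch(instance_id, db_state, hv_state,
                                action="stop"):
    """Compare DB vs hypervisor power state; either stop the instance
    (today's behavior) or, with action="log", only record the mismatch."""
    if db_state == hv_state:
        return "in-sync"
    if action == "log":
        # Proposed admin-selectable behavior: log the inconsistency only.
        LOG.warning("Instance %s: DB state %s != hypervisor state %s",
                    instance_id, db_state, hv_state)
        return "logged"
    # Existing (default) behavior: the compute manager stops the instance.
    return "stopped"

print(handle_power_state_mismatch("uuid-1", "ACTIVE", "SHUTDOWN"))
print(handle_power_state_mismatch("uuid-1", "ACTIVE", "SHUTDOWN", action="log"))
```

The default preserves backwards compatibility; only deployments that opt into "log" change behavior.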
Re: [openstack-dev] [nova][qa] libvirt + LXC CI - where's the beef?
On 8/20/2015 10:42 AM, Matt Riedemann wrote: On 8/20/2015 5:33 AM, John Garbutt wrote: On 20 August 2015 at 03:08, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: After spending a few hours on https://bugs.launchpad.net/nova/+bug/1370590 I'm annoyed by the fact we don't yet have a CI system for testing libvirt + LXC. Big thank you for raising this one. At the Juno midcycle in Portland I thought I remember some guy(s) from Rackspace talking about getting a CI job running, whatever happened with that? Now you mention it, I remember that. I haven't heard any news about that, let me poke some people. It seems like we should be able to get this going using community infra, right? Just need some warm bodies to get the parts together and figure out which Tempest tests can't be run with that setup - but we have the hypervisor support matrix to help us out as a starter. +1 It also seems unfair to require third party CI for libvirt + parallels (virtuozzo) but we don't have the same requirement for LXC. The original excuse was that it didn't bring much value, as most of the LXC differences were in libvirt. But given the recent bugs that have cropped up, that was totally the wrong call. I think we need to add a log message saying: LXC support is untested, and will be removed during Mitaka if we do not get a CI in place. Following the rules here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements Does that make sense? John PS I must kick off the feature classification push, so we can discuss that for real at the summit. Really I am looking for folks to help with that and help monitor which bits of the support matrix are actually tested.
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev To recap from today's nova meeting, apmelton provided the simple localrc [1] for getting devstack setup with lxc. He noted that there are some known issues, however: 1. nbd isn't installed by default in devstack - I think this goes back to the nbd + neutron ubuntu kernel panic on 12.04 back in the havana/icehouse timeframe. He also said that nbd appears to leak resources. I'm not entirely sure at this point if we can use something other than nbd like guestfs or the loop mount stuff if the image format is 'raw'. We'll have to tinker. It also sounds like Rackspace has some patches to workaround the nbd issues and apmelton was going to look at upstreaming those. 2. There is some weird intermittent issue where the network on the public interface just drops. -- The rough plan is for me to try and get a project-config change started for an lxc job that we can run in nova's experimental queue. We'll keep the blacklisted tempest tests in the nova tree like we do for cells. I'll probably need help with any devstack changes required. [1] https://gist.github.com/ramielrowe/081deaf0c6b79aec6890 I've started an etherpad to track the work needed to get an lxc job going in nova's experimental queue: https://etherpad.openstack.org/p/nova-lxc-ci -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
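For reference, a minimal local.conf sketch for the libvirt+LXC devstack setup described above — untested here; VIRT_DRIVER and LIBVIRT_TYPE are standard devstack variables, and the linked gist remains the authoritative version:

```
[[local|localrc]]
# Run nova-compute with the libvirt driver using the LXC virt type.
VIRT_DRIVER=libvirt
LIBVIRT_TYPE=lxc
```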
Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)
On 08/25/2015 03:42 PM, Thomas Goirand wrote: Hi, [...] Anyway, the result is that mock 1.3 broke at least 9 packages in Kilo, currently in Sid [1]. Maybe, as packages get rebuilt, I'll get more bug reports. This, really, is a depressing situation. [...] Some ppl on IRC explained to me what the situation was, which is that the mock API had been wrongly used, and some tests were in fact wrongly passing, so indeed, this is one of the rare cases where breaking the API probably made sense. Since repairing these tests doesn't bring anything, I'm just not running them in Kilo from now on, using something like this: --subunit 'tests\.unit\.(?!.*foo.*)' Please comment if you think that's the wrong way to go. Also, have some of these been repaired in the stable/kilo branch? Cheers, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
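To illustrate the exclusion expression above: the negative lookahead matches any unit-test path except those containing the skipped module name ("foo" is a placeholder, as in the original):

```python
import re

# The filter from the mail: match tests.unit.* paths whose remainder does
# NOT contain "foo" ("foo" stands in for the real broken modules).
pattern = re.compile(r'tests\.unit\.(?!.*foo.*)')

print(bool(pattern.match("tests.unit.test_bar.TestBar.test_ok")))  # matches
print(bool(pattern.match("tests.unit.foo.test_mock.TestFoo")))     # excluded
```

So the test runner still discovers everything under tests.unit except the listed modules.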
Re: [openstack-dev] [nova] periodic task
On 8/25/15, 7:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 8/24/2015 9:32 PM, Gary Kotton wrote: In item #2 below the reboot is done via the guest and not the nova api's :) From: Gary Kotton gkot...@vmware.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Monday, August 24, 2015 at 7:18 PM To: OpenStack List openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] periodic task Hi, A couple of months ago I posted a patch for bug https://launchpad.net/bugs/1463688. The issue is as follows: the periodic task detects that the instance state does not match the state on the hypervisor and it shuts down the running VM. There are a number of ways that this may happen and I will try to explain: 1. Vmware driver example: a host where the instances are running goes down. This could be a power outage, host failure, etc. The first iteration of the periodic task will determine that the actual instance is down. This will update the state of the instance to DOWN. The VC has the ability to do HA and it will start the instance up and running again. The next iteration of the periodic task will determine that the instance is up and the compute manager will stop the instance. 2. All drivers. The tenant decides to do a reboot of the instance and that coincides with the periodic task state validation. At this point in time the instance will not be up and the compute node will update the state of the instance as DOWN. Next iteration the states will differ and the instance will be shut down. Basically the issue hit us with our CI and there was no CI running for a couple of hours due to the fact that the compute node decided to shut down the running instances. The hypervisor should be the source of truth and it should not be the compute node that decides to shut down instances.
I posted a patch to deal with this: https://review.openstack.org/#/c/190047/, which is the reason for this mail. The patch is backwards compatible, so existing deployments and the random shutdowns continue to work as they do today, and the admin now has the ability to just log if there is an inconsistency. We do not want to disable the periodic task, as knowing the current state of the instance is very important and has a ton of value; we just do not want the periodic task to shut down a running instance. Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev In #2 the guest shouldn't be rebooted by the user (tenant) outside of the nova-api. I'm not sure if it's actually formally documented in the nova documentation, but from what I've always heard/known, nova is the control plane and you should be doing everything with your instances via the nova-api. If the user rebooted via nova-api, the task_state would be set and the periodic task would ignore the instance. Matt, this is one case that I showed where the problem occurs. There are others and I can invest time to find them. The fact that the periodic task is there is important. What I don't understand is why having the option of a log indication for the admin is considered not useful, and instead we are going with having the compute node shut down instances when this should not happen. Our infrastructure is behaving like cattle. That should not be the case and the hypervisor should be the source of truth. This is a serious issue and instances in production can and will go down.
-- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [puppet] weekly meeting #48
On 08/24/2015 07:59 AM, Emilien Macchi wrote: Hello, Here's an initial agenda for our weekly meeting, Tuesday at 1500 UTC in #openstack-meeting-4: https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150825 Please add additional items you'd like to discuss. If our schedule allows it, we'll make bug triage during the meeting. We did our meeting, you can read the notes: http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html Best, -- Emilien Macchi signature.asc Description: OpenPGP digital signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] periodic task
On 8/25/2015 10:03 AM, Gary Kotton wrote: On 8/25/15, 7:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 8/24/2015 9:32 PM, Gary Kotton wrote: In item #2 below the reboot is done via the guest and not the nova api's :) From: Gary Kotton gkot...@vmware.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Monday, August 24, 2015 at 7:18 PM To: OpenStack List openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] periodic task Hi, A couple of months ago I posted a patch for bug https://launchpad.net/bugs/1463688. The issue is as follows: the periodic task detects that the instance state does not match the state on the hypervisor and it shuts down the running VM. There are a number of ways that this may happen and I will try to explain: 1. Vmware driver example: a host where the instances are running goes down. This could be a power outage, host failure, etc. The first iteration of the periodic task will determine that the actual instance is down. This will update the state of the instance to DOWN. The VC has the ability to do HA and it will start the instance up and running again. The next iteration of the periodic task will determine that the instance is up and the compute manager will stop the instance. 2. All drivers. The tenant decides to do a reboot of the instance and that coincides with the periodic task state validation. At this point in time the instance will not be up and the compute node will update the state of the instance as DOWN. Next iteration the states will differ and the instance will be shut down. Basically the issue hit us with our CI and there was no CI running for a couple of hours due to the fact that the compute node decided to shut down the running instances. The hypervisor should be the source of truth and it should not be the compute node that decides to shut down instances.
I posted a patch to deal with this: https://review.openstack.org/#/c/190047/, which is the reason for this mail. The patch is backwards compatible, so existing deployments and the random shutdowns continue to work as they do today, and the admin now has the ability to just log if there is an inconsistency. We do not want to disable the periodic task, as knowing the current state of the instance is very important and has a ton of value; we just do not want the periodic task to shut down a running instance. Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev In #2 the guest shouldn't be rebooted by the user (tenant) outside of the nova-api. I'm not sure if it's actually formally documented in the nova documentation, but from what I've always heard/known, nova is the control plane and you should be doing everything with your instances via the nova-api. If the user rebooted via nova-api, the task_state would be set and the periodic task would ignore the instance. Matt, this is one case that I showed where the problem occurs. There are others and I can invest time to find them. The fact that the periodic task is there is important. What I don't understand is why having the option of a log indication for the admin is considered not useful, and instead we are going with having the compute node shut down instances when this should not happen. Our infrastructure is behaving like cattle. That should not be the case and the hypervisor should be the source of truth. This is a serious issue and instances in production can and will go down.
-- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev For the HA case #1, the periodic task checks to see if the instance.host doesn't match the compute service host [1] and skips if they don't match. Shouldn't your HA scenario be updating which host the instance is running on? Or is this a vCenter-ism? [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n5871 -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe:
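The skip conditions Matt points to can be sketched roughly as follows — heavily simplified from the power-state sync logic in the linked manager.py; this is an illustration, not the real code:

```python
def should_sync(instance_host, this_host, task_state=None):
    """Decide whether the periodic power-state sync should act on an
    instance (simplified sketch of the checks described above)."""
    if instance_host != this_host:
        # The instance no longer belongs to this compute host (e.g. it was
        # restarted elsewhere by HA); skip it.
        return False
    if task_state is not None:
        # An operation such as a nova-api-driven reboot is in flight; the
        # periodic task ignores the instance.
        return False
    return True

print(should_sync("node-1", "node-1"))
print(should_sync("node-2", "node-1"))
print(should_sync("node-1", "node-1", task_state="rebooting"))
```

This is why a reboot issued through nova-api is safe (task_state is set), while a reboot done inside the guest is invisible to these checks.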
Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account
Looks good to me. Thanks! Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 9:07 PM To: Asselin, Ramy ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: I already fixed this important problem. Thanks. Does our CI system have any remaining misconfiguration or problems? Console log: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html CI Review result: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/logs/ Many thanks. From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 10:54 AM To: Rick Chen rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Other than that, everything looks fine to me. But that is important to fix. Thanks, Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 6:46 PM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account HI Ramy: We use the apache proxy pass option to redirect the public link to my internal CI server. Maybe I missed some configuration? I will try to find a solution for it. But it should not affect my OpenStack third-party CI system. Is our CI system ready to have the account re-enabled?
From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 9:14 AM To: Rick Chen rick.c...@prophetstor.commailto:rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Rick, It's strange, I can navigate using the link you provided, but not via the parent Directory link. This is what it links to, which is missing the prophetstor_ci portion: http://download.prophetstor.com/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 5:59 PM To: Asselin, Ramy ramy.asse...@hp.commailto:ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account HI Ramy: My console file is console.html as below: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Monday, August 24, 2015 11:03 PM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.commailto:rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Great. Somehow you lost your console.log file. Or did I miss it? 
Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 2:00 AM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account HI Ramy: I completed changing the zuul.conf zuul_url to my zuul server zuul.rjenkins.prophetstor.com. 2015-08-24 16:21:48.349 | + git_fetch_at_ref openstack/cinder refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b 2015-08-24 16:21:48.350 | + local project=openstack/cinder 2015-08-24 16:21:48.351 | + local ref=refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b 2015-08-24 16:21:48.352 | + '[' refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b '!=' '' ']' 2015-08-24 16:21:48.353 | + git fetch http://zuul.rjenkins.prophetstor.com/p/openstack/cinder refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b 2015-08-24 16:21:49.264 | From http://zuul.rjenkins.prophetstor.com/p/openstack/cinder 2015-08-24 16:21:49.265 | * branch refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b -> FETCH_HEAD ProphetStor CI review result: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Saturday, August 22, 2015 6:40 AM To: Rick Chen rick.c...@prophetstor.com; OpenStack Development Mailing List (not for usage questions)
[openstack-dev] [Glance] upcoming glanceclient release
Hi, We are planning to cut a client release this Thursday by 1500 UTC or so. If there are any reviews that you absolutely need and that are unlikely to break the client in the near future, please ping me (nikhil_k) or jokke_ on IRC #openstack-glance. This will most likely be our final client release for Liberty. -- Thanks, Nikhil __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [hacking] [style] multi-line imports PEP 0328
On Tue, Aug 25, 2015 at 8:45 AM, Kevin L. Mitchell kevin.mitch...@rackspace.com wrote: On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote: So, I know that hacking has H301 (one import per line) - but say maybe you wanted to import *more* than one thing on a line (there are some exceptions, right? sqlalchemy migrations or something?) There's never a need to import more than one thing per line given the rule to only import modules, not objects. While that is not currently enforced by hacking, it is a strong style guideline. (Exceptions for things like sqlalchemy do exist, of course.) Thank you for echoing my premise - H301 exists, but there are exceptions, so... On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote: Anyway - I'm sure there could be a pep8 plugin rule that enforces use of parentheses for multi-line imports instead of backslash line breaks [1] - but would that be something that hacking would want to carry (since *most* of the time H301 would kick in first?) - or if not, is there a way to plug it into pep8 outside of hacking without having to install some random one-off extension for this one rule separately? -Clay __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
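For reference, the PEP 328 style Clay is describing — parentheses instead of backslash continuations (stdlib names used purely as stand-ins):

```python
# Discouraged: backslash line continuation
# from os import path, sep, \
#     linesep

# PEP 328: parentheses let the import list span lines cleanly
from os import (path,
                sep,
                linesep)

print(sep in ("/", "\\"))
```

A checker for this would flag any import line ending in a backslash and suggest the parenthesized form instead.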
Re: [openstack-dev] [release] Liberty release branches / [non] capping process
Clark Boylan wrote: On Tue, Aug 25, 2015, at 05:32 AM, Thierry Carrez wrote: * what did we exactly mean by master-stable cross-check ? For completeness, during the transition period (when we use master requirements for both master and stable/liberty branches) we need a job to gate proposed master requirements changes on stable/liberty test jobs in addition to master test jobs. Otherwise we may introduce a change in master requirements that breaks stable code branches. Once we have stable/liberty requirements branched out, we don't need that job anymore. This however will need to be added to JJB and zuul. If someone can give me a rough idea of what jobs need to be run in this way (eg what is sufficient) I can go ahead and work on getting a change to do that pushed. Note that this would end up being a special case as we haven't tested changes to master of projects without stable branches against the stable branches of projects with the new stable branches in place. This is one reason I have been a proponent for branching stable early across the board to ensure stable works and master works and we don't have to worry about syncing the two. So currently, master requirements changes trigger the following jobs: gate-tempest-dsvm-full gate-tempest-dsvm-postgres-full gate-tempest-dsvm-neutron-full gate-grenade-dsvm gate-tempest-dsvm-large-ops gate-tempest-dsvm-neutron-large-ops If I understood the cross-check idea right, the idea would be to also run the stable/liberty equivalents of those jobs: gate-tempest-dsvm-full-liberty gate-tempest-dsvm-postgres-full-liberty gate-tempest-dsvm-neutron-full-liberty gate-grenade-dsvm-liberty gate-tempest-dsvm-large-ops-liberty gate-tempest-dsvm-neutron-large-ops-liberty The goal being to protect stable branches from breakage due to requirements updates while they still use the master branch of openstack/requirements. If some of those are problematic I guess we could settle for fewer jobs.
-- Thierry Carrez (ttx) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
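If it helps frame the discussion, the cross-check might look something like the following in a Zuul (v2-era) layout — a hedged sketch only, with an abbreviated job list; the actual project-config change would define the authoritative wiring:

```yaml
projects:
  - name: openstack/requirements
    check:
      # existing master jobs (abbreviated)
      - gate-tempest-dsvm-full
      - gate-grenade-dsvm
      # temporary stable/liberty cross-check jobs, removed once
      # stable/liberty requirements is branched
      - gate-tempest-dsvm-full-liberty
      - gate-grenade-dsvm-liberty
```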
[openstack-dev] [neutron][lbaas] L7 - Tasks
Hello, I would like to know if there is a plan for L7 extension work for Liberty. There is an extension patch-set here: https://review.openstack.org/#/c/148232/ We will also need to do CLI work, which I have started and will commit an initial patch-set for soon. A reference implementation was started by Stephen here: https://review.openstack.org/#/c/204957/ and the tempest tests update should be done as well. I do not know if it was discussed at IRC meetings. Please share your thoughts about it. Regards, Evg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints
Hi Dolph, You are right, it's the same issue. Hans and I work together on this prototype; Hans first found that the token validation is not routed to the local Keystone, and I tried to find out why in the source code. The patch in the bug description could be a reference for the correction. (I am not sure how many side effects the patch has.) Best Regards Chaoyi Huang ( Joe Huang ) From: Dolph Mathews [mailto:dolph.math...@gmail.com] Sent: Tuesday, August 25, 2015 7:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints On Thu, Aug 20, 2015 at 7:40 AM, Hans Feldt hans.fe...@ericsson.com wrote: How do you configure/use keystonemiddleware for a specific identity endpoint among several? In an OPNFV multi-region prototype I have keystone endpoints per region. I would like keystonemiddleware (in the context of glance-api) to use the local keystone for performing user token validation. Instead keystonemiddleware seems to use the first listed keystone endpoint in the service catalog (which could be wrong/non-optimal in most regions). I found this closed, related bug: https://bugs.launchpad.net/python-keystoneclient/+bug/1147530 This (brand new) bug report appears to describe the same issue: https://bugs.launchpad.net/keystonemiddleware/+bug/1488347 Thanks, Hans __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [api] [docs] Generating API samples
Hi Ken'ichi, Nice idea. On Wed, Aug 26, 2015 at 11:36 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote: Hi Anne, Nova API guys, I'd like to explain how to generate API docs from the Tempest log. In Tempest, each API test now contains a UUID for test identification. My current idea is to parse the Tempest log, create a patch based on the parsing, and post the patch to each project repo like: 1. Each project like Nova * Contains the combinations of generic API name (create a server, etc.) and Tempest test UUID 2. Tempest * Outputs test UUIDs to the Tempest log for each API operation * Contains **Script A** to parse the Tempest log for URL, headers, request body, response body and normal status code for a specified test UUID * Contains **Script B** to get the above combinations from each project repo via the external plugin interfaces[1], run **Script A** with each test UUID, and create API samples for each generic API name. Projects out of Tempest's scope can provide test UUIDs from their own Tempest-like tests, and the doc generation can be served for them via the Tempest external plugin design. 3. Bot in openstack-infra * Creates an API sample patch for each project and posts it if the sample doesn't exist in the repo. Even after this is implemented, the descriptions of request parameters are still not covered in the API docs. But each project will be able to get them in its own way, e.g. Pecan/WSME docstrings, JSON-Schema, or by hand. Or we can get each parameter docstring from the project via the external plugin in step 2, along with the generic API name, on the assumption that each project has a docstring for each API request parameter. Just wondering how we will do that for all microversions, as Tempest will not have tests for all microversions. Any thoughts?
Thanks Ken Ohmichi --- [1]: http://specs.openstack.org/openstack/qa-specs/specs/tempest-external-plugin-interface.html 2015-08-26 10:36 GMT+09:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com: Hi Anne, 2015-08-25 0:51 GMT+09:00 Anne Gentle annegen...@justwriteclick.com: Hi all, I'm writing to find out how teams keep API sample requests and responses up-to-date. I know the nova team has a sample generator [1] that they've maintained for a few years now. Do other teams have something similar? If so, is your approach like the nova one? We had a weekly IRC meeting of the Nova API team yesterday (today for some guys), and we discussed how to generate/maintain API docs in the long term. After the discussion, I have an idea. How about generating API sample files from Tempest log files? Now Tempest is writing most necessary parts of API docs (URL, headers, request body, response body, HTTP status code) to its own log file like: http://logs.openstack.org/88/207688/3/check/gate-tempest-dsvm-full/d7a79d1/logs/tempest.txt.gz#_2015-08-10_13_20_36_982 2015-08-10 13:20:36.982 [..] 202 POST http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers 0.920s 2015-08-10 13:20:36.983 [..]
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'} Body: {server: {name: tempest.common.compute-instance-607936499, networks: [{uuid: e63068c6-99d5-41f5-804d-ccb812bfeb51}], imageRef: d4159c59-cbfb-43f1-94de-3552d1f2871e, flavorRef: 42}} Response - Headers: {'location': 'http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c', 'content-type': 'application/json', 'date': 'Mon, 10 Aug 2015 13:20:36 GMT', 'x-compute-request-id': 'req-0fa22034-c1d5-41b2-bfb9-6de533733290', 'connection': 'close', 'status': '202', 'content-length': '434'} Body: {server: {security_groups: [{name: default}], OS-DCF:diskConfig: MANUAL, id: 19f98a6f-26d2-4491-93a8-8e894f19034c, links: [{href: http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: self}, {href: http://127.0.0.1:8774/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: bookmark}], adminPass: 2iEDo2EP5wRM}} _log_request_full /opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:411 I feel it is difficult to implement the similar sample test way of Nova on each project. The above Tempest log is written on tempest-lib side and that is common way between projects. So we can use this way for all projects as a common/consistent way, I imagine now. I will make/write the detail of this idea later. Thanks Ken Ohmichi --- 1. https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/api_sample_base.py -- Anne Gentle Rackspace Principal Engineer www.justwriteclick.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
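The "Script A" step described in the thread could be prototyped as a small log parser. This is only a sketch under assumptions: the entry layout mirrors a trimmed version of the `_log_request_full` lines quoted above, and the function name and log sample are hypothetical, not actual Tempest code.

```python
import re

# Trimmed, illustrative version of a _log_request_full entry as quoted
# in the thread (not a verbatim Tempest log line).
SAMPLE = """202 POST http://127.0.0.1:8774/v2/c2ab3e6a/servers 0.920s
Request - Headers: {'Content-Type': 'application/json'}
    Body: {"server": {"name": "demo", "flavorRef": "42"}}
Response - Headers: {'status': '202'}
    Body: {"server": {"id": "19f98a6f"}}
"""

def parse_entry(log_text):
    """Recover method, URL, status code and bodies from one logged API call."""
    lines = log_text.strip().splitlines()
    # First line: "<status> <method> <url> <duration>"
    status, method, url = lines[0].split()[:3]
    entry = {"status": int(status), "method": method, "url": url,
             "request_body": None, "response_body": None}
    section = None
    for line in lines[1:]:
        stripped = line.strip()
        # Track whether we are inside the Request or Response part.
        if stripped.startswith("Request"):
            section = "request_body"
        elif stripped.startswith("Response"):
            section = "response_body"
        m = re.search(r"Body: (.*)", stripped)
        if m and section:
            entry[section] = m.group(1)
    return entry

entry = parse_entry(SAMPLE)
```

A real Script A would additionally locate the entry for a given test UUID inside a full tempest.txt log before parsing it.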
Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account
I suggest you ask Mike Perez on IRC: thingee in #openstack-cinder. I believe it's his decision since he's the cinder PTL. He'll then notify the infra team to request that your account be re-enabled. Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Tuesday, August 25, 2015 6:01 PM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: Now, if all is fine, can you tell me how I can re-enable my CI gerrit account? From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Wednesday, August 26, 2015 12:43 AM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Looks good to me. Thanks! Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 9:07 PM To: Asselin, Ramy ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: I already fixed this important problem. Thanks. Does our CI system have any missing configuration or problems? Console log: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html CI Review result: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/logs/ Many thanks.
From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 10:54 AM To: Rick Chen rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Other than that, everything looks fine to me. But that is important to fix. Thanks, Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 6:46 PM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: We use the apache proxy pass option to redirect the public link to my internal CI server. Maybe I missed some configuration? I will try to find a solution for it, but it should not affect my OpenStack third-party CI system. Is our CI system ready to have its account re-enabled? From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 9:14 AM To: Rick Chen rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Rick, It's strange, I can navigate using the link you provided, but not via the parent Directory link.
This is what it links to, which is missing the prophetstor_ci portion: http://download.prophetstor.com/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 5:59 PM To: Asselin, Ramy ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: My console file is console.html, as below: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Monday, August 24, 2015 11:03 PM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Great. Somehow you lost your console.log file. Or did I miss it? Ramy From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 2:00 AM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Hi Ramy: I completed changing the zuul.conf zuul_url to point to my zuul server zuul.rjenkins.prophetstor.com. 2015-08-24 16:21:48.349 | + git_fetch_at_ref openstack/cinder refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b 2015-08-24
Re: [openstack-dev] [oslo] Help with stable/juno branches / releases
On Mon, Aug 24, 2015 at 08:03:38AM -0400, Davanum Srinivas wrote: Tony, +1 to open Bugs and Reviews. I'll help move things along. Hi Dims, I've created: https://bugs.launchpad.net/oslo.messaging/+bug/1488737 https://bugs.launchpad.net/oslo.utils/+bug/1488746 https://bugs.launchpad.net/oslotest/+bug/1488752 Each with a patch and instructions to fix the issues. I've also created: https://review.openstack.org/#/c/216955/ (and a dependent change) to use the correct assert call in the oslo.i18n tests. Any help you can provide in getting these issues investigated and closed would be greatly appreciated. Tony.
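The exact oslo.i18n change isn't quoted in the thread, so as an illustration only: a very common instance of the "wrong assert call" class of cleanup is `assertTrue(expected, actual)`, which never compares the two values. The test class and names below are hypothetical.

```python
import io
import unittest

class TranslationTest(unittest.TestCase):
    """Illustrative only: the 'wrong assert' bug this kind of cleanup fixes."""

    def test_messages_match(self):
        expected = "hello"
        actual = "hello"
        # Buggy pattern (what such cleanups replace):
        #     self.assertTrue(expected, actual)
        # It passes even when the values differ, because the second
        # argument is only the failure message, not a comparison target.
        # Correct pattern, which actually compares the two values:
        self.assertEqual(expected, actual)

# Run the test case in-process so the example is self-checking.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TranslationTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```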
[openstack-dev] [ceilometer] jenkins job failures
Hi, Quick note to ceilometer folks. Many check and gate jobs for ceilometer are failing due to a WSME-related issue that is already addressed by [1], so please make sure your patch sets are rebased on the current master before executing 'recheck'. [1] https://review.openstack.org/#/c/208467/ Thanks, Ryota --- Ryota Mibu r-m...@cq.jp.nec.com NEC Corporation
[openstack-dev] FW: [cinder] [third-party] ProphetStor CI account
Hi Mike: Thanks to Ramy for helping us build up our CI environment; it is now ready for Cinder third-party CI testing. Can you help me re-enable my prophetstor-ci gerrit account so it can join the Cinder review testing? If our CI system is missing any condition or requirement for Cinder CI testing, please let me know. Many thanks. Rick From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Wednesday, August 26, 2015 12:43 AM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account Looks good to me. Thanks! Ramy
Re: [openstack-dev] [Containers] Magnum bay-create is getting stuck at CREATE_IN_PROGRESS
Hi Vikas, Please debug along these lines: - are you able to launch an instance based on the fedora-21-atomic-3 image in horizon? If not, check whether the image download was fine (by comparing the size) - get the nova list output (you should see kube_master and kube-minion up and running) - log into the console of these 2 instances from horizon and check whether they booted up fine and are at the login prompt (I have seen an issue where it got stuck at some point during boot and I had to reload the instance) - after this, cloud-init should start and complete - get the output of heat stack-list -n Thanks, Ganesh From: Adrian Otto adrian.o...@rackspace.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, 26 August 2015 10:51 am To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Containers] Magnum bay-create is getting stuck at CREATE_IN_PROGRESS Vikas Choudhary, Try heat event-show to get more information about what's happening in the creation of the resource group for the k8s master. You might not have enough storage free to create the nova VM to run the bay master.
Regards, Adrian On Aug 25, 2015, at 9:52 PM, Vikas Choudhary choudharyvika...@gmail.commailto:choudharyvika...@gmail.com wrote: I am following https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-quickstart.rst#using-kubernetes to try containers/magnum.After running magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 , it keeps showing : root@PRINHYLTPHP0400:/home/devstack/devstack# magnum bay-list +--+++--++ | uuid | name | node_count | master_count | status | +--+++--++ | e121254f-8bca-497b-9bd9-e9f37305592e | k8sbay | 1 | 1 | CREATE_IN_PROGRESS | root@PRINHYLTPHP0400:/home/devstack/devstack# heat resource-list ${BAY_HEAT_NAME} +---+-+--++-+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +---+-+--++-+ | api_pool | 23694171-b787-4c15-9188-3c693e3702c8 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | | api_pool_floating | 6410a233-03fe-451e-9b52-cd9fac1fcf31 | OS::Neutron::FloatingIP | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_monitor | 2a5c325c-7f24-4c82-a252-64211b67d195 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_pool | 4578db12-5a88-4b1e-afe2-4f7f90bcbad1 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | *| kube_masters | 8ba9d12f-3567-4d54-969c-99077818ffa3 | OS::Heat::ResourceGroup |CREATE_IN_PROGRESS | 2015-08-26T04:09:00 |* | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2015-08-26T04:09:00 | | api_monitor | 09a569ca-334a-4576-91e2-fa98a30f3e50 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter | f7fb19db-2be7-4624-bf20-4867e1d7572c | OS::Neutron::Router | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter_inside | f7fb19db-2be7-4624-bf20-4867e1d7572c:subnet_id=fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::RouterInterface | CREATE_COMPLETE | 2015-08-26T04:09:01 | | fixed_network | 15748ee9-af6e-445a-a3f2-175a866c51a9 | OS::Neutron::Net | CREATE_COMPLETE | 
2015-08-26T04:09:01 | | fixed_subnet | fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::Subnet | CREATE_COMPLETE | 2015-08-26T04:09:01 | Kube-master gets stuck at create_in_progress. In magnum-con.log or heat-eng.log, I could not find any error messages. Can anybody please suggest how to debug this issue, any pointers?
Re: [openstack-dev] [api] [docs] Generating API samples
Hi Anne, Nova API guys, I'd like to explain how to generate API docs from the Tempest log. In Tempest, each API test now contains a UUID for test identification. My current idea is to parse the Tempest log, create a patch based on the parsing, and post the patch to each project repo, like: 1. Each project (like Nova): contains the combinations of a generic API name (create a server, etc.) and the corresponding Tempest test UUID 2. Tempest * Outputs test UUIDs to the Tempest log for each API operation * Contains **Script A** to parse the Tempest log and get the URL, headers, request body, response body, and normal status code for a specified test UUID * Contains **Script B** to get the above combinations from each project repo via the external plugin interfaces[1], run **Script A** with each test UUID, and create API samples for each generic API name. 3. Bot of openstack-infra * Creates an API sample patch for each project and posts it if the sample doesn't exist in the repo. Even after this is implemented, the descriptions of request parameters are not covered in the API docs, but each project will be able to get them in its own way, like Pecan/WSME docstrings, JSON-Schema, or by hand. Any thoughts? Thanks Ken Ohmichi --- [1]: http://specs.openstack.org/openstack/qa-specs/specs/tempest-external-plugin-interface.html 2015-08-26 10:36 GMT+09:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com: Hi Anne, 2015-08-25 0:51 GMT+09:00 Anne Gentle annegen...@justwriteclick.com: Hi all, I'm writing to find out how teams keep API sample requests and responses up to date. I know the nova team has a sample generator [1] that they've maintained for a few years now. Do other teams have something similar? If so, is your approach like the nova one? We had a weekly IRC meeting of Nova API yesterday (today for some of you), and we discussed how to generate/maintain API docs in the long term. After the discussion, I have an idea. How about generating API sample files from Tempest log files?
Now Tempest is writing most necessary parts of API docs((URL, headers, request body, response body, http status code) to its own log file like: http://logs.openstack.org/88/207688/3/check/gate-tempest-dsvm-full/d7a79d1/logs/tempest.txt.gz#_2015-08-10_13_20_36_982 2015-08-10 13:20:36.982 [..] 202 POST http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers 0.920s 2015-08-10 13:20:36.983 [..] Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'} Body: {server: {name: tempest.common.compute-instance-607936499, networks: [{uuid: e63068c6-99d5-41f5-804d-ccb812bfeb51}], imageRef: d4159c59-cbfb-43f1-94de-3552d1f2871e, flavorRef: 42}} Response - Headers: {'location': 'http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c', 'content-type': 'application/json', 'date': 'Mon, 10 Aug 2015 13:20:36 GMT', 'x-compute-request-id': 'req-0fa22034-c1d5-41b2-bfb9-6de533733290', 'connection': 'close', 'status': '202', 'content-length': '434'} Body: {server: {security_groups: [{name: default}], OS-DCF:diskConfig: MANUAL, id: 19f98a6f-26d2-4491-93a8-8e894f19034c, links: [{href: http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: self}, {href: http://127.0.0.1:8774/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: bookmark}], adminPass: 2iEDo2EP5wRM}} _log_request_full /opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:411 I feel it is difficult to implement the similar sample test way of Nova on each project. The above Tempest log is written on tempest-lib side and that is common way between projects. So we can use this way for all projects as a common/consistent way, I imagine now. I will make/write the detail of this idea later. Thanks Ken Ohmichi --- 1. 
https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/api_sample_base.py -- Anne Gentle Rackspace Principal Engineer www.justwriteclick.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
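As a rough illustration of the output end of such a pipeline: once Script A has parsed an entry, a generator could write request/response sample files per generic API name, in the spirit of Nova's api_samples tree. This is a sketch only; the `write_api_samples` helper, the file naming, and the sample data are assumptions, not actual Nova conventions.

```python
import json
import tempfile
from pathlib import Path

def write_api_samples(api_name, entry, out_dir):
    """Write <name>-req.json / <name>-resp.json sample files for one API.

    The naming scheme here is illustrative, loosely modeled on Nova's
    doc/api_samples layout rather than copied from it.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    req_path = out / ("%s-req.json" % api_name)
    resp_path = out / ("%s-resp.json" % api_name)
    req_path.write_text(json.dumps(entry["request_body"], indent=4, sort_keys=True))
    resp_path.write_text(json.dumps(entry["response_body"], indent=4, sort_keys=True))
    return req_path, resp_path

# Hypothetical parsed entry, trimmed from the log excerpt in the thread.
entry = {
    "request_body": {"server": {"name": "demo", "flavorRef": "42"}},
    "response_body": {"server": {"id": "19f98a6f"}},
}
with tempfile.TemporaryDirectory() as tmp:
    req_path, resp_path = write_api_samples("server-create", entry, tmp)
    resp_sample = json.loads(resp_path.read_text())
```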
Re: [openstack-dev] [api] [docs] Generating API samples
2015-08-26 12:15 GMT+09:00 GHANSHYAM MANN ghanshyamm...@gmail.com: Hi Ken'ichi, Nice idea. On Wed, Aug 26, 2015 at 11:36 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote: Hi Anne, Nova API guys, I'd like to explain how to generate API docs from Tempest log. In Tempest, each API test contains UUID for the test identification now. Current my idea is to parse tempest log, create a patch based on the parsing and post a patch to each project repo like: 1. Each project like Nova Contain the combinations of both generic API name(create a server, etc.) and test UUID of Tempest 2. Tempest * Output test UUIDs to Tempest log for each API operation * Contain **Script A** to parse Tempest log for getting URL, headers, request body, response body and normal status code based on the specified test UUID * Contain **Script B** to get the above combinations from each project repo via external plugin interfaces[1], run **Script A** with each test UUID and create API samples for each generic API name. And for Project out of Tempest scope can provide tests UUID from their Tempest like tests. And doc thing can be served for them via Tempest external plugin design. yeah, +1 for enabling this way for outside-Tempest projects also. 3. Bot of openstack-infra * Create an API sample patch for each project and post it if the sample doesn't exist in the repo. Even after this way is implemented, the descriptions of request parameters are not covered on api docs: But each project will be able to get them with each project own way like Pecan/WSME docstring, JSON-Schema or by hands. or we can get each param doc string from project via external plugin in step 2 along with API generic name with assumption that each project should have doc string for each API request param. nice idea, we can make a rule where the script for extracting request params explanation and use it on step2 :) Just wondering how we will do that for all microversions as Tempest would not be having tests for all microversions. 
Oh, nice point. I feel it is good to avoid duplicated sample files in Nova if the APIs are not different between microversions. We need to implement microversion tests on tempest before anyway. Thanks Ken Ohmichi --- --- [1]: http://specs.openstack.org/openstack/qa-specs/specs/tempest-external-plugin-interface.html 2015-08-26 10:36 GMT+09:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com: Hi Anne, 2015-08-25 0:51 GMT+09:00 Anne Gentle annegen...@justwriteclick.com: Hi all, I'm writing to find out how teams keep API sample requests and responses up-to-date. I know the nova team has a sample generator [1] that they've maintained for a few years now. Do other teams have something similar? If so, is your approach like the nova one? We had a weekly IRC meeting of Nova API yesterday(today for some guys), and we discussed how to generate/maintain API docs in long term. After the discussion, I have an idea. How about generating API sample files from Tempest log files? Now Tempest is writing most necessary parts of API docs((URL, headers, request body, response body, http status code) to its own log file like: http://logs.openstack.org/88/207688/3/check/gate-tempest-dsvm-full/d7a79d1/logs/tempest.txt.gz#_2015-08-10_13_20_36_982 2015-08-10 13:20:36.982 [..] 202 POST http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers 0.920s 2015-08-10 13:20:36.983 [..] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'} Body: {server: {name: tempest.common.compute-instance-607936499, networks: [{uuid: e63068c6-99d5-41f5-804d-ccb812bfeb51}], imageRef: d4159c59-cbfb-43f1-94de-3552d1f2871e, flavorRef: 42}} Response - Headers: {'location': 'http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c', 'content-type': 'application/json', 'date': 'Mon, 10 Aug 2015 13:20:36 GMT', 'x-compute-request-id': 'req-0fa22034-c1d5-41b2-bfb9-6de533733290', 'connection': 'close', 'status': '202', 'content-length': '434'} Body: {server: {security_groups: [{name: default}], OS-DCF:diskConfig: MANUAL, id: 19f98a6f-26d2-4491-93a8-8e894f19034c, links: [{href: http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: self}, {href: http://127.0.0.1:8774/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c;, rel: bookmark}], adminPass: 2iEDo2EP5wRM}} _log_request_full /opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:411 I feel it is difficult to implement the similar sample test way of Nova on each project. The above Tempest log is written on tempest-lib side and that is common way between projects. So we can use this way for all projects as a common/consistent way, I imagine now. I will make/write the detail of this idea
[openstack-dev] [Containers] Magnum bay-create is getting stuck at CREATE_IN_PROGRESS
I am following https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-quickstart.rst#using-kubernetes to try containers/magnum.After running magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 , it keeps showing : root@PRINHYLTPHP0400:/home/devstack/devstack# magnum bay-list +--+++--++ | uuid | name | node_count | master_count | status | +--+++--++ | e121254f-8bca-497b-9bd9-e9f37305592e | k8sbay | 1 | 1 | CREATE_IN_PROGRESS | root@PRINHYLTPHP0400:/home/devstack/devstack# heat resource-list ${BAY_HEAT_NAME} +---+-+--++-+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +---+-+--++-+ | api_pool | 23694171-b787-4c15-9188-3c693e3702c8 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | | api_pool_floating | 6410a233-03fe-451e-9b52-cd9fac1fcf31 | OS::Neutron::FloatingIP | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_monitor | 2a5c325c-7f24-4c82-a252-64211b67d195 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_pool | 4578db12-5a88-4b1e-afe2-4f7f90bcbad1 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | **| kube_masters | 8ba9d12f-3567-4d54-969c-99077818ffa3 | OS::Heat::ResourceGroup |CREATE_IN_PROGRESS | 2015-08-26T04:09:00 |** | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2015-08-26T04:09:00 | | api_monitor | 09a569ca-334a-4576-91e2-fa98a30f3e50 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter | f7fb19db-2be7-4624-bf20-4867e1d7572c | OS::Neutron::Router | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter_inside | f7fb19db-2be7-4624-bf20-4867e1d7572c:subnet_id=fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::RouterInterface | CREATE_COMPLETE | 2015-08-26T04:09:01 | | fixed_network | 15748ee9-af6e-445a-a3f2-175a866c51a9 | OS::Neutron::Net | CREATE_COMPLETE | 2015-08-26T04:09:01 | | fixed_subnet | fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::Subnet | CREATE_COMPLETE | 2015-08-26T04:09:01 | 
-- Kube-master gets stuck at create_in_progress. In magnum-con.log or heat-eng.log, I could not find any error messages. Can anybody please suggest how to debug this issue, any pointers?
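When reading `heat resource-list` output like the above, the first step is usually to pick out every resource that is not yet CREATE_COMPLETE. A small sketch of that scan, run over a trimmed version of the quoted table (the table sample and function name are illustrative only):

```python
# Trimmed version of the `heat resource-list` output quoted above.
TABLE = """
| api_pool     | 23694171 | OS::Neutron::Pool       | CREATE_COMPLETE    | 2015-08-26T04:09:00 |
| kube_masters | 8ba9d12f | OS::Heat::ResourceGroup | CREATE_IN_PROGRESS | 2015-08-26T04:09:00 |
| kube_minions |          | OS::Heat::ResourceGroup | INIT_COMPLETE      | 2015-08-26T04:09:00 |
"""

def incomplete_resources(table):
    """Return (name, status) for every resource not yet CREATE_COMPLETE."""
    stuck = []
    for line in table.strip().splitlines():
        # Split each "| a | b | c | d | e |" row into cells.
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) >= 4:
            name, status = cells[0], cells[3]
            if status != "CREATE_COMPLETE":
                stuck.append((name, status))
    return stuck

stuck = incomplete_resources(TABLE)
```

Here the scan points at `kube_masters` (stuck in progress) and `kube_minions` (not yet started), which is exactly where debugging with `heat event-show`/nova would continue.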
Re: [openstack-dev] [Containers] Magnum bay-create is getting stuck at CREATE_IN_PROGRESS
Vikas Choudhary, Try heat event-show to get more information about what's happening in the creation of the resource group for the k8s master. You might not have enough storage free to create the nova VM to run the bay master. Regards, Adrian On Aug 25, 2015, at 9:52 PM, Vikas Choudhary choudharyvika...@gmail.commailto:choudharyvika...@gmail.com wrote: I am following https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-quickstart.rst#using-kubernetes to try containers/magnum.After running magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1 , it keeps showing : root@PRINHYLTPHP0400:/home/devstack/devstack# magnum bay-list +--+++--++ | uuid | name | node_count | master_count | status | +--+++--++ | e121254f-8bca-497b-9bd9-e9f37305592e | k8sbay | 1 | 1 | CREATE_IN_PROGRESS | root@PRINHYLTPHP0400:/home/devstack/devstack# heat resource-list ${BAY_HEAT_NAME} +---+-+--++-+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +---+-+--++-+ | api_pool | 23694171-b787-4c15-9188-3c693e3702c8 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | | api_pool_floating | 6410a233-03fe-451e-9b52-cd9fac1fcf31 | OS::Neutron::FloatingIP | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_monitor | 2a5c325c-7f24-4c82-a252-64211b67d195 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:00 | | etcd_pool | 4578db12-5a88-4b1e-afe2-4f7f90bcbad1 | OS::Neutron::Pool | CREATE_COMPLETE | 2015-08-26T04:09:00 | *| kube_masters | 8ba9d12f-3567-4d54-969c-99077818ffa3 | OS::Heat::ResourceGroup |CREATE_IN_PROGRESS | 2015-08-26T04:09:00 |* | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2015-08-26T04:09:00 | | api_monitor | 09a569ca-334a-4576-91e2-fa98a30f3e50 | OS::Neutron::HealthMonitor | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter | f7fb19db-2be7-4624-bf20-4867e1d7572c | OS::Neutron::Router | CREATE_COMPLETE | 2015-08-26T04:09:01 | | extrouter_inside | 
f7fb19db-2be7-4624-bf20-4867e1d7572c:subnet_id=fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::RouterInterface | CREATE_COMPLETE | 2015-08-26T04:09:01 | | fixed_network | 15748ee9-af6e-445a-a3f2-175a866c51a9 | OS::Neutron::Net | CREATE_COMPLETE | 2015-08-26T04:09:01 | | fixed_subnet | fdc539a0-9ce7-4faa-a6e1-bddd83d99df9 | OS::Neutron::Subnet | CREATE_COMPLETE | 2015-08-26T04:09:01 | Kube-master gets stuck at create_in_progress.In magnum-con.log or heat-eng.log , i could not find any error messages.Can anybody please suggest how to debug this issue, any pointers? __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints
- Original Message - From: Hans Feldt hans.fe...@ericsson.com To: openstack-dev@lists.openstack.org Sent: Thursday, August 20, 2015 10:40:28 PM Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints

How do you configure/use keystonemiddleware for a specific identity endpoint among several? In an OPNFV multi-region prototype I have keystone endpoints per region. I would like keystonemiddleware (in the context of glance-api) to use the local keystone for performing user token validation. Instead, keystonemiddleware seems to use the first listed keystone endpoint in the service catalog (which could be wrong/non-optimal in most regions). I found this closed, related bug: https://bugs.launchpad.net/python-keystoneclient/+bug/1147530

Hey, There are two points to this. * If you are using an auth plugin then you're right, it will just pick the first endpoint. You can look at project-specific endpoints [1] so that there is only one keystone endpoint returned for the services project. I've also just added a review for this feature [2]. * If you're not using an auth plugin (so the admin_X options) then keystonemiddleware will always use the endpoint that is configured in the options (identity_uri). Hope that helps, Jamie

[1] https://github.com/openstack/keystone-specs/blob/master/specs/juno/endpoint-group-filter.rst
[2] https://review.openstack.org/#/c/216579

Thanks, Hans

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
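The second bullet above — pointing each region's services at their local keystone via the legacy options — can be sketched as a configuration fragment. This is a hedged illustration only: the hostnames, ports, and credential values are placeholders, not taken from the thread.

```ini
; Hypothetical glance-api.conf fragment; all values are placeholders.
; With the legacy admin_* options (no auth plugin), keystonemiddleware
; validates tokens against identity_uri directly, so each region can be
; configured to use its local keystone instead of the first catalog entry.
[keystone_authtoken]
identity_uri = http://keystone.region-one.example.com:35357
admin_user = glance
admin_password = secret
admin_tenant_name = service
```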
Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool
Hi, On Tue, Aug 25, 2015 at 2:43 AM, Anita Kuno ante...@anteaya.info wrote: On 08/24/2015 07:59 PM, Skyler Berg wrote: Hi all, I am pleased to announce CI Watch [1], a CI monitoring tool developed at Tintri. For each OpenStack project with third-party CIs, CI Watch shows the status of all CI systems for all recent patch sets on a single dashboard.

That's great! I like it a lot. I will watch this from time to time for sure.

Any feedback would be appreciated. We plan to open source this project soon and welcome contributions from anyone interested. For the moment, any bugs, concerns, or ideas can be sent to openstack-...@tintri.com.

Some suggestions: a) For a given 3rd party CI, compute a % of patches it commented on (to see how available the CI is) b) For a given 3rd party CI, compute how often (in % maybe) it disagrees with Jenkins (to see how reliable the CI is, assuming Jenkins/The gate is reliable, *cough* :p) Jordan

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
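Jordan's two suggested metrics are easy to sketch. The snippet below is illustrative only: the data shape (one dict of CI-name to vote per patch set) and all names are hypothetical, not CI Watch's actual schema.

```python
# Sketch of the two metrics suggested above, assuming each patch set maps
# a CI system name to its vote ("SUCCESS"/"FAILURE", absent if it never
# commented). Data shape and names are hypothetical, not CI Watch's schema.
def availability(patch_results, ci_name):
    """Percentage of patch sets the CI commented on at all."""
    voted = sum(1 for votes in patch_results if ci_name in votes)
    return 100.0 * voted / len(patch_results)

def disagreement_with_jenkins(patch_results, ci_name):
    """Percentage of co-voted patch sets where the CI disagrees with Jenkins."""
    both = [v for v in patch_results if ci_name in v and "Jenkins" in v]
    if not both:
        return 0.0
    differ = sum(1 for v in both if v[ci_name] != v["Jenkins"])
    return 100.0 * differ / len(both)

results = [
    {"Jenkins": "SUCCESS", "Tintri CI": "SUCCESS"},
    {"Jenkins": "SUCCESS", "Tintri CI": "FAILURE"},
    {"Jenkins": "FAILURE"},  # Tintri CI never voted on this patch set
    {"Jenkins": "SUCCESS", "Tintri CI": "SUCCESS"},
]
print(availability(results, "Tintri CI"))               # 75.0
print(disagreement_with_jenkins(results, "Tintri CI"))  # 33.33...
```

Restricting the disagreement metric to patch sets where both systems voted keeps it independent of the availability metric.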
Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool
-Original Message- From: Anita Kuno [mailto:ante...@anteaya.info] Sent: 25 August 2015 01:44 To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool

We have a number of people working on dashboards for ci systems. We are working on having infra host one: https://review.openstack.org/#/c/194437/ which is a tool currently hosted by one of our ci operators, Patrick East, which is open source.

For interest, the tool hosted by Patrick East has an easy-access URL which can be used in advance of Infra hosting, at http://zuul.openstack.xenproject.org/scoreboard/. As an example, http://zuul.openstack.xenproject.org/scoreboard/?project=openstack%2Fnova&user=&timeframe=24&start=&end=&page_size=150 shows all Nova passes/fails in the last 24 hour period. Bob

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][api] - attaching arbitrary key/value pairs to resources
Doug Wiegley wrote: In general, the fight in Neutron *has* to be about common definitions of networking primitives that can be potentially implemented by multiple backends whenever possible. That's the entire point of Neutron. I get that it's hard, but that's the value Neutron brings to the table.

I think that everyone agrees with you on this point. Including me. The tricky part comes when the speed of neutron adding to the api bottlenecks other things, or when the abstractions just aren't there yet, because the technology in question isn't mature enough. Do we provide relief valves, knowing they will be abused as much as help, or do we hold a hard line? These tags are a relief valve. I'm in favor of them, and I'm in favor of holding to the abstraction. It seems there has to be a middle ground. Thanks, doug

Just thinking out loud: probably trying to stem the tide, would it make sense to block api calls outside neutron core/api to grab such tags, with a big warning: if you try to circumvent this, you will harm interoperability of openstack, and your plugin will be blocked in the next neutron releases. They could go directly via SQL, but at least they'd know they're doing the wrong thing, and risking a plugin ban, if that's a reasonable measure from our side.

On Aug 24, 2015, at 4:01 PM, Kevin Benton blak...@gmail.com wrote: I don't think even worse code makes what's proposed here seem any better. I'm not really sure what you're saying.

I think he's saying that as a vendor he is looking for ways to expose things that aren't normally available and ends up doing terrible evil things to achieve it. :) And if the metadata tags were available, they would be the new delivery vehicle of choice since they are much better than monkey-patching. The way to do that is to have it defined explicitly by Neutron and not punt.
+1, but the concern was that having these data bags easily available will eliminate a lot of the incentive contributors had to work together to standardize what they were trying to do.

In general, the fight in Neutron *has* to be about common definitions of networking primitives that can be potentially implemented by multiple backends whenever possible. That's the entire point of Neutron. I get that it's hard, but that's the value Neutron brings to the table.

I think that everyone agrees with you on this point. It seems like Doug was just pointing out from the vendor perspective that it's very tempting to slap something together based on what is available, and now we will be providing a tool to make that route even easier. After thinking about it, an out-of-tree driver abusing tags is a much better place for us to be than monkey-patching code. At least with tags it's obvious that it's bolted-on metadata rather than entirely different API behavior monkey-patched in.

On Mon, Aug 24, 2015 at 2:43 PM, Russell Bryant rbry...@redhat.com wrote: On 08/24/2015 05:25 PM, Doug Wiegley wrote: I took advantage of it to prototype a feature here

That right there is the crux of the objections so far.

Don't get me wrong, I'd love this, and would abuse it within an inch of its life regularly. The alternative is sometimes even worse than a vendor extension or plugin. Take for example wanting to add a new load balancing algorithm, like LEAST_MURDERED_KITTENS. The current list is hard-coded all over the dang place, so you end up shipping neutron patches or monkey patches. Opaque pass-through to the driver is evil from an interoperability standpoint, but in terms of extending code at the operator's choosing, there are MUCH worse sins that are actively being committed.

I don't think even worse code makes what's proposed here seem any better. I'm not really sure what you're saying.
Flavors covers this use case, but in a way that's up to the operators to set up, and not as easy for devs to deal with. Whether the above sounds like it's a bonus or a massive reason not to do this will entirely lie in the eye of the beholder, but the vendor extension use case WILL BE USED, no matter what we say.

Interop really is a key part of this. If we look at this particular case, yes, I get that there are lots of LB algorithms out there and that it makes sense to expose that choice to users. However, I do think what's best for users is to define and document each of them very explicitly. The end user should know that if they choose algorithm BEST_LB_EVER, it means the same thing on cloud A vs cloud B. The way to do that is to have it defined explicitly by Neutron and not punt. Maybe in practice the Neutron-defined set is a subset of what's available overall, and the custom (vendor) ones can be clearly marked as such. In any case, I'm just trying to express what goal I think we should be striving for. In general, the fight in Neutron *has* to be about common definitions of networking primitives that can be potentially implemented by multiple backends whenever possible.
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Thanks everyone, proud to be on the team! Sam

On 25/08/2015 12:32, Lucas Alvares Gomes lucasago...@gmail.com wrote: Congrats! Well deserved

On Tue, Aug 25, 2015 at 12:24 PM, Yuiko Takada yuikotakada0...@gmail.com wrote: Sam, congrats and welcome! Yuiko Takada

On 2015/08/25 at 19:53, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome!

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][qa] libvirt + LXC CI - where's the beef?
I've started an etherpad to track the work needed to get an lxc job going in nova's experimental queue: https://etherpad.openstack.org/p/nova-lxc-ci

-- Thanks... we would really appreciate the LXC testing. This is great to see. While there is a lot of discussion around containers, an LXC driver is very interesting where we are looking at full machine containers. With HPC, this has a very low overhead but allows consistent accounting, quota and admin roles. Thanks, Matt Riedemann

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] periodic task
On 8/25/15, 9:10 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 8/25/2015 10:03 AM, Gary Kotton wrote: On 8/25/15, 7:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 8/24/2015 9:32 PM, Gary Kotton wrote: In item #2 below the reboot is done via the guest and not the nova APIs :)

From: Gary Kotton gkot...@vmware.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Monday, August 24, 2015 at 7:18 PM To: OpenStack List openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] periodic task

Hi, A couple of months ago I posted a patch for bug https://launchpad.net/bugs/1463688. The issue is as follows: the periodic task detects that the instance state does not match the state on the hypervisor and it shuts down the running VM. There are a number of ways that this may happen and I will try to explain:

1. VMware driver example: a host where the instances are running goes down. This could be a power outage, host failure, etc. The first iteration of the periodic task will determine that the actual instance is down. This will update the state of the instance to DOWN. The VC has the ability to do HA and it will start the instance up and running again. The next iteration of the periodic task will determine that the instance is up and the compute manager will stop the instance.

2. All drivers. The tenant decides to do a reboot of the instance and that coincides with the periodic task state validation. At this point in time the instance will not be up and the compute node will update the state of the instance as DOWN. Next iteration the states will differ and the instance will be shut down.

Basically the issue hit us with our CI and there was no CI running for a couple of hours due to the fact that the compute node decided to shut down the running instances.
The hypervisor should be the source of truth and it should not be the compute node that decides to shut down instances. I posted a patch to deal with this, https://review.openstack.org/#/c/190047/, which is the reason for this mail. The patch is backwards compatible so that existing deployments and the random shutdown continue to work as they do today, and the admin now has the ability just to do a log if there is an inconsistency. We do not want to disable the periodic task, as knowing the current state of the instance is very important and has a ton of value; we just do not want the periodic task to shut down a running instance. Thanks, Gary

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

In #2 the guest shouldn't be rebooted by the user (tenant) outside of the nova-api. I'm not sure if it's actually formally documented in the nova documentation, but from what I've always heard/known, nova is the control plane and you should be doing everything with your instances via the nova-api. If the user rebooted via nova-api, the task_state would be set and the periodic task would ignore the instance.

Matt, this is one case that I showed where the problem occurs. There are others and I can invest time to see them. The fact that the periodic task is there is important. What I don't understand is why having an option of a log indication for an admin is something that is not useful, and instead we are going with having the compute node shut down instances when this should not happen. Our infrastructure is behaving like cattle. That should not be the case and the hypervisor should be the source of truth. This is a serious issue and instances in production can and will go down.
-- Thanks, Matt Riedemann

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

For the HA case #1, the periodic task checks to see if the instance.host doesn't match the compute service host [1] and skips if they don't match. Shouldn't your HA scenario be updating which host the instance is running on? Or is this a vCenter-ism?

The nova compute node has not changed. It is not the compute node's host. The host that the instance was running on was down and those instances were moved.
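The behavior being debated in this thread can be sketched in a few lines: when the periodic sync finds that the hypervisor's power state disagrees with the database record, either stop the instance (the current behavior) or just log the mismatch for an admin (Gary's proposal). This is a simplified illustration, not nova's actual code; the function names and the handle_mismatch option are hypothetical.

```python
# Simplified sketch of the periodic power-state sync debated above.
# Not nova's real implementation; "handle_mismatch" is a hypothetical option.
import logging

RUNNING, SHUTDOWN = "running", "shutdown"
log = logging.getLogger("sync_power_states")

def sync_instance(instance, hypervisor_state, handle_mismatch="stop"):
    """Return the action taken: 'none', 'stop', or 'log'."""
    if instance["task_state"] is not None:
        return "none"   # an API-driven operation (e.g. reboot) is in flight
    if instance["vm_state"] == hypervisor_state:
        return "none"   # DB and hypervisor agree
    if handle_mismatch == "log":
        log.warning("instance %s: DB says %s, hypervisor says %s",
                    instance["uuid"], instance["vm_state"], hypervisor_state)
        return "log"    # proposed: tell the admin, touch nothing
    return "stop"       # current behavior: force the instance down

# Case #1 from the thread: DB marked the instance DOWN, then vCenter HA
# restarted it; the next sync pass would stop the healthy instance.
inst = {"uuid": "e121254f", "vm_state": SHUTDOWN, "task_state": None}
print(sync_instance(inst, RUNNING))                         # stop
print(sync_instance(inst, RUNNING, handle_mismatch="log"))  # log
```

Note the task_state guard: a reboot issued through nova-api sets task_state, which is why Matt argues case #2 only bites when the guest is rebooted outside the API.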
Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI
On 08/25/2015 01:26 PM, Amitabha Biswas wrote: Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration from local.conf (https://review.openstack.org/#/c/216413/) which results in PyMySQL as the default. With the above change the above DB errors are no longer seen in my local setup.

It's great to hear that resolved the errors you saw!

1. Is there any impact of using PyMySQL for the Jenkins check and gates?

As Jeremy mentioned, this is what everything else is using (and what OVN was automatically already using in OpenStack CI).

2. Why is the gate-networking-ovn-python27 job failing (the past couple of commits) in {0} networking_ovn.tests.unit.test_ovn_plugin.TestOvnPlugin.test_create_port_security [0.194020s] ... FAILED? Do we need another conversation to track this?

This is a separate issue. The networking-ovn git repo has been pretty quiet the last few weeks and it seems something has changed that made our tests break. We inherit a lot of base plugin tests from neutron, so it's probably some change in Neutron that we haven't synced with yet. I haven't had time to dig into it yet. -- Russell Bryant

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
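The fix Amitabha describes amounts to deleting one line from devstack's local.conf so that devstack falls back to its PyMySQL default. A hedged before/after sketch (the surrounding settings are placeholders, not from the thread):

```ini
# devstack local.conf sketch; surrounding values are placeholders.
# Before -- forcing the C driver, which triggered the DB errors:
#   MYSQL_DRIVER=MySQL-python
# After -- simply omit the line; devstack then defaults to PyMySQL,
# matching what OpenStack CI already uses:
[[local|localrc]]
DATABASE_PASSWORD=secret
# (no MYSQL_DRIVER override)
```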
Re: [openstack-dev] [nova] periodic task
On 08/25/15 at 06:08pm, Gary Kotton wrote: [earlier quoted text snipped; it is identical to the previous message in this thread] For the HA case #1, the periodic task checks to see if the instance.host doesn't match the compute service host [1] and skips if they don't match. Shouldn't your HA scenario be updating which host the instance is running on? Or is this a vCenter-ism? The nova compute node has not changed. It is not the compute node's host. The host that the instance was running on was down and those instances were moved. So this is a case
Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)
On 8/25/2015 10:04 AM, Thomas Goirand wrote: On 08/25/2015 03:42 PM, Thomas Goirand wrote: Hi, [...] Anyway, the result is that mock 1.3 broke 9 packages at least in Kilo, currently in Sid [1]. Maybe, as packages get rebuilt, I'll get more bug reports. This really is a depressing situation. [...]

Some people on IRC explained to me what the situation was, which is that the mock API has been wrongly used, and some tests were in fact wrongly passing, so indeed, this is one of the rare cases where breaking the API probably made sense. As repairing these tests doesn't bring anything, I'm just not running them in Kilo from now on, using something like this: --subunit 'tests\.unit\.(?!.*foo.*)' Please comment if you think that's the wrong way to go. Also, have some of these been repaired in the stable/kilo branch?

I seem to remember some projects backporting the test fixes to stable/kilo but ultimately we just capped mock on the stable branches to avoid this issue there.

Cheers, Thomas Goirand (zigo)

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- Thanks, Matt Riedemann
Re: [openstack-dev] [neutron][networking-ovn][tempest] devstack: gate-tempest-dsvm-networking-ovn failures in Openstack CI
On Tue, Aug 25, 2015 at 2:15 PM, Russell Bryant rbry...@redhat.com wrote: On 08/25/2015 01:26 PM, Amitabha Biswas wrote: Russell suggested removing the MYSQL_DRIVER=MySQL-python declaration from local.conf (https://review.openstack.org/#/c/216413/) which results in PyMySQL as the default. With the above change the above DB errors are no longer seen in my local setup.

It's great to hear that resolved the errors you saw!

1. Is there any impact of using PyMySQL for the Jenkins check and gates?

As Jeremy mentioned, this is what everything else is using (and what OVN was automatically already using in OpenStack CI).

2. Why is the gate-networking-ovn-python27 job failing (the past couple of commits) in {0} networking_ovn.tests.unit.test_ovn_plugin.TestOvnPlugin.test_create_port_security [0.194020s] ... FAILED? Do we need another conversation to track this?

This is a separate issue. The networking-ovn git repo has been pretty quiet the last few weeks and it seems something has changed that made our tests break. We inherit a lot of base plugin tests from neutron, so it's probably some change in Neutron that we haven't synced with yet. I haven't had time to dig into it yet.

This patch was recently merged to Neutron: https://review.openstack.org/#/c/201141/ Looks like that unit test is trying to create a port with an invalid MAC address. I pushed a fix here: https://review.openstack.org/#/c/216837/ -- Russell Bryant

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Reg. glance image in different domain
In the Juno version, it's failing like:

# glance --os-tenant-id 73e6f673c9684c43966dfa3174d2430c --os-username eswar-new-user --os-password eswar --os-auth-token bef0a7aa06064c39ad042ce368aecdec --os-user-domain-name eswar-new-domain image-list
An auth plugin is required to determine the endpoint URL.

On Mon, Aug 24, 2015 at 5:10 PM, ESWAR RAO eswar7...@gmail.com wrote: Hi All, I have created a different domain as below:

domain-name: Heat-stack
tenant-name: heat-tenant
user-name: heat-user

I have assigned the admin role to the tenant. I have added the tenant-id as a member to an existing image in glance in the default domain.

# glance --os-tenant-id ad3688ad21b9492599761b54ed58649a --os-username heat-user --os-password heat --os-auth-token auth-token image-list
The request you have made requires authentication. (HTTP 401)

# heat --os-tenant-id ad3688ad21b9492599761b54ed58649a --os-username heat-user --os-password heat --os-auth-token auth-token stack-list
| id                                   | stack_name                                | stack_status    | creation_time        |
| 569700fb-9533-43e7-90a7-734be8a86947 | heat-0e6475bd-0853-4340-9636-83457c9223fb | CREATE_COMPLETE | 2015-08-24T07:00:44Z |

The stack is in CREATE_COMPLETE but the instance is not launched in nova. I suspect this could be due to glance image problems. How can I see images with the user in a different domain?

Thanks, Eswar Rao

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
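The "An auth plugin is required to determine the endpoint URL" error generally means the client has no auth URL from which to discover endpoints. A hedged sketch of the keystone v3 client environment that a domain-scoped call would typically need (shown as a config fragment; every value is a placeholder, not taken from this thread):

```
# Hypothetical client environment for keystone v3; all values are placeholders.
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=heat-user
export OS_PASSWORD=heat
export OS_PROJECT_NAME=heat-tenant
export OS_USER_DOMAIN_NAME=Heat-stack
export OS_PROJECT_DOMAIN_NAME=Heat-stack
```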
Re: [openstack-dev] [requirements] [global-requirements] [mistral] Using pika library
On 24 Aug 2015, at 20:17, Doug Hellmann d...@doughellmann.com wrote: My point is the following: we're not getting rid of oslo.messaging for several reasons (community standard, its developers have better vision and expertise at messaging, etc.). In any case we'll have oslo.messaging as one of our implementations for the RPC layer (by which I mean a very Mistral-specific interface to distribute tasks over executors and similar). And we're, of course, ready to further work with you on evolving oslo.messaging in the right direction. At the same time we still have the idea of implementing our RPC alternatively (w/o oslo.messaging), purely for time reasons; in other words, we want to have that missing feature as soon as possible because our customers are already using Mistral in production and it affects them. But once we have all we need in oslo.messaging we can get rid of our own implementation altogether.

Except you'll need to maintain support for the configuration options you'll have to add in order to use pika for at least 2 cycles, which means you're stuck maintaining that stuff for a year. I think it will take much less time than that to add the feature you want to oslo.messaging.

We don't have to use pika, it's just a preference. Kombu is eventually ok too. I rather meant a non-oslo.messaging implementation of RPC.

Changing semantics seemed to be exactly the main challenge we had in mind. It made us think it would hardly be implemented within oslo.messaging any time soon. If you're saying it's not impossible then that's good; we need to discuss how it may be implemented in a backwards-compatible manner.

The only reason this would be hard is if you expect every other user of oslo.messaging to decide to adopt the same semantics you want at the same time. If we build a flag into the API, in a backwards-compatible way, all we have to do is update the library itself.

Ok, got your point. We need to see in more detail if it works out.
However we implement it, we need to ensure backwards compatibility for applications relying on the current behavior. For example, perhaps the application would pass a flag to the messaging library to control whether ACKs are sent before or after processing (we don't want that as a configuration option, because it's the application developer who needs to handle any differences in behavior, rather than the deployer). We should start out with the default behavior set to what we have now, and then we can experiment with changing it in each application before changing the default in the library. So, if you're interested in working with us on that, let us know. Yes, we are. What would be the next practical steps that you could suggest? Dims proposed a spec, and I think in this case that's a good idea so we can ensure we understand how the change will affect drivers other than rabbit. It may be OK to start out by saying that the other drivers will (or may) ignore the flag, especially since it may not make any sense for something like zmq at all. Ok, thanks. Renat __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
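The ack-ordering flag Doug describes — ACK before versus after processing — has observable consequences that can be modeled in pure Python. The sketch below simulates the semantics only; it is not oslo.messaging's API, and the FakeBroker and flag names are hypothetical.

```python
# Pure-Python model of the ack-ordering flag discussed above: acking before
# processing loses a message if the worker crashes mid-task, while acking
# after lets the broker redeliver it. Names are hypothetical, not oslo's API.
class FakeBroker:
    def __init__(self, messages):
        self.queue = list(messages)
        self.acked = []

    def consume(self, callback, ack_after_processing=False):
        while self.queue:
            msg = self.queue.pop(0)
            if not ack_after_processing:
                self.acked.append(msg)      # current behavior: ack first
            try:
                callback(msg)
                if ack_after_processing:
                    self.acked.append(msg)  # proposed: ack only on success
            except Exception:
                if ack_after_processing:
                    self.queue.append(msg)  # unacked -> broker redelivers
                    break

crashes = {"count": 0}
def flaky(msg):
    # Fail the first attempt at task-2, succeed on redelivery.
    if msg == "task-2" and crashes["count"] == 0:
        crashes["count"] += 1
        raise RuntimeError("worker died mid-task")

b = FakeBroker(["task-1", "task-2"])
b.consume(flaky, ack_after_processing=True)
b.consume(flaky, ack_after_processing=True)  # redelivered message succeeds
print(b.acked)  # ['task-1', 'task-2'] -- nothing lost despite the crash
```

With ack_after_processing=False the crashed task would already be acked and therefore silently lost, which is exactly the at-least-once guarantee Mistral is after.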
Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes
Thierry Carrez said on Fri, Aug 21, 2015 at 04:56:37PM +0200: Tag-every-commit: (+) Conveys clearly that every commit is consumable (-) Current tooling doesn't support this, we need to write something (-) Zillions of tags will make tags ref space a bit unusable by humans Time to time tagging: (+) Aligned with how we do releases everywhere else (-) Makes some commits special (-) Making a release still requires someone to care Missing anything ? Without offering an opinion either way, I'm just wondering how tag-every-commit is superior to never tagging? The git SHAs already uniquely identify every commit; if you want only those on master, simply `git log master`. Alexis (lxsli) -- Nova developer, Hewlett-Packard Limited. Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN. Registered Number: 00690597 England VAT number: GB 314 1496 79 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable] [infra] How to auto-generate stable release notes
On 25 August 2015 at 10:28, Alexis Lee lx...@hpe.com wrote: Thierry Carrez said on Fri, Aug 21, 2015 at 04:56:37PM +0200: Tag-every-commit: (+) Conveys clearly that every commit is consumable (-) Current tooling doesn't support this, we need to write something (-) Zillions of tags will make tags ref space a bit unusable by humans Time to time tagging: (+) Aligned with how we do releases everywhere else (-) Makes some commits special (-) Making a release still requires someone to care Missing anything ? Without offering an opinion either way, I'm just wondering how tag-every-commit is superior to never tagging? The git SHAs already uniquely identify every commit; if you want only those on master, simply `git log master`. Alexis (lxsli) Hey Alexis, The issue with this is deterministic version counting between commits, allowing distributed additional commits but still keeping the version counting centralised. We use pbr to determine version numbers, which has logic around git tags to determine version numbering. For example: $ git clone master # == version 1 $ echo foo > stuff.txt $ git add stuff.txt $ git commit stuff.txt -m "Daviey's awesome value-add" # should still == version 1, but without a centralised reference marker it will be version 2. -- Kind Regards, Dave Walker __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
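The pbr behavior Dave describes can be approximated in a few lines; `pbr_like_version` is a hypothetical simplification for illustration, not pbr's actual algorithm:

```python
def pbr_like_version(last_tag, commits_since_tag):
    """Simplified sketch of pbr-style version counting (hypothetical
    helper, not pbr code): a commit exactly at a tag keeps the tag's
    version; each untagged commit after it yields a .devN pre-version
    of the next patch release. The tag is the centralised reference
    marker the counting anchors to."""
    major, minor, patch = (int(p) for p in last_tag.split("."))
    if commits_since_tag == 0:
        return last_tag
    return "%d.%d.%d.dev%d" % (major, minor, patch + 1, commits_since_tag)
```

Without a tag to anchor to, two developers who each add a commit on top of the same point would both count "one past the last marker", so the resulting versions stop being deterministic across clones.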
Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool
On 25/08/15 01:02, Skyler Berg wrote: Any feedback would be appreciated. We plan to open source this project soon and welcome contributions from anyone interested. For the moment, any bugs, concerns, or ideas can be sent to openstack-...@tintri.com. [1] ci-watch.tintri.com Wow, super-useful! As someone who has been submitting Neutron patches for a while, I've often wondered whether the CIs that fail for me are failing for everybody else as well, and this [2] answers precisely that question. [2] http://ci-watch.tintri.com/project?project=neutron Thanks, Neil __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints
On 2015-08-25 09:37, Jamie Lennox wrote: - Original Message - From: Hans Feldt hans.fe...@ericsson.com To: openstack-dev@lists.openstack.org Sent: Thursday, August 20, 2015 10:40:28 PM Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints How do you configure/use keystonemiddleware for a specific identity endpoint among several? In an OPNFV multi region prototype I have keystone endpoints per region. I would like keystonemiddleware (in context of glance-api) to use the local keystone for performing user token validation. Instead keystonemiddleware seems to use the first listed keystone endpoint in the service catalog (which could be wrong/non-optimal in most regions). I found this closed, related bug: https://bugs.launchpad.net/python-keystoneclient/+bug/1147530 Hey, There's two points to this. * If you are using an auth plugin then you're right it will just pick the first endpoint. You can look at project specific endpoints[1] so that there is only one keystone endpoint returned for the services project. I've also just added a review for this feature[2]. I am not. * If you're not using an auth plugin (so the admin_X options) then keystone will always use the endpoint that is configured in the options (identity_uri). Yes for getting its own admin/service token. But for later user token validation it seems to pick the first identity service in the stored (?) service catalog. By patching keystonemiddleware, _create_identity_server and the call to Adapter constructor with an endpoint_override parameter I can get it to use the local keystone for token validation. I am looking for an official way of achieving the same. 
Thanks, Hans Hope that helps, Jamie [1] https://github.com/openstack/keystone-specs/blob/master/specs/juno/endpoint-group-filter.rst [2] https://review.openstack.org/#/c/216579 Thanks, Hans __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
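A minimal reduction of why the endpoint_override approach Hans describes works; `pick_endpoint` is a hypothetical helper, not keystonemiddleware or keystoneclient code:

```python
def pick_endpoint(catalog_identity_endpoints, endpoint_override=None):
    """Hypothetical reduction of the Adapter endpoint selection: with an
    override set, the service catalog's first identity endpoint is never
    consulted, so token validation can be pinned to the region-local
    Keystone instead of whichever endpoint happens to be listed first."""
    if endpoint_override:
        return endpoint_override
    # behavior Hans observed: first identity endpoint in the catalog wins
    return catalog_identity_endpoints[0]
```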
Re: [openstack-dev] [TripleO] Encapsulating logic and state in the client
- Original Message - From: Zane Bitter zbit...@redhat.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Sent: Monday, 17 August, 2015 5:25:36 PM Subject: [openstack-dev] [TripleO] Encapsulating logic and state in the client It occurs to me that there has never been a detailed exposition of the purpose of the tripleo-common library here, and that this might be a good time to rectify that. Basically, there are two things that it sucks to have in the client: First, logic - that is, any code that is not related to the core client functionality of taking input from the user, making ReST API calls, and returning output to the user. This sucks because anyone needing to connect to a ReST API using a language other than Python has to reimplement the logic in their own language. It also creates potential versioning issues, because there's nothing to guarantee that the client code and anything it interacts with on the server are kept in sync. Secondly, state. This sucks because the state is contained in a user's individual session, which not only causes all sorts of difficulties for anyone trying to implement a web UI but also means that you're at risk of losing some state if you e.g. accidentally Ctrl-C the CLI client. Thinking about this further, the interesting question to me is how much logic we aim to encapsulate behind an API. For example, one of the simpler CLI commands we have in RDO-Manager (which is moving upstream[1]) is to run introspection on all of the Ironic nodes. This involves a series of commands that need to be run in order and it can take upwards of 20 minutes depending how many nodes you have. However, this does just communicate with Ironic (and ironic inspector) so is it worth hiding behind an API? I am inclined to say that it is so we can make the end result as easy to consume as possible but I think it might be difficult to draw the line in some cases. The question then rises about what this API would look like? 
Generally speaking I feel like it looks like a workflow API, it shouldn't offer many (or any?) unique features, rather it manages the process of performing a series of operations across multiple APIs. There have been attempts at doing this within OpenStack before in a more general case, I wonder what we can learn from those. Unfortunately, as undesirable as these are, they're sometimes necessary in the world we currently live in. The only long-term solution to this is to put all of the logic and state behind a ReST API where it can be accessed from any language, and where any state can be stored appropriately, possibly in a database. In principle that could be accomplished either by creating a tripleo-specific ReST API, or by finding native OpenStack undercloud APIs to do everything we need. My guess is that we'll find a use for the former before everything is ready for the latter, but that's a discussion for another day. We're not there yet, but there are things we can do to keep our options open to make that transition in the future, and this is where tripleo-common comes in. I submit that anything that adds logic or state to the client should be implemented in the tripleo-common library instead of the client plugin. This offers a couple of advantages: - It provides a defined boundary between code that is CLI-specific and code that is shared between the CLI and GUI, which could become the model for a future ReST API once it has stabilised and we're ready to take that step. - It allows for an orderly transition when that happens - we can have a deprecation period during which the tripleo-common library is imported into both the client and the (future, hypothetical) ReST API. cheers, Zane. 
[1]: https://review.openstack.org/#/c/215186/3/gerrit/projects.yaml,cm __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [horizon][i18n] Horizon plugins translation
Hi, I see there are several UI plugins in Horizon, which are in OpenStack projects.yaml. They are: horizon-cisco-ui, manila-ui and tuskar-ui. They each use a separate repo now. As the translation coordinator, I want to understand which of them want to be translated into multiple languages in the Liberty release. Are they ready to be translated? As separate repos, if these plugins want to be translated, they need to: 1. Create a locale folder and generate pot files in the repo 2. Create a project in the translation tool 3. Create auto jobs to upload pot files and download translations from the translation tool. Please let me know your thoughts. Best regards Ying Chun Guo (Daisy) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Manila] Contract of ShareDriver.deny_access
On Fri, Aug 21, 2015 at 3:22 PM, Ben Swartzlander b...@swartzlander.org wrote: [Resending my response as unknown forces ate my original message] On 08/20/2015 08:30 AM, Bjorn Schuberg wrote: Hello everyone, this is my first thread on this mailing list, and I would like to take the opportunity to say that it was great to see you all at the midcycle, even if remote. Now, to my question; I've been looking into an issue that arises when deleting access to a share, and then moments after, deleting the same share. The delete fails due to a race in `_remove_share_access_rules` in the `ShareManager`, which attempts to delete all granted permissions on the share before removing it, but one of the access permissions is concurrently deleted due to the first API call, see: https://github.com/openstack/manila/blob/master/manila/share/manager.py#L600 I think an acceptable fix to this would be to wrap the `_deny_access` call with a `try`... `except` block, and log any attempts to remove non-existing permissions. The problem is that there seems to be no contract on the exception to throw in case you attempt to delete an `access` which does not exist -- each driver behaves differently. This got my attention after running the tempest integration tests, where the teardown *sometimes* fails in tempest.api.share.test_rules:ShareIpRulesForNFSTest. Any thoughts on this? Perhaps there is a smoother approach that I'm not seeing. This is a good point. I'm actually interested in pursuing a deeper overhaul of the allow/deny access logic for Mitaka which will make access rules less error-prone in my opinion. I'm open to short-term bug fixes in Liberty for problems like the one you mention, but I'm already planning a session in Tokyo about a new share access driver interface. The reason it has to wait until Mitaka is that all of the drivers will need to change their logic to accommodate the new method.
My thinking on access rules is that the driver interface which adds and removes rules one at a time is too fragile, and assumes too much about what backends are capable of supporting. I see the following problems (in addition to the one you mention): * If addition or deletion of a rule fails for any reason, the set of rules on the backend starts to differ from what the user intended and there is no way to go back and correct the problem. * If backends aren't able to implement rules exactly as Manila expects (perhaps a backend does not support nested subnets with identical rules), then there are situations where a certain set of user actions will be guaranteed to result in broken rules. Consider (1) add rw access to 10.0.0.0/8 (2) add rw access to 10.10.0.0/16 (3) remove rw access to 10.0.0.0/8 (4) try to access the share from 10.10.10.10. If step (2) fails because the backend ignored that rule (it was redundant at the time it was added) then step (4) will also fail, even though it shouldn't. * The current mechanism doesn't allow making multiple changes atomically -- changes have to be sequential. This will cause problems if we want to allow access rules to be defined externally (something which was discussed during Juno and is still desirable) because changes to access rules may come in batches. My proposal is simple. Rather than making drivers implement allow_access() and deny_access(), drivers should implement a single set_access() which gets passed a list of all the access rules. Drivers would be required to compare the list of rules passed in from the manager to the list of rules on the storage controller and make changes as appropriate. For some drivers this would be more work but for other drivers it would be less work. Overall I think it's a better design. We can probably implement some kind of backwards compatibility to avoid breaking drivers during the migration to the new interface. I like the idea of a review of the permission logic.
While making it atomic really makes sense, I also think it is important to take the opportunity to define and document the error cases. E.g., in case a `set_access()` fails, a certain exception should be thrown by the driver, and the backend must guarantee that the previous permissions are intact. Otherwise we still might have a mismatch between the permissions in Manila and the storage controller. It's not something I intend to push for in Liberty however. -Ben Cheers, Björn __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
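The core of Ben's set_access() proposal is a set difference between what Manila holds and what the backend reports; a minimal sketch (hypothetical helper, not the Manila driver interface):

```python
def plan_access_changes(backend_rules, manila_rules):
    """Sketch of the diff-based set_access() idea (hypothetical helper,
    not Manila code): instead of applying one allow/deny at a time, the
    driver compares the full rule set Manila holds against what the
    backend reports and converges on the difference. A rule the backend
    previously dropped as 'redundant' is simply re-applied on the next
    sync rather than being lost forever."""
    current, desired = set(backend_rules), set(manila_rules)
    to_add = sorted(desired - current)
    to_remove = sorted(current - desired)
    return to_add, to_remove
```

Replaying Ben's subnet example: if step (2) was silently dropped, the next call with the full desired set would report 10.10.0.0/16 in `to_add` and the problem self-heals.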
Re: [openstack-dev] [fuel] Branching strategy vs feature freeze
Dmitry, thank you for a well-described plan. May I please ask you for a little TL;DR excerpt of what’s going to be changed, because I’m afraid some folks may get lost or may not have enough time to analyze it deeply (I actually see that happening but I won’t do fingerpointing :) ). - romcheg On 24 Aug 2015, at 20:29, Dmitry Borodaenko dborodae...@mirantis.com wrote: We have several problems with Fuel branching strategy that have become enough of a bottleneck to consider re-thinking our whole release management process. In this email, I will describe the problems, propose a couple of alternative strategies, and try to analyze their relative merits and associated risks. I have my opinions and preferences, but I will try my best to objectively compare all available options. My goal is to improve efficiency of the existing team and make it significantly easier for new people to contribute to Fuel, even when their schedule and their agenda are not 100% aligned with those of the Fuel core team and Mirantis OpenStack. It is essential for the new process to be acceptable for Fuel contributors, and it is just as essential to reach a consensus quickly: with Fuel 7.0 Hard Code Freeze less than two weeks away [0], we're already late with planning 8.0, and we absolutely must have the whole plan for 8.0 before Fuel 7.0 is released (September 24). [0] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule I propose a time-bound rough consensus to make this decision: raise all concerns and risks before end of this week (Friday, August 28), propose and discuss ways to address the raised concerns, and make a final decision before end of next week (Friday, September 4). Here's how Fuel versions, branches, and release milestones work now. Major Fuel version corresponds to an OpenStack release. For example, Fuel 7.0 is the first version to support Kilo. Minor version denotes a feature release. For example, Fuel 6.1 is still based on Juno but contains new features relative to Fuel 6.0.
Tiny versions (e.g. 5.1.1) and maintenance updates (e.g. 6.1-mu-1) include only bugfixes. Most Fuel development happens on the master branch. On Hard Code Freeze, a stable branch is created in all Fuel git repositories (e.g. stable/7.0 will be created on September 3). After that, all changes targeted to that release series must be proposed, reviewed, and merged to master before they are proposed for any stable branches [1]. One release series and one stable branch is created per minor Fuel version (e.g. stable/6.1 for 6.1.x). [1] https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series Fuel release cycle starts from Hard Code freeze of the previous release: - Hard Code Freeze: stable branch created, master branch is open for all changes - Design Complete: all release features have specs merged in fuel-specs - Feature Freeze: feature changes no longer accepted in master branch - Soft Code Freeze: medium and lower priority bugfixes no longer accepted in master branch Looks similar to the way OpenStack branch model works [2], but there's an important difference. [2] https://wiki.openstack.org/wiki/Branch_Model In OpenStack, master branch is closed for feature commits for 2 weeks per release [3], or 1/13th of the whole 26-week release cycle. [3] https://wiki.openstack.org/wiki/Liberty_Release_Schedule In Fuel, master branch remains closed for almost half of the release cycle: 5.0 -- 32 of 81 days 5.1 -- 63 of 119 days 6.0 -- 35 of 93 days 6.1 -- 85 of 180 days 7.0 -- 42 of 93 days (planned) This renders it unusable as an integration branch: if you are bound by a schedule that is not aligned with Fuel release milestones and have more changes than a single commit which you could keep rebasing until Fuel master is open again, you're better off merging your changes into your own integration branch (i.e. fork Fuel). The same problem is even worse if you're working on the next OpenStack release. 
Even when Fuel master is open, it's developed and tested against latest stable release of OpenStack. For example, even though OpenStack developers started working on Liberty features in May, reflecting that work in Fuel master is blocked until September. There are 4 partially overlapping solutions to this problem: 1) Future branch: create a future integration branch on FF, rebase it weekly onto master (or merge weekly from master to future), merge future to master after stable branch is created on HCF. 2) Short FF: create stable branch 2 weeks after FF (same as OpenStack) instead of waiting for HCF. 3) Short FF and internal fork: create stable branch 2 weeks after FF, create an internal fork of stable branch for Mirantis OpenStack. 4) CI for external forks: package and document Fuel development infrastructure so that anyone who wants to fork Fuel can set up their own CI. In theory, we
Re: [openstack-dev] [oslo] incubator move to private modules
Morgan, Bit more radical :) I am inclined to just yank all code from oslo-incubator and let the projects modify/move what they have left into their own package/module structure (and change the contracts however they see fit). -- Dims On Tue, Aug 25, 2015 at 1:48 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote: Over time oslo incubator has become less important as most things are simply becoming libraries from the get-go. However, there is still code in incubator and particularly Keystone client has seen an issue where the incubator code is considered a public api by consuming projects. I would like to start the conversation of moving all incubator modules to be prefixed by _ indicating they are not meant for public consumption. I expect that if there is not a large uproar here on the mailing list, that I will propose a spec to oslo shortly to make this change possible. What I am looking for before the spec happens, is the view from the community on making this type of change and bringing modules private (and associated concerns). Cheers, --Morgan Sent via mobile __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)
On 26 August 2015 at 03:04, Thomas Goirand z...@debian.org wrote: As it doesn't bring anything to repair these tests, I'm just not running them in Kilo from now on, using something like this: --subunit 'tests\.unit\.(?!.*foo.*)' --subunit is for controlled test output logic - but a negative lookahead regex is the right way to disable known buggy tests, yes. Please comment if you think that's the wrong way to go. Also, have some of these been repaired in the stable/kilo branch? Few to none, but I think they'd be valid to backport. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
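For reference, the negative-lookahead filter from Thomas's post behaves like this (`foo` is his placeholder for the buggy test's name; `selected` is a hypothetical wrapper, not testr/subunit code):

```python
import re

# The filter from the post: select unit-test ids EXCEPT those whose
# remainder contains "foo". The lookahead (?!...) succeeds only when
# no "foo" follows the "tests.unit." prefix.
skip_buggy = re.compile(r"tests\.unit\.(?!.*foo.*)")

def selected(test_id):
    """Return True if the test id passes the filter (would be run)."""
    return bool(skip_buggy.match(test_id))
```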
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Congrats Sam! Cheers! John Stafford Engineering Manager | HP Helion Openstack | Openstack-Ironic E: john.staff...@hp.com | V: 360.212.9720 | M: 206.963.0916 | IRC (Freenode): BadCub From: Chris K [mailto:nobody...@gmail.com] Sent: Tuesday, August 25, 2015 2:06 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts Nice to see the team expand. Thank you Sam for all your work. Well deserved. :) -Chris NobodyCam On Tue, Aug 25, 2015 at 11:35 AM, Sam Betts (sambetts) sambe...@cisco.com wrote: Thanks everyone, proud to be on the team! Sam On 25/08/2015 12:32, Lucas Alvares Gomes lucasago...@gmail.com wrote: Congrats! Well deserved On Tue, Aug 25, 2015 at 12:24 PM, Yuiko Takada yuikotakada0...@gmail.com wrote: Sam, congrats and welcome! Yuiko Takada On 2015/08/25 at 19:53, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome!
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] PLEASE READ: VPNaaS API Change - not backward compatible
Oops, sorry about the blank email. Answers/Questions in-line. On 26/08/15 07:46, Paul Michali wrote: Previous post only went to dev list. Ensuring both and adding a bit more... On Tue, Aug 25, 2015 at 8:37 AM Paul Michali p...@michali.net wrote: Xav, The discussion is very important, and hence why both Kyle and I have been posting these questions on the operator (and dev) lists. Unfortunately, I wasn't subscribed to the operator's list and missed some responses to Kyle's message, which were posted only to that list. As a result, I had an incomplete picture and posted this thread to see if it was OK to do this without backward compatibility, based on the (incorrect) assumption that there was no production use. That is corrected now, and I'm getting all the messages and thanks to everyone, have input on messages I missed. So given that, let's try a reset on the discussion, so that I can better understand the issues... Great! Thanks very much for expanding the scope. We really appreciate it. Do you feel that not having backward compatibility (but having a migration path) would seriously affect you or would it be manageable? Currently, this feels like it would seriously affect us. I don't feel confident that the following concerns won't cause us big problems. As Xav mentioned previously, we have a few major concerns: 1) Horizon compatibility We run a newer version of horizon than we do neutron. If Horizon version X doesn't work with Neutron version X-1, this is a very big problem for us. 2) Service interruption How much of a service interruption would the 'migration path' cause? We all know that IPsec VPNs can be fragile... How much of a guarantee will we have that migration doesn't break a bunch of VPNs all at the same time because of some slight difference in the way configurations are generated? 3) Heat compatibility We don't always run the same version of Heat and Neutron. 
Is there pain for the customers beyond learning about the new API changes and capabilities (something that would apply whether there is backward compatibility or not)? See points 1, 2, and 3 above. Another implication of not having backwards compatibility would be that end-users would need to immediately switch to using the new API, once the migration occurs, versus doing so on their own time frame. Would this be a concern for you (customers not having the convenience of delaying their switch to the new API)? I was thinking that backward incompatible changes would adversely affect people who were using client scripts/apps to configure (a large number of) IPsec connections, where they'd have to have client scripts/apps in place to support the new API. This is actually less of a concern. We have found that VPN creation is mostly done manually and anyone who is clever enough to make IPsec go is clever enough to learn a new API/horizon interface. Which is more of a logistics issue, and could be managed, IMHO. Would there be customers that would fall into that category, or are customers manually configuring IPsec connections such that they could just use the new API? Most customers could easily adapt to a new API. Are there other adverse effects of not having backward compatibility that need to be considered? As with the dashboard, heat also needs a bit of consideration. How would Heat deal with the API changes? So far, I'm identifying one effect that is more of a convenience (although a nice one at that), and one effect that can be avoided by planning for the upgrade. I'd like to know if I'm missing something more important to operators. I'd also like to know if we think there is a user base large enough (and how large is large?) that would warrant going through the complexity and risk to support both API versions simultaneously? This is a bit frustrating... It implies that only large clouds matter.
There is a further tacit implication that the API is not really a contract that can be relied upon. We are operating a multi-region cloud with many clients who depend upon VPNaaS for business-critical production workloads. Of course, this is a two-way road! We all want what is best for OpenStack, so we should talk about the complexity and risk on your end. Can you tell us more about that? I really have no interest in being an operator who demands the world from developers, but I am worried about what all this means for my cloud. Cheers, James Dempsey -- James Dempsey Senior Cloud Engineer Catalyst IT Limited +64 4 803 2264 -- Regards, Paul Michali (pc_m) Specifically, we're talking about the VPN service create API no longer taking a subnet ID (instead an endpoint group is created that contains the subnet ID), and the IPSec site-to-site connection create API would no longer take a list of peer CIDRs, but instead would take a pair of endpoint group IDs (one for the local subnet(s) formerly specified by the service API, and
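The shape of the change Paul describes could look roughly like this; the paths and field names below are illustrative guesses reconstructed from the description, not the final API:

```
# Before (sketch): subnet and peer CIDRs inline in the VPN resources
POST /v2.0/vpn/vpnservices
    {"vpnservice": {..., "subnet_id": SUBNET}}
POST /v2.0/vpn/ipsec-site-connections
    {"ipsec_site_connection": {..., "peer_cidrs": [CIDR, ...]}}

# After (sketch): both move into endpoint groups referenced by ID
POST /v2.0/vpn/endpoint-groups
    {"endpoint_group": {"type": "subnet", "endpoints": [SUBNET]}}
POST /v2.0/vpn/endpoint-groups
    {"endpoint_group": {"type": "cidr", "endpoints": [CIDR, ...]}}
POST /v2.0/vpn/ipsec-site-connections
    {"ipsec_site_connection": {..., "local_ep_group_id": ..., "peer_ep_group_id": ...}}
```

This is what makes the change backward-incompatible for Horizon and Heat: a client built for the old shape sends fields the new API no longer accepts.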
[openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via Heat!
Hello Stackers! I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu! Check it out! * https://github.com/sandvine/os-ansible-deployment-lite Powered by Sandvine! ;-) Basically, this is the automation of what we have documented here: * http://docs.openstack.org/kilo/install-guide/install/apt/content/ Instructions: 1- Install Ubuntu 14.04, fully upgraded (with linux-generic-lts-vivid installed), plus /etc/hostname and /etc/hosts configured accordingly. 2- Deploy OpenStack with 1 command: * Open vSwitch (default): bash <(curl -s https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install.sh) * Linux Bridges (alternative): bash <(curl -s https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install-lbr.sh) 3- Launch a NFV L2 Stack: heat stack-create demo -f ~/os-ansible-deployment-lite/misc/os-heat-templates/nfv-l2-bridge-basic-stack-ubuntu-little.yaml IMPORTANT NOTES: Only run step 2 on top of a freshly installed Ubuntu 14.04! Can be a Server or Desktop but, fresh installed. Do not pre-install MySQL, RabbitMQ, Keystone, etc... Let Ansible do its magic! Also, make sure you can use sudo without a password.
Some features of our Ansible Playbook: 1- Deploys OpenStack with one single command, in one physical box (all-in-one), helper script (./os-deploy.sh) available; 2- Supports NFV instances that can act as a L2 Bridge between two VXLAN Networks; 3- Plenty of Heat Templates; 4- 100% Ubuntu based; 5- Very simple setup (simpler topology; dummy interfaces for both br-ex and vxlan; no containers for each service (yet)); 6- Ubuntu PPA available, with a few OpenStack patches backported from Liberty to Kilo (to add port_security_enabled Heat support); https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-kilo/ 7- Only requires one physical ethernet card; 8- Both Linux Bridges and Open vSwitch deployments are supported; 9- Planning to add DPDK support; 10- Multi-node support under development; 11- IPv6 support coming... * Notes about Vagrant support: Under development (it doesn't work yet). There is preliminary Vagrant support (there is still a bug on MySQL startup, pull requests are welcome). Just git clone our Ansible playbooks and run vagrant up (or ./os-deploy-vagrant.sh to auto-config your Ansible vars / files for you). We tried it only with Mac / VirtualBox but it does not support VT-in-VT (nested virtualization), so we're looking at KVM / Libvirt on Ubuntu Desktop instead. But it would be nice to, at least, launch OpenStack in VirtualBox on your Mac... =) Hope you guys enjoy it! Cheers! Thiago __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'
This concerns support for the names of domain-scoped Keystone resources (users, projects, etc.) in puppet. At the puppet-openstack meeting today [1] we decided that puppet-openstack will support Keystone domain-scoped resource names without a '::domain' in the name only if the 'default_domain_id' parameter in Keystone has _not_ been set. That is, if the default domain is 'Default'. In addition: * In the OpenStack L release, if 'default_domain_id' is set, puppet will issue a warning if a name is used without '::domain'. * In the OpenStack M release, puppet will issue a warning if a name is used without '::domain', even if 'default_domain_id' is not set. * In N (or possibly O), resource names will be required to have '::domain'. The current spec [2] and current code [3] try to support names without a '::domain' in the name, in non-default domains, provided the name is unique across _all_ domains. This will have to be changed in the current code and spec. [1] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html [2] http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html [3] https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
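For illustration, the naming convention being phased in would look roughly like this with the keystone_user type from [3]. This is only a sketch of the naming scheme under discussion, not code tested against the pending spec changes:

```puppet
# Fully qualified name: always accepted, and required from N (or O) on.
keystone_user { 'jdoe::Default':
  ensure => present,
}

# Bare name: accepted only while 'default_domain_id' is unset,
# and it will start warning in the M release.
keystone_user { 'jdoe':
  ensure => present,
}
```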
Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'
On 08/25/2015 04:30 PM, Rich Megginson wrote: This concerns the support of the names of domain scoped Keystone resources (users, projects, etc.) in puppet. At the puppet-openstack meeting today [1] we decided that puppet-openstack will support Keystone domain scoped resource names without a '::domain' in the name, only if the 'default_domain_id' parameter in Keystone has _not_ been set. That is, if the default domain is 'Default'. In addition: * In the OpenStack L release, if 'default_domain_id' is set, puppet will issue a warning if a name is used without '::domain'. * In the OpenStack M release, puppet will issue a warning if a name is used without '::domain', even if 'default_domain_id' is not set. * In N (or possibly, O), resource names will be required to have '::domain'. The current spec [2] and current code [3] try to support names without a '::domain' in the name, in non-default domains, provided the name is unique across _all_ domains. This will have to be changed in the current code and spec. +1 [1] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html [2] http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html [3] https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217 -- Emilien Macchi __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] contextlib.nested and Python3 failing
On 08/25/2015 10:17 AM, Joshua Harlow wrote: Oh, discard everything I say then :) My brain must still be partially functioning due to vacation, haha.

import functools

def work(vacation=False):
    if not vacation:
        get_lots_done()

back_from_vacation = functools.partial(work, vacation=True)

There you go, Josh. There's your partial function. Best, -jay
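Jokes aside, on the actual contextlib.nested question: a Python-3-safe replacement, in the spirit of the nova test.nested() helper mentioned earlier in the thread, can be built on contextlib.ExitStack. A minimal sketch:

```python
import contextlib


@contextlib.contextmanager
def nested(*contexts):
    """Python-3-safe stand-in for the removed contextlib.nested().

    Enters each context manager in order and yields the list of
    results; everything is unwound in reverse order on exit.
    """
    with contextlib.ExitStack() as stack:
        yield [stack.enter_context(c) for c in contexts]
```

Unlike the old contextlib.nested(), an exception raised while entering a later manager correctly unwinds the earlier ones, which is exactly the flaw that got nested() removed from the stdlib.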
Re: [openstack-dev] PLEASE READ: VPNaaS API Change - not backward compatible
Previous post only went to dev list. Ensuring both and adding a bit more... On Tue, Aug 25, 2015 at 8:37 AM Paul Michali p...@michali.net wrote: Xav, The discussion is very important, which is why both Kyle and I have been posting these questions on the operator (and dev) lists. Unfortunately, I wasn't subscribed to the operator's list and missed some responses to Kyle's message, which were posted only to that list. As a result, I had an incomplete picture and posted this thread to see if it was OK to do this without backward compatibility, based on the (incorrect) assumption that there was no production use. That is corrected now; I'm getting all the messages and, thanks to everyone, have input on the messages I missed. So given that, let's try a reset on the discussion, so that I can better understand the issues... Do you feel that not having backward compatibility (but having a migration path) would seriously affect you or would it be manageable? Is there pain for the customers beyond learning about the new API changes and capabilities (something that would apply whether there is backward compatibility or not)? Another implication of not having backwards compatibility would be that end-users would need to immediately switch to using the new API, once the migration occurs, versus doing so on their own time frame. Would this be a concern for you (customers not having the convenience of delaying their switch to the new API)? I was thinking that backward incompatible changes would adversely affect people who were using client scripts/apps to configure (a large number of) IPsec connections, where they'd have to have client scripts/apps in place to support the new API. Which is more of a logistics issue, and could be managed, IMHO. Would there be customers that would fall into that category, or are customers manually configuring IPsec connections, in which case they could just use the new API?
Are there other adverse effects of not having backward compatibility that need to be considered? So far, I'm identifying one effect that is more of a convenience (although a nice one at that), and one effect that can be avoided by planning for the upgrade. I'd like to know if I'm missing something more important to operators. I'd also like to know if we think there is a user base large enough (and how large is large?) that it would warrant going through the complexity and risk to support both API versions simultaneously? Regards, Paul Michali (pc_m) Specifically, we're talking about the VPN service create API no longer taking a subnet ID (instead an endpoint group is created that contains the subnet ID), and the IPsec site-to-site connection create API would no longer take a list of peer CIDRs, but instead would take a pair of endpoint group IDs (one for the local subnet(s) formerly specified via the service API, and one for peer CIDRs). Regards, Paul Michali (pc_m) On Mon, Aug 24, 2015 at 5:32 PM Xav Paice xavpa...@gmail.com wrote: I'm sure I'm not the only one that finds the vast amount of traffic in the dev list to be completely unmanageable to catch the important messages - the ops list is much lower traffic, and as an operator I pay a bunch more attention to it. The discussion of deprecating an API is something that HAS to be discussed with operators, on the operators list or highlighted somehow so that people get attention drawn to the message. Let's be clear - I fully appreciate the extra effort that would be required in supporting both the new and the old APIs, and also would absolutely love to see the new feature. I do think we need to be able to support our customers in the transition, and extra pain for them results in lower uptake of the services we provide. 
On 25 August 2015 at 09:27, Xav Paice xavpa...@gmail.com wrote: Also: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007928.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007891.html On 25 August 2015 at 09:09, Kevin Benton blak...@gmail.com wrote: It sounds like you might have missed a couple responses: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007903.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007910.html On Mon, Aug 24, 2015 at 1:53 PM, Paul Michali p...@michali.net wrote: Xav, In the email, there were no responses of anyone using VPNaaS *in a production environment*. Summary from responders: Erik M - Tried in Juno with no success. Will retry. Edgar M - said no reports from operators about VPNaaS code Sam S - Using VPN in VMs and not VPNaaS Kevin B - Not used. Use VMs instead Sriram S - Indicating not used. If I misread the responses, or if someone has not spoken up, right now is the time to let us know of your situation and the impact this proposal would have on your use of VPNaaS IPSec site-to-site connections. The request here, is
[openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome! __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
On 08/25/2015 12:53 PM, Dmitry Tantsur wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome! Congrats Sam, well deserved! Imre __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Glance] keystonemiddleware multiple keystone endpoints
On Thu, Aug 20, 2015 at 7:40 AM, Hans Feldt hans.fe...@ericsson.com wrote: How do you configure/use keystonemiddleware for a specific identity endpoint among several? In an OPNFV multi region prototype I have keystone endpoints per region. I would like keystonemiddleware (in context of glance-api) to use the local keystone for performing user token validation. Instead keystonemiddleware seems to use the first listed keystone endpoint in the service catalog (which could be wrong/non-optimal in most regions). I found this closed, related bug: https://bugs.launchpad.net/python-keystoneclient/+bug/1147530 This (brand new) bug report appears to describe the same issue: https://bugs.launchpad.net/keystonemiddleware/+bug/1488347 Thanks, Hans __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
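For anyone hitting the same problem, a common workaround is to pin the auth_token middleware to an explicit endpoint instead of letting it pick one from the service catalog. A hedged sketch for glance-api.conf, using the era's auth_token option names; the hostnames are made up, and this is not a verified fix for the bugs linked above:

```ini
[keystone_authtoken]
# URL advertised to clients in 401 responses
auth_uri = http://keystone.region1.example.com:5000
# Endpoint the middleware itself uses for token validation;
# pinning it here avoids the first-in-catalog behaviour described above.
identity_uri = http://keystone.region1.example.com:35357
admin_user = glance
admin_password = secret
admin_tenant_name = service
```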
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Sam, congrats and welcome! Yuiko Takada On 2015/08/25 at 19:53, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome! __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Congrats! Well deserved On Tue, Aug 25, 2015 at 12:24 PM, Yuiko Takada yuikotakada0...@gmail.com wrote: Sam, congrats and welcome! Yuiko Takada On 2015/08/25 at 19:53, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, was contributing core features and expressed clear interest in the ironic-inspector project. Thanks and welcome! __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [third-party] Timeout waiting for ssh access
Hi all, Does anybody have any idea about this problem? Since Ubuntu does not have /etc/sysconfig/network-scripts/ifcfg-*, the script is obviously assuming a Fedora-like filesystem structure. We have tried to use CentOS, but we still got the same error. Thanks. On 08/24/2015 09:19 PM, Xie, Xianshan wrote: Hi all, I'm still struggling to set up a nodepool env, and I got the following error messages: -- ERROR nodepool.NodeLauncher: Exception launching node id: 13 in provider: local_01 error: Traceback (most recent call last): File /home/fujitsu/xiexs/nodepool/nodepool/nodepool.py, line 405, in _run dt = self.launchNode(session) File /home/fujitsu/xiexs/nodepool/nodepool/nodepool.py, line 503, in launchNode timeout=self.timeout): File /home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py, line 50, in ssh_connect for count in iterate_timeout(timeout, ssh access): File /home/fujitsu/xiexs/nodepool/nodepool/nodeutils.py, line 42, in iterate_timeout raise Exception(Timeout waiting for %s % purpose) Exception: Timeout waiting for ssh access WARNING nodepool.NodePool: Deleting leaked instance d-p-c-local_01-12 (aa6f58d9-f691-4a72-98db-6add9d0edc1f) in local_01 for node id: 12 -- And meanwhile, in the console.log which records the info for launching this instance, there is also an error as follows: -- + sed -i -e s/^\(DNS[0-9]*=[.0-9]\+\)/#\1/g /etc/sysconfig/network-scripts/ifcfg-*^M sed: can't read /etc/sysconfig/network-scripts/ifcfg-*: No such file or directory^M ... cloud-init-nonet[26.16]: waiting 120 seconds for network device -- I have tried to figure out what's causing this error: 1. mounted image.qcow2 and then checked the network configuration for this instance: $ cat etc/network/interfaces.d/eth0.cfg auto eth0 iface eth0 inet dhcp $ cat etc/network/interfaces auto lo iface lo inet loopback source /etc/network/interfaces.d/*.cfg It seems good. 2. But indeed, the directory named /etc/sysconfig/network-scripts/ifcfg-* does not exist. 
And I don't understand why it attempts to check this configuration file, because my instance is specified as Ubuntu, not RHEL. So, could you give me some tips to work this out? Thanks in advance. Xiexs __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [fuel] Branching strategy vs feature freeze
Sure, TL;DR was actually hidden in the middle of the email, here's an even shorter version: 0) we're suffering from closing master for feature work for too long 1) continuously rebased future branch is most likely a no-go 2) short FF (SCF and stable branch after 2 weeks) is an option for 8.0 3) short FF with stable in a separate internal gerrit was also proposed 4) merits and cost of enabling CI setup for private forks should be carefully considered independently from other options HTH -DmitryB On Tue, Aug 25, 2015 at 12:01:26PM +0200, Roman Prykhodchenko wrote: Dmitry, thank you for a well described plan. May I please ask you for a little TL;DR excerpt of what’s going to be changed, because I’m afraid some folks may get lost or may not have enough time to analyze it deeply (I actually see that happening but I won’t do fingerpointing :) ). - romcheg 24 серп. 2015 о 20:29 Dmitry Borodaenko dborodae...@mirantis.com wrote: We have several problems with Fuel branching strategy that have become enough of a bottleneck to consider re-thinking our whole release management process. In this email, I will describe the problems, propose a couple of alternative strategies, and try to analyze their relative merits and associated risks. I have my opinions and preferences, but I will try my best to objectively compare all available options. My goal is to improve efficiency of the existing team and make it significantly easier for new people to contribute to Fuel, even when their schedule and their agenda is not 100% aligned with those of Fuel core team and Mirantis OpenStack. It is essential for the new process to be acceptable for Fuel contributors, and it is just as essential to reach a consensus quickly: with Fuel 7.0 Hard Code Freeze less than two weeks away [0], we're already late with planning 8.0, and we absolutely must have the whole plan for 8.0 before Fuel 7.0 is released (September 24). 
[0] https://wiki.openstack.org/wiki/Fuel/7.0_Release_Schedule I propose a time-bound rough consensus to make this decision: raise all concerns and risks before end of this week (Friday, August 28), propose and discuss ways to address the raised concerns, and make final decision before end of next week (Friday, September 4). Here's how Fuel versions, branches, and release milestones work now. Major Fuel version corresponds to an OpenStack release. For example, Fuel 7.0 is the first version to support Kilo. Minor version denotes a feature release. For example, Fuel 6.1 is still based on Juno but contains new features relative to Fuel 6.0. Tiny versions (e.g. 5.1.1) and maintenance updates (e.g. 6.1-mu-1) include only bugfixes. Most Fuel development happens on the master branch. On Hard Code Freeze, a stable branch is created in all Fuel git repositories (e.g. stable/7.0 will be created on September 3). After that, all changes targeted to that release series must be proposed, reviewed, and merged to master before they are proposed for any stable branches [1]. One release series and one stable branch is created per minor Fuel version (e.g. stable/6.1 for 6.1.x). [1] https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series Fuel release cycle starts from Hard Code freeze of the previous release: - Hard Code Freeze: stable branch created, master branch is open for all changes - Design Complete: all release features have specs merged in fuel-specs - Feature Freeze: feature changes no longer accepted in master branch - Soft Code Freeze: medium and lower priority bugfixes no longer accepted in master branch Looks similar to the way OpenStack branch model works [2], but there's an important difference. [2] https://wiki.openstack.org/wiki/Branch_Model In OpenStack, master branch is closed for feature commits for 2 weeks per release [3], or 1/13th of the whole 26-week release cycle. 
[3] https://wiki.openstack.org/wiki/Liberty_Release_Schedule In Fuel, master branch remains closed for almost half of the release cycle: 5.0 -- 32 of 81 days 5.1 -- 63 of 119 days 6.0 -- 35 of 93 days 6.1 -- 85 of 180 days 7.0 -- 42 of 93 days (planned) This renders it unusable as an integration branch: if you are bound by a schedule that is not aligned with Fuel release milestones and have more changes than a single commit which you could keep rebasing until Fuel master is open again, you're better off merging your changes into your own integration branch (i.e. fork Fuel). The same problem is even worse if you're working on the next OpenStack release. Even when Fuel master is open, it's developed and tested against latest stable release of OpenStack. For example, even though OpenStack developers started working on Liberty features in May, reflecting that work in Fuel master is blocked until September. There are 4 partially overlapping solutions to this problem: 1) Future branch: create a future integration branch on FF, rebase it weekly onto master (or merge weekly
Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool
Ramy, Anita, I am in the process of adding an OpenStack project [1] for CI Watch. For the time being, adding it to the big tent is out of scope for the project. I look forward to discussing CI monitoring solutions during next Tuesday's meeting. Can we make a list of all similar efforts and collect links to where we can find out more about them? Patrick East has scoreboard [2]. Bob Ball linked to a running instance of scoreboard [3], but the link is not working for me. Is it running somewhere else, or is it not available at the moment? I do not know of other similar projects or where to find information about them. I am interested to see what others are doing. Regards, Skyler [1] https://review.openstack.org/#/c/216840/ [2] https://github.com/stackforge/third-party-ci-tools/tree/master/monitoring/scoreboard [3] http://zuul.openstack.xenproject.org/scoreboard/ On 08/25/2015 14:45, Asselin, Ramy wrote: Hi Skyler, Very nice tool! When do you plan to open source it? Are you considering adding it to the OpenStack big-tent [1]? There are a few tools being worked on that provide different information [2][3][4]. It would be nice to consolidate and invest collective effort into one tool. It would be great to meet and discuss in the third party meeting [5], as Anita suggested. Are you available next Monday or Tuesday? Thanks! 
Ramy [1] http://docs.openstack.org/infra/manual/creators.html [2] http://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/monitoring/lastcomment-scoreboard [3] http://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/monitoring/scoreboard [4] http://git.openstack.org/cgit/stackforge/radar/tree/ [5] https://wiki.openstack.org/wiki/Meetings/ThirdParty#Weekly_Third_Party_meetings -Original Message- From: Anita Kuno [mailto:ante...@anteaya.info] Sent: Monday, August 24, 2015 5:44 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool On 08/24/2015 07:59 PM, Skyler Berg wrote: Hi all, I am pleased to announce CI Watch [1], a CI monitoring tool developed at Tintri. For each OpenStack project with third-party CI's, CI Watch shows the status of all CI systems for all recent patch sets on a single dashboard. CI maintainers can use this tool to pinpoint when errors began and to find other CI's affected by the similar issues. Core team members can find which vendor CI systems are failing and determine when breaking changes hit their projects. The project dashboards provide access to all relevant logs and reviews, simplifying the process of investigating failures. CI Watch should also create more transparency within the third-party CI ecosystem. The health of all CI's is now visible to everyone in the community. We hope that by giving everyone this visibility we will make it easier for anyone to find and address issues on CI systems. Any feedback would be appreciated. We plan to open source this project soon and welcome contributions from anyone interested. For the moment, any bugs, concerns, or ideas can be sent to openstack-...@tintri.com. 
[1] ci-watch.tintri.com Best, Skyler Berg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hi Skyler: Thanks for your interest in participating in the third party segment of the openstack community. We have a number of people working on dashboards for ci systems. We are working on having infra host one: https://review.openstack.org/#/c/194437/ which is a tool currently hosted by one of our ci operators, Patrick East, which is open source. Can I suggest you attend a third party meeting and perhaps meet some of the other operators and collaborate with them? We don't have any lack of people starting tools; what we lack is a tool which will be maintained. Thanks for your interest Skyler, Anita. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [congress] simulation example in the doc not working
Hello, In the simulation examples at http://congress.readthedocs.org/en/latest/enforcement.html?highlight=simulation , the action_policy is replaced with null. However, null is not considered a valid policy, as I keep receiving 400 errors. Could someone let me know the easiest way to get around this error? How can I create a simple action policy just for test purposes for now? Thanks, -- Su Zhang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [congress] simulation example in the doc not working
Hi Su, Try using 'action' instead of 'null'. 'action' is a built-in policy. It's possible that seemingly insignificant changes made the API call more strict and now force the policy you provide to actually exist. Let me know if that doesn't work, and I'll investigate further. Tim On Tue, Aug 25, 2015 at 5:18 PM Su Zhang westlif...@gmail.com wrote: Hello, In simulation examples at http://congress.readthedocs.org/en/latest/enforcement.html?highlight=simulation , the action_policy is replaced with null. However, null is not considered as a valid policy as I keep receiving 400 errors. Could someone let me know the easiest way to get around this error? How to create a simple action policy just for test purpose as of now? Thanks, -- Su Zhang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Murano] Documentation on how to Start Contributing
Hi, Vahid! - Modified /home/stack/workspace/murano/devstack/plugin.sh based on Gosha's suggestion and replaced MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} with MURANO_REPO=/home/stack/workspace/murano. Unfortunately, I'm not sure that using a local path as MURANO_REPO will work, at least because I've never heard of a use case like this. But it looks like you have given it a try. So, I suggest you make sure that you executed the ./unstack and ./clean.sh scripts before starting the deployment. If you are using a clean host, this is not needed. Also, I think it is not necessary to change the plugin's code. You can define MURANO_REPO=/home/stack/… in your localrc/local.conf file and use enable_plugin murano https://github.com/openstack/murano As for suggestions on how to test your local changes, I see two easy ways to do it without using local repositories. 1. Deploy devstack with murano from master as is, using the plugin or libs, and replace the old files in /opt/stack/murano with the new ones that you changed. After this you need to restart the murano services. 2. Upload your changes to gerrit, and use MURANO_REPO=https://review.openstack.org/openstack/murano and MURANO_BRANCH=refs/changes/…/.../…. Both methods are good. But this errors on me when I run ./stack.sh: ERROR: openstack Conflict occurred attempting to store user - Duplicate Entry (HTTP 409) (Request-ID: req-805b487c-44fe-4155-8349-65362c2a34ee) It would be really good if you could give more information (I mean the full, or at least the last, part of the deployment log). Best Regards! -- Victor Ryzhenkin Junior QA Engineer freerunner on #freenode On August 26, 2015 at 2:49:46, Vahid S Hashemian (vahidhashem...@us.ibm.com) wrote: OK. So I'm still having some issues with this. 
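Victor's second suggestion above (testing an unmerged change straight from Gerrit) might look like this in local.conf. This is only an illustrative sketch; the refs/changes path is left elided as in the original, so fill in whatever Gerrit shows for your patch set:

```ini
[[local|localrc]]
enable_plugin murano https://github.com/openstack/murano
MURANO_REPO=https://review.openstack.org/openstack/murano
MURANO_BRANCH=refs/changes/…/…/…
```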
Here's what I have done: - Followed instructions on http://murano.readthedocs.org/en/latest/install/development.html up to step 4 * cloned murano into /home/stack/workspace/murano * cloned devstack into /home/stack/devstack - Modified /home/stack/workspace/murano/devstack/plugin.sh based on Gosha's suggestion and replaced MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} with MURANO_REPO=/home/stack/workspace/murano. - Modified /home/stack/devstack/local.conf and added enable_plugin murano /home/stack/workspace/murano But this errors on me when I run ./stack.sh: ERROR: openstack Conflict occurred attempting to store user - Duplicate Entry (HTTP 409) (Request-ID: req-805b487c-44fe-4155-8349-65362c2a34ee) I appreciate some clarification. Thanks. Regards, - Vahid Hashemian, Ph.D. Advisory Software Engineer, IBM Cloud Labs From: Vahid S Hashemian/Silicon Valley/IBM@IBMUS To: OpenStack Development Mailing List \(not for usage questions\) openstack-dev@lists.openstack.org Date: 07/10/2015 04:02 PM Subject: Re: [openstack-dev] [Murano] Documentation on how to Start Contributing Thanks Nikolay and Gosha. As Gosha mentioned I'd like to be able to integrate my local changes to Murano into my devstack installation. I figured for UI changes I can probably make the changes directly to the file and restart my apache2 service. However, I am looking for an easy way to test back-end changes, like if I had to modify how a particular CLI behaves, and test it in my devstack environment. Gosha, thanks for the info you sent. Can you clarify something though? In local.conf there is a line enable_plugin murano https://github.com/openstack/murano; pointing to the Murano's github repository. In plugin.sh, on the other hand, there is a line MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} that is also another configuration for the Murano repository. Are these two related? Should I modify both for the purpose I mentioned above? 
Also, I cannot find a murano.git file on my server (as referenced in line 17 of plugin.sh). Should I use something like /home/stack/murano/.git instead? Thank you again for your help. Regards, - Vahid Hashemian, Ph.D. Advisory Software Engineer, IBM Cloud Labs From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 07/10/2015 07:45 AM Subject: Re: [openstack-dev] [Murano] Documentation on how to Start Contributing Hi, If I understand correctly, you want to be able to install modified Murano from your own repository. There is a devstack integration script in Murano repository which does this. Here are lines where you can point to specific repository for Murano installation in devstack: https://github.com/openstack/murano/blob/master/devstack/plugin.sh#L17-L18 Installation procedure in the README.rst file in the
Re: [openstack-dev] [neutron][lbaas] L7 - Tasks
Hi Evgeny, Of course we would love to have L7 in Liberty, but that window is closing on 8/31. We usually monitor the progress (via Stephen) at the weekly Octavia meeting. Stephen indicated that we won’t get it before the L3 deadline, and with all the open items it might still be tight. I am wondering if you can advise on that. Thanks, German From: Evgeny Fedoruk evge...@radware.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, August 25, 2015 at 9:33 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: [openstack-dev] [neutron][lbaas] L7 - Tasks Hello, I would like to know if there is a plan for L7 extension work in Liberty. There is an extension patch-set here: https://review.openstack.org/#/c/148232/ We will also need to do CLI work, which I have started and will commit an initial patch-set for soon. A reference implementation was started by Stephen here: https://review.openstack.org/#/c/204957/ and a tempest tests update should be done as well. I do not know if it was discussed at IRC meetings. Please share your thoughts about it. Regards, Evg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Murano] Documentation on how to Start Contributing
OK. So I'm still having some issues with this. Here's what I have done: - Followed the instructions on http://murano.readthedocs.org/en/latest/install/development.html up to step 4 * cloned murano into /home/stack/workspace/murano * cloned devstack into /home/stack/devstack - Modified /home/stack/workspace/murano/devstack/plugin.sh based on Gosha's suggestion and replaced MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} with MURANO_REPO=/home/stack/workspace/murano. - Modified /home/stack/devstack/local.conf and added enable_plugin murano /home/stack/workspace/murano But this errors on me when I run ./stack.sh: ERROR: openstack Conflict occurred attempting to store user - Duplicate Entry (HTTP 409) (Request-ID: req-805b487c-44fe-4155-8349-65362c2a34ee) I would appreciate some clarification. Thanks. Regards, - Vahid Hashemian, Ph.D. Advisory Software Engineer, IBM Cloud Labs From: Vahid S Hashemian/Silicon Valley/IBM@IBMUS To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 07/10/2015 04:02 PM Subject: Re: [openstack-dev] [Murano] Documentation on how to Start Contributing Thanks Nikolay and Gosha. As Gosha mentioned, I'd like to be able to integrate my local changes to Murano into my devstack installation. I figured for UI changes I can probably make the changes directly to the file and restart my apache2 service. However, I am looking for an easy way to test back-end changes, like if I had to modify how a particular CLI behaves, and test it in my devstack environment. Gosha, thanks for the info you sent. Can you clarify something though? In local.conf there is a line enable_plugin murano https://github.com/openstack/murano pointing to Murano's GitHub repository. In plugin.sh, on the other hand, there is a line MURANO_REPO=${MURANO_REPO:-${GIT_BASE}/openstack/murano.git} that is also another configuration for the Murano repository. Are these two related? Should I modify both for the purpose I mentioned above? 
Also, I cannot find a murano.git file on my server (as referenced in line 17 of plugin.sh). Should I use something like /home/stack/murano/.git instead? Thank you again for your help. Regards, - Vahid Hashemian, Ph.D. Advisory Software Engineer, IBM Cloud Labs From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 07/10/2015 07:45 AM Subject: Re: [openstack-dev] [Murano] Documentation on how to Start Contributing Hi, If I understand correctly, you want to be able to install modified Murano from your own repository. There is a devstack integration script in the Murano repository which does this. Here are the lines where you can point to a specific repository for Murano installation in devstack: https://github.com/openstack/murano/blob/master/devstack/plugin.sh#L17-L18 The installation procedure is in the README.rst file in the devstack folder of the murano repository. Thanks Gosha On Thu, Jul 9, 2015 at 9:54 PM, Nikolay Starodubtsev nstarodubt...@mirantis.com wrote: Hi, Can you describe what problems you have with bringing code changes into a live Devstack environment and testing them? If you want a real-time QA experience you can ask your questions at #murano on freenode. Nikolay Starodubtsev Software Engineer Mirantis Inc. Skype: dark_harlequine1 2015-07-10 2:32 GMT+03:00 Vahid S Hashemian vahidhashem...@us.ibm.com: Hello, I am wondering if there is any documentation for new contributors that explains how to get up-to-speed with Murano development; something that covers how to bring code changes into a live Devstack environment, and test them. I've looked at the Murano documentation online and have not been able to find it. Any pointer is very much appreciated. Thanks. - Vahid Hashemian, Ph.D. 
Advisory Software Engineer, IBM Cloud Labs __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
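The relationship between the two settings asked about in this thread comes down to shell default expansion: `${MURANO_REPO:-${GIT_BASE}/openstack/murano.git}` in plugin.sh uses the upstream clone URL only when MURANO_REPO is unset or empty, so a value set earlier (e.g. via local.conf, which devstack sources before the plugin runs) takes precedence. A minimal Python sketch of that lookup rule follows; the git_base default used here is an assumption for illustration, not necessarily devstack's actual GIT_BASE value:

```python
def murano_repo(env, git_base="https://git.openstack.org"):
    # Mirrors shell's ${MURANO_REPO:-${GIT_BASE}/openstack/murano.git}:
    # the default applies only when the variable is unset or empty.
    return env.get("MURANO_REPO") or git_base + "/openstack/murano.git"

print(murano_repo({}))  # default upstream clone URL
print(murano_repo({"MURANO_REPO": "/home/stack/workspace/murano"}))  # local checkout wins
```

So overriding MURANO_REPO alone (as Gosha suggests) is enough; editing the default in plugin.sh is not required.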
[openstack-dev] [monasca] Monasca weekly meetings in IRC, datetime proposal
We have been asked to host the Monasca weekly meetings in IRC. The proposal is to run the weekly meeting on Wednesdays at 1500 UTC in the IRC channel openstack-meeting-3. A review has been submitted at https://review.openstack.org/#/c/216904/1/meetings/monasca-team-meeting.yaml. Please +1 if you are OK with the proposed time slot. If not, please -1 and propose a new time slot. OpenStack meetings are hosted in specific IRC channels and archived. Unfortunately, there wasn't an existing channel available to continue hosting the meeting on Tuesdays. Hopefully, Wednesday will continue to work for everyone. Regards --Roland __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)
I feel your pain and where you are coming from. We're all of us trying to make things better in the world - OpenStack, Debian, Python and so on. And I understand the limitations that Debian has vs the upstream Python ecosystem, and the challenges you face working in that environment. But having said that, the tone and content of your email felt quite accusatory to me and that's unnecessary - it is, to be blunt, nasty. We can do much better than that when resolving conflicts. I would greatly appreciate it if, should future conflicts come up, you would do so. I have seen your follow-up email, but since you've raised the points here I feel compelled to proffer a different explanation of the issues you're reporting. On 26 August 2015 at 01:42, Thomas Goirand z...@debian.org wrote: Hi, This is a special message for Robert Collins, as I believe he's the one responsible for the breakage. If it's not your fault, then I'm sorry, and whoever did the breakage should read what's below carefully, so that it doesn't happen again. meta: I find it hard to read emails 'written specially to me because I broke something' in a calm and dispassionate manner. Whether I did it or not, it predisposes me to defensiveness and raises my heart rate and blood pressure. E.g. adrenaline. I think for my own health I'm going to add a rule to killfile such mailers in the future: life is too short. If someone wants to do a postmortem on something I'm involved in - great. If they decide to open with such a biased, blame-based approach, then I'm not interested. - Ok, so onto the body. The mock API was broken by changes to the copy in the CPython standard library during 3.4 and 3.5. I and others worked hard to remediate the feature limits introduced by those changes when they were discovered by the backporting process to 'mock' in 1.1 and above. 
mock 1.1 was a minor version change rather than a major version change because at the time of the initial sync I did not realise how widespread the impact of the changes from the stdlib would be. I had personally reviewed them all, including test changes, and none seemed contentious. I was wrong. The Python users of the internet have already told me this in technicolour, with diagrams. However, having incurred that cost, we haven't had any /new/ gratuitous incompatibilities added, and we've rolled back the really big ones - by fixing the stdlib to make it better. Except for the bad assert detection one - see below. Robert, while I do appreciate all of your work, and your technically sound contributions, I am having a hard time with your habit of regularly breaking backward AND forward API compatibility. Yes, sometimes we unfortunately must do it. But this should be a very rare exception, and you've been doing it over and over again, making package maintainers' lives miserable. Mock has been an exceptional case in my experience. But where else have I done this? Backwards compat is 'deal with older inputs', which pbr does just fine for *defined inputs*. Mock 1.3 is also backwards compatible with *defined older inputs*. The one case where it's not, assert methods on mocks that were not defined, has been hugely contentious, possibly burnt out a cPython core dev (who quite literally said 'I'm outta here' in the 100-message mailing thread about it), and has been widely *welcomed* by users because it finds actual genuine bugs in their test suites. Forwards compat is 'deal with newer inputs gracefully'. Both pbr and mock accept newer inputs gracefully: they error, and callers can use that to detect an old version and provide whatsoever fallback they like. Just like handling of epoll on Linux versions that don't have it. This first happened with PBR. Kilo can't use >= 1.x. This is due to Kilo having *inappropriate* version caps on its dependencies. 
Which we've been busy unwinding and fixing infrastructure this cycle to avoid having it happen again in Liberty. The errors from projects in kilo running with pbr >= 1.x are due to pkg_resources entry_points validating the declared dependencies from the package, and the packages having a pbr<1 *defensive dependency*. This is now recognised as a mistaken pattern - see the requirements management spec where we're trying to avoid it. pbr's Python API and packaging behaviour is entirely compatible with all of kilo. The set of things that pbr 0.11 accepts and pbr 1.0+ doesn't is, to the best of my knowledge, empty. , and Liberty can't use < 1.x. This is because pbr 1.x offers features that Liberty needs. That's how software moves forward: you add the feature, and someone else uses it and declares a dependency on your version. So I can't upload PBR 1.3.0 to Sid. This has been dealt with because I am the maintainer of PBR, but really, it shouldn't have happened. How come for years, upgrading PBR always worked, and suddenly, when you start contributing to it, it breaks backward compat? I'm having a hard time to understand what's the
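The "bad assert detection" change discussed in this thread is easy to demonstrate; this sketch uses the stdlib unittest.mock, where the same strictness landed (Python 3.5+, matching mock 1.1 and later):

```python
from unittest import mock  # stdlib mock; mock >= 1.1 behaves the same way

m = mock.Mock()
m.send(42)

# A real assert method verifies the recorded call.
m.send.assert_called_with(42)

# Before the change, a typo like this just returned a new child Mock and
# the "assertion" silently passed. Now attribute names starting with
# 'assert' raise AttributeError, surfacing the latent test bug.
try:
    m.assert_called_wiht(42)
    typo_caught = False
except AttributeError:
    typo_caught = True
print("typo caught:", typo_caught)  # prints: typo caught: True
```

This is exactly the class of "actual genuine bugs in their test suite" the change was welcomed for finding.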
Re: [openstack-dev] [puppet][keystone] Keystone resource naming with domain support - no '::domain' if 'Default'
+1 from me as well. On Tue, Aug 25, 2015 at 2:30 PM, Rich Megginson rmegg...@redhat.com wrote: This concerns the support of the names of domain scoped Keystone resources (users, projects, etc.) in puppet. At the puppet-openstack meeting today [1] we decided that puppet-openstack will support Keystone domain scoped resource names without a '::domain' in the name, only if the 'default_domain_id' parameter in Keystone has _not_ been set. That is, if the default domain is 'Default'. In addition: * In the OpenStack L release, if 'default_domain_id' is set, puppet will issue a warning if a name is used without '::domain'. * In the OpenStack M release, puppet will issue a warning if a name is used without '::domain', even if 'default_domain_id' is not set. * In N (or possibly, O), resource names will be required to have '::domain'. The current spec [2] and current code [3] try to support names without a '::domain' in the name, in non-default domains, provided the name is unique across _all_ domains. This will have to be changed in the current code and spec. [1] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-08-25-15.01.html [2] http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html [3] https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L217 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
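The rule being agreed on amounts to title parsing: a resource title either carries an explicit '::domain' suffix or falls back to the default domain (with warnings, then an error, per the schedule above). A hypothetical Python sketch of that parsing, not the actual puppet-keystone provider code:

```python
def split_title(title, default_domain="Default"):
    """Split 'name::domain' into (name, domain).

    Bare names fall back to the default domain; per the plan above,
    that form is deprecated and eventually becomes an error.
    """
    name, sep, domain = title.rpartition("::")
    if not sep:
        return title, default_domain
    return name, domain

print(split_title("admin::Default"))  # ('admin', 'Default')
print(split_title("admin"))           # ('admin', 'Default') - deprecated bare form
print(split_title("svc_user::ldap"))  # ('svc_user', 'ldap')
```

Note that because the split is on the last '::', the name itself may contain '::' without ambiguity.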
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core: Sam Betts
Nice to see the team expand. Thank you Sam for all your work. Well deserved. :) -Chris NobodyCam On Tue, Aug 25, 2015 at 11:35 AM, Sam Betts (sambetts) sambe...@cisco.com wrote: Thanks everyone, proud to be on the team! Sam On 25/08/2015 12:32, Lucas Alvares Gomes lucasago...@gmail.com wrote: Congrats! Well deserved On Tue, Aug 25, 2015 at 12:24 PM, Yuiko Takada yuikotakada0...@gmail.com wrote: Sam, congrats and welcome! Yuiko Takada On 2015/08/25 19:53, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please join me in welcoming Sam to our team! He has been doing very smart reviews recently, has been contributing core features, and has expressed clear interest in the ironic-inspector project. Thanks and welcome! __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] PLEASE READ: VPNaaS API Change - not backward compatible
On 26/08/15 07:46, Paul Michali wrote: Previous post only went to the dev list. Ensuring both and adding a bit more... On Tue, Aug 25, 2015 at 8:37 AM Paul Michali p...@michali.net wrote: Xav, The discussion is very important, which is why both Kyle and I have been posting these questions on the operator (and dev) lists. Unfortunately, I wasn't subscribed to the operators' list and missed some responses to Kyle's message, which were posted only to that list. As a result, I had an incomplete picture and posted this thread to see if it was OK to do this without backward compatibility, based on the (incorrect) assumption that there was no production use. That is corrected now, and I'm getting all the messages and, thanks to everyone, have input on the messages I missed. So given that, let's try a reset on the discussion, so that I can better understand the issues... Do you feel that not having backward compatibility (but having a migration path) would seriously affect you, or would it be manageable? Is there pain for the customers beyond learning about the new API changes and capabilities (something that would apply whether there is backward compatibility or not)? Another implication of not having backwards compatibility would be that end-users would need to immediately switch to using the new API once the migration occurs, versus doing so on their own time frame. Would this be a concern for you (customers not having the convenience of delaying their switch to the new API)? I was thinking that backward incompatible changes would adversely affect people who were using client scripts/apps to configure (a large number of) IPsec connections, where they'd have to have client scripts/apps in place to support the new API. Which is more of a logistics issue, and could be managed, IMHO. Would there be customers that would fall into that category, or are customers manually configuring IPSec connections such that they could just use the new API? 
Are there other adverse effects of not having backward compatibility that need to be considered? So far, I'm identifying one effect that is more of a convenience (although a nice one at that), and one effect that can be avoided by planning for the upgrade. I'd like to know if I'm missing something more important to operators. I'd also like to know if we think there is a user base large enough (and how large is large?) that would warrant going through the complexity and risk to support both API versions simultaneously. Regards, Paul Michali (pc_m) Specifically, we're talking about the VPN service create API no longer taking a subnet ID (instead an endpoint group is created that contains the subnet ID), and the IPSec site-to-site connection create API no longer taking a list of peer CIDRs, but instead taking a pair of endpoint group IDs (one for the local subnet(s) formerly specified by the service API, and one for peer CIDRs). Regards, Paul Michali (pc_m) On Mon, Aug 24, 2015 at 5:32 PM Xav Paice xavpa...@gmail.com wrote: I'm sure I'm not the only one that finds the vast amount of traffic in the dev list to be completely unmanageable to catch the important messages - the ops list is much lower traffic, and as an operator I pay a bunch more attention to it. The discussion of deprecating an API is something that HAS to be discussed with operators, on the operators list or highlighted somehow so that people get attention drawn to the message. Let's be clear - I fully appreciate the extra effort that would be required in supporting both the new and the old APIs, and also would absolutely love to see the new feature. I do think we need to be able to support our customers in the transition, and extra pain for them results in lower uptake of the services we provide. 
On 25 August 2015 at 09:27, Xav Paice xavpa...@gmail.com wrote: Also: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007928.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007891.html On 25 August 2015 at 09:09, Kevin Benton blak...@gmail.com wrote: It sounds like you might have missed a couple responses: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007903.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007910.html On Mon, Aug 24, 2015 at 1:53 PM, Paul Michali p...@michali.net wrote: Xav, In the email, there were no responses of anyone using VPNaaS *in a production environment*. Summary from responders: Erik M - Tried in Juno with no success. Will retry. Edgar M - said no reports from operators about VPNaaS code Sam S - Using VPN in VMs and not VPNaaS Kevin B - Not used. Use VMs instead Sriram S - Indicating not used. If I misread the responses, or if someone has not spoken up, right now is the time to let us know of your situation and the impact this proposal would have on
Re: [openstack-dev] PLEASE READ: VPNaaS API Change - not backward compatible
Xav, The discussion is very important, which is why both Kyle and I have been posting these questions on the operator (and dev) lists. Unfortunately, I wasn't subscribed to the operators' list and missed some responses to Kyle's message, which were posted only to that list. As a result, I had an incomplete picture and posted this thread to see if it was OK to do this without backward compatibility, based on the (incorrect) assumption that there was no production use. That is corrected now, and I'm getting all the messages and, thanks to everyone, have input on the messages I missed. So given that, let's try a reset on the discussion, so that I can better understand the issues... Do you feel that not having backward compatibility (but having a migration path) would seriously affect you, or would it be manageable? Is there pain for the customers beyond learning about the new API changes and capabilities (something that would apply whether there is backward compatibility or not)? I was thinking that backward incompatible changes would adversely affect people who were using client scripts/apps to configure (a large number of) IPsec connections, where they'd have to have client scripts/apps in place to support the new API. Would there be customers that would fall into that category, or are customers manually configuring IPSec connections such that they could just use the new API? Are there other adverse effects of not having backward compatibility that need to be considered? Specifically, we're talking about the VPN service create API no longer taking a subnet ID (instead an endpoint group is created that contains the subnet ID), and the IPSec site-to-site connection create API no longer taking a list of peer CIDRs, but instead taking a pair of endpoint group IDs (one for the local subnet(s) formerly specified by the service API, and one for peer CIDRs). 
Regards, Paul Michali (pc_m) On Mon, Aug 24, 2015 at 5:32 PM Xav Paice xavpa...@gmail.com wrote: I'm sure I'm not the only one that finds the vast amount of traffic in the dev list to be completely unmanageable to catch the important messages - the ops list is much lower traffic, and as an operator I pay a bunch more attention to it. The discussion of deprecating an API is something that HAS to be discussed with operators, on the operators list or highlighted somehow so that people get attention drawn to the message. Let's be clear - I fully appreciate the extra effort that would be required in supporting both the new and the old APIs, and also would absolutely love to see the new feature. I do think we need to be able to support our customers in the transition, and extra pain for them results in lower uptake of the services we provide. On 25 August 2015 at 09:27, Xav Paice xavpa...@gmail.com wrote: Also: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007928.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007891.html On 25 August 2015 at 09:09, Kevin Benton blak...@gmail.com wrote: It sounds like you might have missed a couple responses: http://lists.openstack.org/pipermail/openstack-operators/2015-August/007903.html http://lists.openstack.org/pipermail/openstack-operators/2015-August/007910.html On Mon, Aug 24, 2015 at 1:53 PM, Paul Michali p...@michali.net wrote: Xav, In the email, there were no responses of anyone using VPNaaS *in a production environment*. Summary from responders: Erik M - Tried in Juno with no success. Will retry. Edgar M - said no reports from operators about VPNaaS code Sam S - Using VPN in VMs and not VPNaaS Kevin B - Not used. Use VMs instead Sriram S - Indicating not used. If I misread the responses, or if someone has not spoken up, right now is the time to let us know of your situation and the impact this proposal would have on your use of VPNaaS IPSec site-to-site connections. 
The request here is that if operators are not using this in a production deployment where they need backward compatibility, then we'd like to avoid having to provide the complexity needed to support backward compatibility. In the proposal, there are two APIs that would be changed with this enhancement. It's detailed in reference [1], and I can elaborate, if needed. Please keep in mind that users/operators using previous versions can upgrade to the new version, and any existing VPNaaS configuration will be automatically migrated to the new table structures, so that existing IPSec connections would continue to operate with the new release. The proposal would not support using the older APIs, once the new APIs are available. Client apps/scripts would need to be updated to use the new API (the neutron client and the Horizon dashboard will be updated as part of the overall effort). I hope that clarifies the discussion. Regards, Paul Michali (pc_m) On Mon, Aug 24, 2015 at 3:50 PM Xav Paice xavpa...@gmail.com wrote: On
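To make the shape of the change under discussion concrete, here is a sketch of the create-request bodies before and after, written as Python dicts. The field names (endpoint_group, local_ep_group_id, peer_ep_group_id) reflect my reading of the proposal and may not match the final implementation exactly:

```python
# Old style: subnet bound to the service, peer CIDRs on the connection.
old_service = {"vpnservice": {"router_id": "r1", "subnet_id": "s1"}}
old_conn = {"ipsec_site_connection": {"peer_cidrs": ["10.2.0.0/24"]}}

# New style: both ends of the tunnel are described by endpoint groups,
# created first, then referenced by ID from the connection.
local_group = {"endpoint_group": {"type": "subnet", "endpoints": ["s1"]}}
peer_group = {"endpoint_group": {"type": "cidr", "endpoints": ["10.2.0.0/24"]}}
new_service = {"vpnservice": {"router_id": "r1"}}  # no subnet_id any more
new_conn = {"ipsec_site_connection": {"local_ep_group_id": "eg-local",
                                      "peer_ep_group_id": "eg-peer"}}
```

This is why client scripts cannot be source-compatible across the change: both create calls lose fields and gain references to a new resource type, which is the backward-compatibility cost being weighed in this thread.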
Re: [openstack-dev] [release] Liberty release branches / [non] capping process
Thierry Carrez wrote: [...] 1. Enable master-stable cross-check 2. Release Oslo, make stable branches for Oslo 2.1 Converge constraints 3. liberty-3 / FF / soft requirements freeze 4. hard requirements freeze 5. RC1 / make stable branches for services 6. Branch requirements, disable cross-check 7. Unfreeze requirements I discussed this with Robert this morning on #openstack-relmgr-office and it appears the plan is still valid. It was also confirmed in the spec at http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html It still feels reasonable on the face of it, but we need to double-check it and expand on the details. In particular I'm wondering: * is there anything in the implemented constraints system that changes the deal here ? No. Some jobs are not running under the constrained dep system yet, but that doesn't make the plan invalid. * Is the new constraints system already set up to work on the upcoming stable/liberty branches ? We need to double-check, but by default the stable/liberty code branches should test with master requirements until stable/liberty requirements are branched out, at which point it should test stable/liberty code with stable/liberty requirements. * what did we exactly mean by master-stable cross-check ? For completeness, during the transition period (when we use master requirements for both master and stable/liberty branches) we need a job to gate proposed master requirements changes on stable/liberty test jobs in addition to master test jobs. Otherwise we may introduce a change in master requirements that breaks stable code branches. Once we have stable/liberty requirements branched out, we don't need that job anymore. * what did we exactly mean by Converge constraints ? We need to merge any lingering requirements bump in projects, before we create stable/liberty branches for them. 
With liberty-3 / FF being Thursday next week, we need to start implementing steps 1, 2 and 2.1 in the next 10 days, so we need to urgently check that this is still a valid plan (and any implementation detail). The most urgent is to figure out if we can have a master-stable cross-check enabled before we start cutting stable/liberty branches. Plan B (if we can't) is to apply extra caution to any master requirements change during the overlap period, without the gate safety net. -- Thierry Carrez (ttx) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][api] - attaching arbitrary key/value pairs to resources
On 08/25/2015 01:00 AM, Gal Sagie wrote: I agree with Doug and Kevin, i think it is very hard for Neutron to keep the pace in every area of networking abstraction, and i prefer this solution over code patching. I agree with Russell on the definition of Neutron's end goal, but what good can it provide if clouds stop using Neutron because it doesn't provide them the appropriate support, or better yet start solving these problems in creative ways that end up missing the entire point of Neutron (and then clouds stop using Neutron because they will blame it for the lack of interoperability). I think that this is a good enough middle solution, and as Armando suggested in the patch itself, we should work in a separate task towards making the users/developers/operators understand (either with documentation or other methods) that the correct end goal would be to standardize things in the API. Implementing it like nova-tags seems to me like a good way to prevent too much abuse. And as i mentioned in the spec [1], there are important use cases for this feature at the API level that are transparent to the backend implementation (multi-site OpenStack and mixed environments (for example Kuryr)). To be clear, I support the feature as long as it's documented that it's opaque to Neutron backends. My argument is about the general idea of arbitrary pass-through to backends, which you don't seem to be proposing. -- Russell Bryant __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
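Russell's condition can be stated concretely: tags are opaque strings that the API layer stores and filters on, and no backend driver ever interprets them. An illustrative sketch of that contract, not Neutron's actual schema or code:

```python
# Resources carry a set of opaque tags; the API layer is the only thing
# that ever reads them, and only for storage and filtering.
resources = [
    {"id": "net1", "tags": {"prod", "dmz"}},
    {"id": "net2", "tags": {"dev"}},
]

def filter_by_tag(resources, tag):
    # Pure string matching at the API layer: no tag is ever handed to a
    # backend driver for interpretation, which is what keeps the feature
    # interoperable across plugins.
    return [r["id"] for r in resources if tag in r["tags"]]

print(filter_by_tag(resources, "prod"))  # ['net1']
```

The contrast is with pass-through key/value pairs, where a backend would change behavior based on the values, which is the part Russell objects to.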
Re: [openstack-dev] [TripleO] Encapsulating logic and state in the client
Thinking about this further, the interesting question to me is how much logic we aim to encapsulate behind an API. For example, one of the simpler CLI commands we have in RDO-Manager (which is moving upstream[1]) is to run introspection on all of the Ironic nodes. This involves a series of commands that need to be run in order, and it can take upwards of 20 minutes depending on how many nodes you have. However, this only communicates with Ironic (and ironic inspector), so is it worth hiding behind an API? I am inclined to say that it is, so we can make the end result as easy to consume as possible, but I think it might be difficult to draw the line in some cases. The question then arises about what this API would look like. Generally speaking, I feel like it looks like a workflow API; it shouldn't offer many (or any?) unique features, rather it manages the process of performing a series of operations across multiple APIs. There have been attempts at doing this within OpenStack before in a more general case; I wonder what we can learn from those. This is where my head is too. The OpenStack-on-OpenStack thing means we get to leverage the existing tools, and users can leverage their existing knowledge of the products. But what I think an API will provide is guidance on how to achieve that (the big argument there being whether this should be done in an API or through documentation). It coaches new users and integrations on how to make all of the underlying pieces play together to accomplish certain things. To your question on that ironic call, I'm split on how I feel. On one hand, I really like the idea of the TripleO API being able to support an OpenStack deployment entirely on its own. You may want to go directly to some undercloud tools for certain edge cases, but for the most part you should be able to accomplish the goal of deploying OpenStack through the TripleO APIs. But that's not necessarily what TripleO wants to be. 
I've seen the sentiment of it only being tools for deploying OpenStack, in which case a single API isn't really what it's looking to do. I still think we need some sort of documentation to guide integrators instead of saying look at the REST API docs for these 5 projects, but that documentation is lighter weight than having pass through calls in a TripleO API. Unfortunately, as undesirable as these are, they're sometimes necessary in the world we currently live in. The only long-term solution to this is to put all of the logic and state behind a ReST API where it can be accessed from any language, and where any state can be stored appropriately, possibly in a database. In principle that could be accomplished either by creating a tripleo-specific ReST API, or by finding native OpenStack undercloud APIs to do everything we need. My guess is that we'll find a use for the former before everything is ready for the latter, but that's a discussion for another day. We're not there yet, but there are things we can do to keep our options open to make that transition in the future, and this is where tripleo-common comes in. I submit that anything that adds logic or state to the client should be implemented in the tripleo-common library instead of the client plugin. This offers a couple of advantages: - It provides a defined boundary between code that is CLI-specific and code that is shared between the CLI and GUI, which could become the model for a future ReST API once it has stabilised and we're ready to take that step. - It allows for an orderly transition when that happens - we can have a deprecation period during which the tripleo-common library is imported into both the client and the (future, hypothetical) ReST API. cheers, Zane. 
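The boundary being proposed can be sketched in miniature: logic and state live in a tripleo-common-style module importable by any frontend (CLI, GUI, a future ReST API), and the CLI plugin stays a thin wrapper. All names below are purely illustrative, not real tripleo-common APIs:

```python
# --- tripleo_common-style shared library (illustrative names only) ---
def plan_deployment(params):
    """All logic and state handling lives here, frontend-agnostic, so a
    GUI or ReST API can import and reuse it unchanged."""
    name = params.get("name") or "overcloud"
    return {"stack_name": name, "validated": True}

# --- thin CLI wrapper: parse args, delegate, present ---
def cli_deploy(argv):
    params = {"name": argv[0]} if argv else {}
    result = plan_deployment(params)
    print("deploying stack:", result["stack_name"])
    return result

cli_deploy(["mycloud"])  # prints: deploying stack: mycloud
```

The payoff Zane describes is exactly this separation: when a ReST API arrives, it wraps `plan_deployment` the same way `cli_deploy` does, and nothing has to migrate out of the client.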
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[1]: https://review.openstack.org/#/c/215186/3/gerrit/projects.yaml,cm
Re: [openstack-dev] [api] [docs] Generating API samples
On 08/24/2015 11:51 AM, Anne Gentle wrote: Hi all, I'm writing to find out how teams keep API sample requests and responses up-to-date. I know the nova team has a sample generator [1] that they've maintained for a few years now. Do other teams have something similar? If so, is your approach like the nova one?

sahara keeps examples for some of its calls in our repo, although they are not a full example of all calls. these have all been hand-crafted, and could use some updating to add examples for the missing calls. we are currently evaluating a new major version for our api[1], and are talking about creating an api directory in our specs repo to have more up-to-date samples, and a descriptive explanation, of our api. i'm guessing that initially these examples will be hand-crafted. regards, mike

[1]: https://review.openstack.org/#/c/212172/
[openstack-dev] mock 1.3 breaking all of Kilo in Sid (and other cases of this kind)
Hi, This is a special message for Robert Collins, as I believe he's the one responsible for the breakage. If it's not your fault, then I'm sorry, and whoever caused the breakage should read what's below carefully, so that it doesn't happen again.

Robert, while I do appreciate all of your work, and your technically sound contributions, I am having a hard time with your habit of regularly breaking backward AND forward API compatibility. Yes, sometimes we unfortunately must do it. But this should be a very rare exception, and you've been doing it over and over again, making package maintainers' lives miserable.

This first happened with PBR. Kilo can't use >= 1.x, and Liberty can't use < 1.x. So I can't upload PBR 1.3.0 to Sid. This has been dealt with because I am the maintainer of PBR, but really, it shouldn't have happened. How come upgrading PBR always worked for years, and suddenly, when you start contributing to it, it breaks backward compat? I'm having a hard time understanding the need to break something which worked perfectly for so long. I'd appreciate more details.

But for mock, that's another story. I'm not the maintainer, and the one who is decided it was a good moment to upload to Sid. The result is 9 FTBFS (failures to build from source) so far, because mock >= 1.1 is incompatible with Kilo (but does work well with Liberty, which *requires* it). I am currently unsure why the maintainer of mock uploaded to Sid and not to experimental. What needs to be considered is that mock is used not only by OpenStack.
Here's the result of an apt-rdepends -r python-mock in Sid, today:

Reverse Depends: atheist (0.20110402-2.1)
Reverse Depends: python-cookiecutter (1.0.0-2)
Reverse Depends: python-jsonschema (2.4.0-1)
Reverse Depends: python-lamson (1.0pre11-1.1)
Reverse Depends: python-matplotlib (1.4.2-3.1)
Reverse Depends: python-mockldap (0.2.5-1)
Reverse Depends: python-model-mommy (1.2-1)
Reverse Depends: python-oslo.versionedobjects (0.1.1-2)
Reverse Depends: python-oslotest (>= 1.5.1-1)
Reverse Depends: python-responses (0.3.0-1)
Reverse Depends: python-softlayer (4.0.4-1)
Reverse Depends: python-vcr (1.6.1-1)

Clearly, we're not alone in using mock. And we should always consider that we aren't alone. So the usual "yeah, but we have pinned the versions, so it's Debian's fault to have uploaded version 1.3 in Sid" would be very naive in this case, and absolutely not valid. This is an ok-ish answer for OpenStack-only components like Oslo libraries. And even so, I'm convinced that we shouldn't break APIs there either.

So the issue here, really, is backward and forward compatibility breakage in mock. Robert, you're a DD and you've been working for Canonical, so you must know about these things. You just need to care more about this type of thing. In the Linux kernel development space, they *never* break userland, as a rule. Why are Python developers allowing themselves to do so? Worst case, if we really want to break things: aren't there ways to keep the old API and write a new one, let everyone migrate, then eventually deprecate the old one?

Anyway, the result is that mock 1.3 broke at least 9 packages in Kilo, currently in Sid [1]. Maybe, as packages get rebuilt, I'll get more bug reports. This really is a depressing situation. Now, as the package maintainer for the failed packages, I have 4 solutions:

1/ Reassign these bugs to python-mock.
2/ Remove all of the unit tests which are currently failing because of the new python-mock version.
This isn't great, but as I already ran these tests with mock 1.0.1, it should be ok.
3/ Completely remove unit tests for these Kilo packages (or at least allow them to fail).
4/ See what's been done in Liberty to fix these tests with the newer version of mock, and backport that to Kilo.

In the case of 1/, I don't think the python-mock package maintainer will be able to do anything about it, and eventually, python-mock will get AUTORM'd from Debian testing, which doesn't help me at all. Unfortunately, 4/ isn't practical, because I'm also maintaining backports for Jessie, which means I'd have to write fixes so that the packages would work with both mock 1.0.1 and 1.3, plus it would take a very large amount of my time in a non-useful way (I know the packages work, as they passed unit tests with 1.0.1, so just fixing the tests is useless). So I'm left with either option 2/ or 3/. But really, I'd have preferred it if mock didn't break things... :/

Now, the most annoying one is with testtools (i.e. #796542). I'd appreciate having help on that one. I hope the message is heard and that it won't happen again. Cheers, Thomas Goirand (zigo)

[1] https://bugs.debian.org/795128 [src:python-barbicanclient] python-barbicanclient: FTBFS: test_delete_checks_status_code: AttributeError: assert_called
https://bugs.debian.org/795587 [src:python-heatclient] python-heatclient: FTBFS: AttributeError: assert_called_once
https://bugs.debian.org/795588 [src:python-glance-store]
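For readers wondering what these FTBFS errors ("AttributeError: assert_called") actually look like: the behaviour change is that, since the mock 1.1/1.3 line, attribute names starting with "assert" raise AttributeError instead of silently returning a child Mock, so misspelled assertion methods fail loudly instead of silently passing. A small demonstration, using Python 3.5+'s unittest.mock (which ships the same guard) since the standalone mock package may not be installed:

```python
from unittest import mock

m = mock.Mock()
m.some_method()                      # ordinary attributes are auto-created
m.some_method.assert_called_with()   # a real assertion method: passes

try:
    # Typo ("assert_caled_with"): old mock 1.0.x returned a no-op child
    # Mock, so the "assertion" silently passed; new mock raises at
    # attribute-access time, which is what breaks these Kilo test suites.
    m.some_method.assert_caled_with()
except AttributeError as exc:
    print("caught:", exc.__class__.__name__)  # prints "caught: AttributeError"
```

Fixing such a test under the new mock means correcting the assertion method name, which is essentially what option 4/ (backporting the Liberty fixes) amounts to.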
Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account
Hi Ramy: Now, if all is fine, can you help me re-enable my CI Gerrit account?

From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Wednesday, August 26, 2015 12:43 AM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Looks good to me. Thanks! Ramy

From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 9:07 PM To: Asselin, Ramy ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Ramy: I already fixed this important problem. Thanks. Does our CI system have any missing configuration or other problems? Console log: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html CI review result: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/logs/ Many thanks.

From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 10:54 AM To: Rick Chen rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Other than that, everything looks fine to me. But that is important to fix. Thanks, Ramy

From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 6:46 PM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Ramy: We use the apache proxy pass option to redirect the public link to my internal CI server.
Maybe I missed some configuration? I will try to find a solution for it. But it should not affect my OpenStack third-party CI system. Is our CI system ready to have its account re-enabled?

From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Tuesday, August 25, 2015 9:14 AM To: Rick Chen rick.c...@prophetstor.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Rick, It's strange: I can navigate using the link you provided, but not via the parent Directory link. This is what it links to, which is missing the prophetstor_ci portion: http://download.prophetstor.com/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/ Ramy

From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 5:59 PM To: Asselin, Ramy ramy.asse...@hp.com; 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Ramy: My console file is console.html, as below: http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/

From: Asselin, Ramy [mailto:ramy.asse...@hp.com] Sent: Monday, August 24, 2015 11:03 PM To: 'OpenStack Development Mailing List (not for usage questions)' openstack-dev@lists.openstack.org Cc: Rick Chen rick.c...@prophetstor.com Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Great. Somehow you lost your console.log file. Or did I miss it?
Ramy

From: Rick Chen [mailto:rick.c...@prophetstor.com] Sent: Monday, August 24, 2015 2:00 AM To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

Hi Ramy: I have completed changing the zuul.conf zuul_url to point to my zuul server, zuul.rjenkins.prophetstor.com.

2015-08-24 16:21:48.349 | + git_fetch_at_ref openstack/cinder refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b
2015-08-24 16:21:48.350 | + local project=openstack/cinder
2015-08-24 16:21:48.351 | + local ref=refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b
2015-08-24 16:21:48.352 | + '[' refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b '!=' '' ']'
2015-08-24 16:21:48.353 | + git fetch http://zuul.rjenkins.prophetstor.com/p/openstack/cinder refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b
2015-08-24 16:21:49.264 | From
Re: [openstack-dev] [api] [docs] Generating API samples
Hi Anne,

2015-08-25 0:51 GMT+09:00 Anne Gentle annegen...@justwriteclick.com: Hi all, I'm writing to find out how teams keep API sample requests and responses up-to-date. I know the nova team has a sample generator [1] that they've maintained for a few years now. Do other teams have something similar? If so, is your approach like the nova one?

We had a weekly IRC meeting of Nova API yesterday (today for some guys), and we discussed how to generate/maintain API docs in the long term. After the discussion, I have an idea: how about generating API sample files from Tempest log files? Tempest now writes most of the necessary parts of API docs (URL, headers, request body, response body, HTTP status code) to its own log file, like: http://logs.openstack.org/88/207688/3/check/gate-tempest-dsvm-full/d7a79d1/logs/tempest.txt.gz#_2015-08-10_13_20_36_982

2015-08-10 13:20:36.982 [..] 202 POST http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers 0.920s
2015-08-10 13:20:36.983 [..] Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'}
Body: {"server": {"name": "tempest.common.compute-instance-607936499", "networks": [{"uuid": "e63068c6-99d5-41f5-804d-ccb812bfeb51"}], "imageRef": "d4159c59-cbfb-43f1-94de-3552d1f2871e", "flavorRef": "42"}}
Response - Headers: {'location': 'http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c', 'content-type': 'application/json', 'date': 'Mon, 10 Aug 2015 13:20:36 GMT', 'x-compute-request-id': 'req-0fa22034-c1d5-41b2-bfb9-6de533733290', 'connection': 'close', 'status': '202', 'content-length': '434'}
Body: {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "19f98a6f-26d2-4491-93a8-8e894f19034c", "links": [{"href": "http://127.0.0.1:8774/v2/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c", "rel": "self"}, {"href": "http://127.0.0.1:8774/c2ab3e6ac69e43bb925a4895075e47d7/servers/19f98a6f-26d2-4491-93a8-8e894f19034c", "rel": "bookmark"}], "adminPass": "2iEDo2EP5wRM"}} _log_request_full /opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:411

I feel it would be difficult to implement Nova's sample-test approach in each project. The above Tempest log is written on the tempest-lib side, which is common between projects, so I imagine we could use this as a common/consistent way for all projects. I will write up the details of this idea later.

Thanks
Ken Ohmichi
---
1. https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/api_sample_base.py

-- Anne Gentle Rackspace Principal Engineer www.justwriteclick.com
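Ken's idea, recovering API samples from tempest-lib request logs, could be prototyped with a small parser. Everything below is a hypothetical sketch based only on the log excerpt quoted above, not on any real tempest tooling; the entry format is an assumption.

```python
import json
import re

# Hypothetical parser for the tempest-lib log format quoted above.
ENTRY_RE = re.compile(r"(?P<status>\d{3}) (?P<method>[A-Z]+) (?P<url>http\S+)")

def extract_sample(log_text):
    """Return status/method/URL plus any JSON bodies found in one log entry."""
    head = ENTRY_RE.search(log_text)
    if head is None:
        return None
    sample = head.groupdict()
    decoder = json.JSONDecoder()
    bodies = []
    for marker in re.finditer(r"Body: ", log_text):
        try:
            # raw_decode parses one JSON value starting at the given index
            # and ignores whatever trails it (headers, the next entry, ...).
            obj, _ = decoder.raw_decode(log_text, marker.end())
        except ValueError:
            continue  # e.g. "Body: None" for bodiless requests
        bodies.append(obj)
    sample["bodies"] = bodies
    return sample

entry = ('202 POST http://127.0.0.1:8774/v2/c2ab3e6a/servers 0.920s '
         'Request - Body: {"server": {"name": "demo", "flavorRef": "42"}} '
         'Response - Body: {"server": {"id": "19f98a6f"}}')
print(json.dumps(extract_sample(entry), indent=2))
```

Dumping the returned dict as JSON gives something close to a hand-crafted API sample file, which is the common/consistent output the proposal is after.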
Re: [openstack-dev] [nova] contextlib.nested and Python3 failing
On Tue, 25 Aug 2015 12:23:28 -0700 Jay Pipes jaypi...@gmail.com wrote: On 08/25/2015 10:17 AM, Joshua Harlow wrote: Oh, discard everything I say then :) My brain must still be partially functioning due to vacation, haha.

import functools

def work(vacation=False):
    if not vacation:
        get_lots_done()

back_from_vacation = functools.partial(work, vacation=True)

There you go, Josh. There's your partial function. Best, -jay

I approve of this message, ha. :)
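On the actual thread subject, contextlib.nested() being removed in Python 3, the replacement pattern is contextlib.ExitStack; per the link dims posted earlier, nova's test.nested() helper is built on the same idea. A minimal sketch (the helper below is an illustration, not nova's exact code):

```python
import contextlib

@contextlib.contextmanager
def nested(*contexts):
    """Enter several context managers at once; exit them in reverse order."""
    with contextlib.ExitStack() as stack:
        yield [stack.enter_context(c) for c in contexts]

# Demo: trivial context managers that record enter/exit order.
events = []

@contextlib.contextmanager
def tracked(name):
    events.append(("enter", name))
    try:
        yield name
    finally:
        events.append(("exit", name))

with nested(tracked("a"), tracked("b")) as (a, b):
    assert (a, b) == ("a", "b")

# Contexts exit in reverse order, matching Python 2's contextlib.nested().
assert events == [("enter", "a"), ("enter", "b"), ("exit", "b"), ("exit", "a")]
```

Unlike the old contextlib.nested(), ExitStack also handles the case where an inner __enter__ raises: everything already entered is unwound cleanly, which was one of the reasons nested() was deprecated in the first place.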