Re: [openstack-dev] [Neutron][Climate] bp:configurable-ip-allocation
Mark, Thank you for the fast answer! We'll wait for it. On Thu, Aug 22, 2013 at 5:14 PM, Mark McClain mark.mccl...@dreamhost.com wrote: Nikolay- Expect updated code to be posted soon for Havana. mark On Aug 22, 2013, at 12:47 AM, Nikolay Starodubtsev nstarodubt...@mirantis.com wrote: Hi, everyone! We are working on Climate, and we are interested in https://blueprints.launchpad.net/neutron/+spec/configurable-ip-allocation I see two changes connected with this bp, but they both were abandoned at the beginning of the year. Can anyone give me an answer about the implementation progress? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [keystone] Two BPs for managing the tokens
Hi, Having talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage keystone tokens: 1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token which is used to delete expired tokens 2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid tokens These two BPs will help us to reduce the token records in the token table enormously. I have put some ideas in the BP descriptions. Any comments are welcome. Regards, Yong Sheng Gong ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [IMPORTANT] The Gate around Feature Freeze
Dolph Mathews dolph.math...@gmail.com writes: pretty excited! As always, if you'd like to pitch in, stop by #openstack-infra on Freenode and see what we're up to. Wow, nice work! Thank you, infra! Agreed, thanks for the good work, infra. Chmouel. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat] Propose Liang Chen for heat-core
On 08/22/2013 11:57 AM, Steven Hardy wrote: Hi, I'd like to propose that we add Liang Chen to the heat-core team[1] Liang has been doing some great work recently, consistently providing good review feedback[2][3], and also sending us some nice patches[4][5], implementing several features and fixes for Havana. Please respond with +1/-1. +1! Thanks! [1] https://wiki.openstack.org/wiki/Heat/CoreTeam [2] http://russellbryant.net/openstack-stats/heat-reviewers-90.txt [3] https://review.openstack.org/#/q/reviewer:cbjc...@linux.vnet.ibm.com,n,z [4] https://github.com/openstack/heat/graphs/contributors?from=2013-04-18to=2013-08-18type=c [5] https://review.openstack.org/#/q/owner:cbjc...@linux.vnet.ibm.com,n,z ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August
It works for me and I'll have some research done on the bullet points by then. thanks, -Nikhil -Original Message- From: Brian Rosmaita brian.rosma...@rackspace.com Sent: Friday, August 23, 2013 9:33am To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August Would 15:00 UTC Tuesday 27 August work for all interested parties? From: Joshua Harlow [harlo...@yahoo-inc.com] Sent: Thursday, August 22, 2013 9:13 PM To: OpenStack Development Mailing List Cc: OpenStack Development Mailing List Subject: Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August Maybe let's try Tuesday? Sent from my really tiny device... On Aug 22, 2013, at 3:38 PM, Mark Washenberger mark.washenber...@markwash.net wrote: Eek, I might not actually be able to make this time on Monday. I'm free from 9:30 Pacific Monday and on. Would Tuesday work? Or tomorrow? On Thu, Aug 22, 2013 at 2:33 PM, Brian Rosmaita brian.rosma...@rackspace.com wrote: We're shooting for 15:00 UTC Monday 26 August. (Let me know ASAP if you really want to be in on this and can't make it, but this is probably the best time available.) Topics: (1) taskflow seam for integration (2) tasks api and executor interface (3) indexable column in db for tasks to be queryable See http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-08-22-20.02.log.html for brief discussion of these topics during today's glance meeting.
More info: https://etherpad.openstack.org/havana-glance-requirements https://etherpad.openstack.org/LG39UnQA7z ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat] Propose Liang Chen for heat-core
Excerpts from Steven Hardy's message of 2013-08-22 08:57:31 -0700: Hi, I'd like to propose that we add Liang Chen to the heat-core team[1] Liang has been doing some great work recently, consistently providing good review feedback[2][3], and also sending us some nice patches[4][5], implementing several features and fixes for Havana. Please respond with +1/-1. +1 and huzzah! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
Hello, I would think you would want to reuse the same token but update the expiration time as if it were the first time the token had been generated. Mark From: Yongsheng Gong [mailto:gong...@unitedstack.com] Sent: Friday, August 23, 2013 12:40 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [keystone] Two BPs for managing the tokens Hi, Talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage the keystone tokens: 1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token which is used to delete expired token 2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid token These two BPs will help us to reduce the token records in token table enormously. I have put some ideas on the BP description. Any comments are welcome. Regards, Yong Sheng Gong ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
On Fri, Aug 23, 2013 at 10:51 AM, Miller, Mark M (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com wrote: Hello, I would think you would want to reuse the same token but update the expiration time as if it were the first time the token had been generated. That wouldn't work for PKI tokens, as the resulting signature would have to change. Mark From: Yongsheng Gong [mailto:gong...@unitedstack.com] Sent: Friday, August 23, 2013 12:40 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [keystone] Two BPs for managing the tokens Hi, Talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage the keystone tokens: 1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token which is used to delete expired tokens 2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid tokens These two BPs will help us to reduce the token records in the token table enormously. I have put some ideas on the BP description. Any comments are welcome. Regards, Yong Sheng Gong -- -Dolph ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
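Dolph's objection can be illustrated with a toy signing function (a sketch only; real Keystone PKI tokens are CMS-signed documents, and the payload fields and key below are made up for illustration):

```python
import hashlib
import json

def sign(payload, key=b"signing-key"):
    # stand-in for CMS signing: the signature covers the whole payload,
    # so any change to the payload yields a different signature
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(key + blob).hexdigest()

token = {"user": "demo", "expires": "2013-08-23T12:00:00Z"}
original_signature = sign(token)

# bumping the expiration time invalidates the previously issued signature,
# so the old signed token cannot simply be handed back out as-is
token["expires"] = "2013-08-23T13:00:00Z"
assert sign(token) != original_signature
```

For UUID tokens the expiry lives only in the database, so extending it is cheap; for PKI tokens the expiry is baked into the signed blob, which is the distinction being made here.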
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
On Aug 23, 2013 12:24 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Fri, Aug 23, 2013 at 10:51 AM, Miller, Mark M (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com wrote: Hello, I would think you would want to reuse the same token but update the expiration time as if it were the first time the token had been generated. That wouldn't work for PKI tokens, as the resulting signature would have to change. Mark From: Yongsheng Gong [mailto:gong...@unitedstack.com] Sent: Friday, August 23, 2013 12:40 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [keystone] Two BPs for managing the tokens Hi, Talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage the keystone tokens: 1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token which is used to delete expired tokens 2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid tokens These two BPs will help us to reduce the token records in the token table enormously. I have put some ideas on the BP description. Any comments are welcome. What about Adam Young's vision for keystone, which I like: http://adam.younglogic.com/2013/07/a-vision-for-keystone/ These two blueprints don't appear to be in line with it. Also, instead of making keystone reuse tokens, why not make the token reuse in the clients better (keyring-based)? Last I checked it was disabled and broken in nova (there was a patch to fix it, but keep it disabled). Regards, Yong Sheng Gong -- -Dolph ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August
I'm fine with either time. So maybe 1600 for all? From: Jessica Lucci jessica.lu...@rackspace.com Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: Friday, August 23, 2013 8:11 AM To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August I'm actually busy at that time…any way we could move it to 16:00 UTC on that same Tuesday? On Aug 23, 2013, at 8:33 AM, Brian Rosmaita brian.rosma...@rackspace.com wrote: Would 15:00 UTC Tuesday 27 August work for all interested parties? From: Joshua Harlow [harlo...@yahoo-inc.com] Sent: Thursday, August 22, 2013 9:13 PM To: OpenStack Development Mailing List Cc: OpenStack Development Mailing List Subject: Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August Maybe let's try Tuesday? Sent from my really tiny device... On Aug 22, 2013, at 3:38 PM, Mark Washenberger mark.washenber...@markwash.net wrote: Eek, I might not actually be able to make this time on Monday. I'm free from 9:30 Pacific Monday and on. Would Tuesday work? Or tomorrow? On Thu, Aug 22, 2013 at 2:33 PM, Brian Rosmaita brian.rosma...@rackspace.com wrote: We're shooting for 15:00 UTC Monday 26 August. (Let me know ASAP if you really want to be in on this and can't make it, but this is probably the best time available.) Topics: (1) taskflow seam for integration (2) tasks api and executor interface (3) indexable column in db for tasks to be queryable See http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-08-22-20.02.log.html for brief discussion of these topics during today's glance meeting.
More info: https://etherpad.openstack.org/havana-glance-requirements https://etherpad.openstack.org/LG39UnQA7z ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack][keystone][ldap] Bug unsupported operand type(s) for : 'str' and 'int'
This might be the same as this bug: https://bugs.launchpad.net/keystone/+bug/1210175 which was only fixed in master. - Brant On Thu, Aug 22, 2013 at 1:55 AM, Qinglong.Meng mengql112...@gmail.com wrote: Hi All, OS: Ubuntu 12.04 LTS keystone version: stable/grizzly After I deployed keystone with an LDAP backend (OpenLDAP), I got this type error in keystone.log when I created a user via keystoneclient. Here are some docs: 1. keystone.log http://paste.openstack.org/ 2. keystone.conf http://paste.openstack.org/show/44839/ 3. slapd.conf http://paste.openstack.org/show/44843/ 4. openstack.ldif http://paste.openstack.org/show/44844/ I think this is a bug, is that right? Best Regards, -- Lawrency Meng mail: mengql112...@gmail.com ___ Mailing list: https://launchpad.net/~openstack Post to : openst...@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
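For reference, the error in the subject line is what Python raises when an LDAP attribute value (which the backend returns as a string) is used directly in bitwise or arithmetic math. The attribute value and mask below are hypothetical, not Keystone's actual configuration:

```python
# LDAP backends hand attribute values back as strings
raw_enabled = "512"   # hypothetical raw attribute value
mask = 2              # hypothetical "account disabled" bit

try:
    raw_enabled & mask                      # the failing pattern
except TypeError as exc:
    # this is the class of error from the subject line
    assert "unsupported operand type" in str(exc)

enabled = (int(raw_enabled) & mask) == 0    # fix: cast before the bitwise math
assert enabled is True
```

The fix referenced in bug 1210175 amounts to casting the string before applying the enabled-mask logic, which is why upgrading to a release containing that patch resolves the traceback.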
[openstack-dev] [Glance] import task meeting moved to 15:00 UTC Tues 27 August
We'll now be holding the below meeting at 15:00 UTC Tuesday 27 August in #openstack-glance on Freenode. From: Brian Rosmaita [brian.rosma...@rackspace.com] Sent: Thursday, August 22, 2013 5:33 PM To: Joshua Harlow; OpenStack Development Mailing List Subject: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August We're shooting for 15:00 UTC Monday 26 August. (Let me know ASAP if you really want to be in on this and can't make it, but this is probably the best time available.) Topics: (1) taskflow seam for integration (2) tasks api and executor interface (3) indexable column in db for tasks to be queryable See http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-08-22-20.02.log.html for brief discussion of these topics during today's glance meeting. More info: https://etherpad.openstack.org/havana-glance-requirements https://etherpad.openstack.org/LG39UnQA7z ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
On 08/23/2013 12:43 PM, Joe Gordon wrote: On Aug 23, 2013 12:24 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Fri, Aug 23, 2013 at 10:51 AM, Miller, Mark M (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com wrote: Hello, I would think you would want to reuse the same token but update the expiration time as if it were the first time the token had been generated. That wouldn't work for PKI tokens, as the resulting signature would have to change. Mark From: Yongsheng Gong [mailto:gong...@unitedstack.com] Sent: Friday, August 23, 2013 12:40 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [keystone] Two BPs for managing the tokens Hi, Talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage the keystone tokens: 1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token Not sure that this is worth writing or maintaining. The system services for cron are much more robust, and we don't have to maintain them. I do have this review for your consideration, though: https://review.openstack.org/#/c/43510/ In conjunction with the caching layer, it might be the right approach: flush the old tokens upon revocation list regeneration. which is used to delete expired tokens 2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid tokens These two BPs will help us to reduce the token records in the token table enormously. I have put some ideas on the BP description. Any comments are welcome. What about Adam Young's vision for keystone, which I like: http://adam.younglogic.com/2013/07/a-vision-for-keystone/ These two blueprints don't appear to be in line with it. Also, instead of making keystone reuse tokens, why not make the token reuse in the clients better (keyring-based)?
Last I checked it was disabled and broken in nova (there was a patch to fix it, but keep it disabled). Regards, Yong Sheng Gong -- -Dolph ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
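Whether it runs from cron or from a keystone-managed job, the periodic flush the blueprint proposes boils down to dropping every token whose expiry is in the past. A minimal sketch (real Keystone keeps tokens in a SQL table and would issue a DELETE; the in-memory list here is illustrative only):

```python
import datetime

def flush_expired(tokens, now):
    """Return only the tokens that are still valid at `now`; everything
    else is what the periodic flush would delete from the token table."""
    return [t for t in tokens if t["expires"] > now]

now = datetime.datetime(2013, 8, 23, 12, 0)
tokens = [
    {"id": "a", "expires": datetime.datetime(2013, 8, 23, 11, 0)},  # expired
    {"id": "b", "expires": datetime.datetime(2013, 8, 23, 13, 0)},  # still valid
]
assert [t["id"] for t in flush_expired(tokens, now)] == ["b"]
```

The equivalent SQL would be a single `DELETE FROM token WHERE expires < NOW()`, which is why Adam argues a plain cron entry is robust enough without new keystone code.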
Re: [openstack-dev] [Glance] import task meeting 15:00 UTC Monday 26 August
On 08/23/2013 07:15 AM, Joshua Harlow wrote: I'm fine with either times. So maybe 1600 for all? That time works for me. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [IMPORTANT] The Gate around Feature Freeze
On 08/22/2013 10:37 PM, Dolph Mathews wrote: On Thu, Aug 22, 2013 at 7:48 PM, James E. Blair jebl...@openstack.org wrote: Monty Taylor mord...@inaugust.com writes: The infra team has done a lot of work in prep for our favorite time of year, and we've actually landed several upgrades to the gate without which we'd be in particularly bad shape right now. (I'll let Jim write about some of them later when he's not battling the current operational issues - they're pretty spectacular) As with many scaling issues, some of these upgrades have resulted in moving the point of pain further along the stack. We're working on solutions to the current pain points. (Or, I should say they are, because I'm on a plane headed to Burning Man and not useful for much other than writing emails.) Hi! The good news is that a lot of the operational problems over the past few days have been corrected, we are now pretty close to the noise floor of infrastructure issues in the gate, and over the next few days we'll work to get rid of the remaining bugs. As I'm sure everyone knows, we've seen huge growth in the project, the number of changes, and the number of tests we run. That is both wonderful and a little terrifying! But we haven't been idle: we have made some significant improvements and innovations to the project infrastructure to deal with our growing load, especially during these peak times. About a year ago, we realized that the growing number of jobs run (and number of test machines on which we run those jobs) was going to cause scaling issues with Jenkins. So with the help of Khai Do, we created the gearman-plugin[1] for Jenkins, and then we modified Zuul to use it. That means that Zuul isn't directly tied to Jenkins anymore, and can distribute the jobs it needs to run to anything that can run them via Gearman.
A few weeks ago we took advantage of that by adding two new Jenkins masters to our system, giving us one of the first (if not the first) multi-master Jenkins systems. Since then, all of the test jobs have been run on nodes attached to either jenkins01.openstack.org or jenkins02.openstack.org (which you may have seen linked from the Zuul status page). That has given us the ability to upgrade Jenkins and its plugins with no interruption due to the active-active nature of the system. And we can add hundreds of test nodes to each of these systems and continue to scale them horizontally as our load increases. With Jenkins now able to scale, the next bottleneck was the number of test nodes. Until recently, we had a handful of special Jenkins jobs which would launch and destroy the single-use nodes that are used for devstack tests. We were seeing issues with Jenkins running those jobs, as well as their ability to keep up with demand. So we started the Nodepool project[2] to create a daemon that could keep up with the demand for test nodes, be much more responsive, and eliminate some of the occasional errors that we would see in the old Rube Goldberg system we had for managing nodes. In anticipation of the rush of patches for the feature freeze, we rolled that out over the weekend so it was ready to go Monday. And it worked! In fact, it's extremely responsive. It immediately utilized our entire capacity to supply test nodes. Which was great, except that a lot of our tests are configured to use the git repos from Gerrit, which is why Gerrit was very slow early in the week. Fortunately, Elizabeth Krumbach Joseph has been working on setting up a new Git server. That alone is pretty exciting, and she's going to send an announcement about it soon. Since it was ready to go, we moved the test load from Gerrit to the new git server, which has made Gerrit much more responsive again.
Unfortunately, the new git server still wasn't quite able to keep up with the test load, so Clark Boylan, Elizabeth and I have spent some time tuning it as well as load-balancing it across several hosts. That is now in place, and the new system seems able to cope with the load from the current rush of patches. We're still seeing an occasional issue where a job is reported as LOST because Jenkins is apparently unaware that it can't talk to the test node. We have some workarounds in progress that we hope to have in place soon. Our goal is to have the most robust and accurate test system possible, that can run all of the tests we can think to throw at it. I think the improvements we've made recently are going to help
[openstack-dev] What's Up Doc? Aug 23 2013
I'm happy to report the latest in doc-land. As always, each Monday you can come to our open office hours at 9:00 Pacific. This week we're changing the doc team meeting frequency to every other week, Tuesdays at 1300 UTC in #openstack-meeting. I don't know about your house but at my house we're happy that school is starting next week! Bring on the books. 1. In review and merged this past week: We fixed and released 25 bugs last week, 17 for openstack-manuals and 8 for api-site. The OpenStack Block Storage Admin Guide now contains only admin info; install and config info have been moved into those guides. We fixed a bug where the /trunk/ version of this guide was outputting incorrectly. Shawn Hartsock got his first doc patch through for the VMware docs, thanks Shawn! 2. High priority doc work: Our reorg is in full force. This week I met with four core teams to ensure the OpenStack docs team can move the per-project Admin Guides into broader OpenStack guides. We still have over 200 bugs for Havana. With a shift to focus on docs and bug fixing we sure hope to see lots of patches in the coming weeks. 3. Doc work going on that I know of: Getting a lab set up for Shaun, our Cisco contractor, to test install instructions for OpenStack. Example architectures are here if you're interested: etherpad.openstack.org/havanainstall. He has purchased laptops to serve as his lab with USB/Ethernet NICs. We're piecing out the Object Storage Admin Guide to other books in openstack-manuals and plan to stop creating that book separately. OpenSUSE install instructions are in the repository for the basic install guide. I have to give a shout out to Andreas Jaeger and Christian Berendt for all the patches and reviews lately - thanks a bunch! OpenSUSE in the house! 4. New incoming doc requests: On the openstack-docs list, we are still designing two admin-level guides.
They are: OpenStack Administration Guide – Covers system administration tasks like maintaining, monitoring, and customizing an initially installed system. Similar to the Operations Guide possibly. User Guide for Administrators - Covers user tasks performed through the dashboard and CLIs for admin users. Bring any and all ideas to openstack-docs to join in the discussion. 5. Doc tools updates: The 1.9.2 release of the clouddocs-maven-plugin supports CALS tables markup. We're looking into a 1.9.3 release to add support for using Windows for building docs locally and to fix a wrapping problem for the Orchestration API. So we should probably wait for the 1.9.3 release before changing all the pom.xml files for the havana release. 6. Other doc news: We documented our review policies here: https://wiki.openstack.org/wiki/Documentation/HowTo#Who_can_Review_Documentation.3F to ensure clarity for new contributors and current docs-core members. We have over 25 people coming to the Docs Boot Camp and our host Mirantis will videotape the sessions. The OpenStack Foundation is supporting us greatly, as is Rackspace. Thanks supporters -- we're all looking forward to it! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Infra] We've launched git.openstack.org
On Sat, Aug 24, 2013 at 4:20 AM, Elizabeth Krumbach Joseph l...@princessleia.com wrote: Hi everyone, Several months ago the OpenStack Infrastructure team filed a bug to Create a git.openstack.org mirror system[0] Does git.openstack.org replace GitHub for developers cloning repositories? Or is it just a mirror of the content at GitHub? Thanks, Michael ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Infra] We've launched git.openstack.org
Michael Still mi...@stillhq.com writes: On Sat, Aug 24, 2013 at 4:20 AM, Elizabeth Krumbach Joseph l...@princessleia.com wrote: Hi everyone, Several months ago the OpenStack Infrastructure team filed a bug to Create a git.openstack.org mirror system[0] Does git.openstack.org replace GitHub for developers cloning repositories? Or is it just a mirror of the content at GitHub? You can use either one. We will continue to mirror all of the repositories to GitHub as well as git.openstack.org. Since the use of GitHub is not required for OpenStack development, we now have the option to have instructions for new developers that don't have a potentially confusing reference to GitHub (which sometimes causes people to wonder why they can't submit a pull request there). For that reason, we just changed a bunch of references in the infra program documentation to point to the new git server. The GitHub repos are useful for a lot of developers and we don't have any plans to stop mirroring there. -Jim [1] http://ci.openstack.org/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Infra] We've launched git.openstack.org
Does this change affect the gerrit remote in any way? --John On Aug 23, 2013, at 2:48 PM, jebl...@openstack.org (James E. Blair) wrote: Michael Still mi...@stillhq.com writes: On Sat, Aug 24, 2013 at 4:20 AM, Elizabeth Krumbach Joseph l...@princessleia.com wrote: Hi everyone, Several months ago the OpenStack Infrastructure team filed a bug to Create a git.openstack.org mirror system[0] Does git.openstack.org replace GitHub for developers cloning repositories? Or is it just a mirror of the content at GitHub? You can use either one. We will continue to mirror all of the repositories to GitHub as well as git.openstack.org. Since the use of GitHub is not required for OpenStack development, we now have the option to have instructions for new developers that don't have a potentially confusing reference to GitHub (which sometimes causes people to wonder why they can't submit a pull request there). For that reason, we just changed a bunch of references in the infra program documentation to point to the new git server. The GitHub repos are useful for a lot of developers and we don't have any plans to stop mirroring there. -Jim [1] http://ci.openstack.org/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Infra] We've launched git.openstack.org
On Fri, Aug 23, 2013 at 2:56 PM, John Dickinson m...@not.mn wrote: Does this change affect the gerrit remote in any way? Nope! The gerrit remote remains review.openstack.org (of course your origin may change if you decide to s/github/git.openstack.org) -- Elizabeth Krumbach Joseph || Lyz || pleia2 http://www.princessleia.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
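Since both servers mirror the same repositories, switching an existing clone is just rewriting the origin URL and running `git remote set-url origin <new-url>`; the gerrit remote at review.openstack.org is untouched. A sketch of the rewrite (the path layout is assumed to mirror GitHub's org/repo naming, per the thread):

```python
def mirror_url(github_url):
    # rewrite a GitHub clone URL to its assumed git.openstack.org equivalent;
    # the gerrit remote (review.openstack.org) is unaffected by this change
    prefix = "https://github.com/"
    if not github_url.startswith(prefix):
        raise ValueError("not a GitHub URL: %s" % github_url)
    return "https://git.openstack.org/" + github_url[len(prefix):]

assert mirror_url("https://github.com/openstack/nova") == \
    "https://git.openstack.org/openstack/nova"
```

In practice that means something like `git remote set-url origin https://git.openstack.org/openstack/nova` in an existing nova clone, while `git remote -v` should still show gerrit pointing at review.openstack.org.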
[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-08-23
Greetings stackers! It's freeze-2 time and that means things get hot! We've had quite a bit going on in the VMware sub-team, and the proof is in the number of patches that are *ready* for core reviewers. We have 9 that are ready for core reviewers to take a look at.

Needs one more core review/approval:
* NEW, https://review.openstack.org/#/c/39046/ , 'VMware: fix rescue/unrescue instance'
  https://bugs.launchpad.net/nova/+bug/1199955
  core votes: 1, non-core votes: 2, down votes: 0
* NEW, https://review.openstack.org/#/c/39720/ , 'VMware: Added check for datastore state before selection'
  https://bugs.launchpad.net/nova/+bug/1194078
  core votes: 1, non-core votes: 6, down votes: 0
* NEW, https://review.openstack.org/#/c/33504/ , 'VMware: nova-compute crashes if VC not available'
  https://bugs.launchpad.net/nova/+bug/1192016
  core votes: 1, non-core votes: 8, down votes: 0

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/43268/ , 'VMware: enable VNC access without user having to enter password'
  https://bugs.launchpad.net/nova/+bug/1215352
  core votes: 0, non-core votes: 4, down votes: 0
* NEW, https://review.openstack.org/#/c/37819/ , 'VMware image clone strategy settings and overrides'
  https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
  core votes: 0, non-core votes: 4, down votes: 0
* NEW, https://review.openstack.org/#/c/33100/ , 'Fixes host stats for VMWareVCDriver'
  https://bugs.launchpad.net/nova/+bug/1190515
  core votes: 0, non-core votes: 6, down votes: 0
* NEW, https://review.openstack.org/#/c/30628/ , 'Fix VCDriver to pick the datastore that has capacity'
  https://bugs.launchpad.net/nova/+bug/1171930
  core votes: 0, non-core votes: 8, down votes: 0
* NEW, https://review.openstack.org/#/c/40029/ , 'VMware: Config Drive Support'
  https://bugs.launchpad.net/nova/+bug/1206584
  core votes: 0, non-core votes: 5, down votes: 0
* NEW, https://review.openstack.org/#/c/40298/ , 'Fix snapshot in VMwareVCDriver'
  https://bugs.launchpad.net/nova/+bug/1184807
  core votes: 0, non-core votes: 7, down votes: 0

== still being reviewed ==

All our blueprints are up and in the review cycle, so we made good on our August 22nd deadline. Now all we need to do is get these patches into good enough shape that we can comfortably put them forward. If folks are getting stuck, feel free to drag me into it and I'll try to get the parties onto the same page. The order of the day is *reviews* *reviews* *reviews*. Go team!

Needs VMware API expert review:
* NEW, https://review.openstack.org/#/c/40105/ , 'VMware: use VM uuid for volume attach and detach'
  https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
  core votes: 0, non-core votes: 3, down votes: 0
* NEW, https://review.openstack.org/#/c/41387/ , 'VMware: Nova boot from cinder volume'
  https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
  core votes: 0, non-core votes: 3, down votes: 0
* NEW, https://review.openstack.org/#/c/40245/ , 'Nova support for vmware cinder driver'
  https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
  core votes: 0, non-core votes: 2, down votes: 0

Needs discussion/work (has -1):
* NEW, https://review.openstack.org/#/c/43266/ , 'vmware driver selection of vm_folder_ref.'
  https://bugs.launchpad.net/nova/+bug/1214850
  core votes: 0, non-core votes: 1, down votes: -1
* NEW, https://review.openstack.org/#/c/37659/ , 'Enhance VMware instance disk usage.'
  https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
  core votes: 0, non-core votes: 1, down votes: -1
* NEW, https://review.openstack.org/#/c/41657/ , 'Fix VMwareVCDriver to support multi-datastore'
  https://bugs.launchpad.net/nova/+bug/1104994
  core votes: 0, non-core votes: 1, down votes: -1
* NEW, https://review.openstack.org/#/c/30282/ , 'Multiple Clusters using single compute service'
  https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
  core votes: 0, non-core votes: 2, down votes: -2
* NEW, https://review.openstack.org/#/c/34903/ , 'Deploy vCenter templates'
  https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
  core votes: 0, non-core votes: 2, down votes: -2
* NEW, https://review.openstack.org/#/c/42024/ , 'VMWare: Disabling linked clone doesn't cache images'
  https://bugs.launchpad.net/nova/+bug/1207064
  core votes: 0, non-core votes: 1, down votes: -3
* NEW, https://review.openstack.org/#/c/42619/ , 'fix broken WSDL logic'
  https://bugs.launchpad.net/nova/+bug/1171215
  core votes: 0, non-core votes: 0, down votes: -1

Work in progress:
* WORKINPROGRESS, https://review.openstack.org/#/c/43270/ , 'vmware driver selection of vm_folder_ref.'
  https://bugs.launchpad.net/nova/+bug/1214850
  core votes: 0, non-core votes: 1, down votes: 0

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
* If anything is missing, add
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
Hi Adam,

Can you explain more about 'In conjunction with the caching layer, it might be the right approach: flush the old tokens upon revocation list regeneration'? When is list_revoked_tokens called?

Thanks

On Sat, Aug 24, 2013 at 1:51 AM, Adam Young ayo...@redhat.com wrote:

On 08/23/2013 12:43 PM, Joe Gordon wrote:

On Aug 23, 2013 12:24 PM, Dolph Mathews dolph.math...@gmail.com wrote:

On Fri, Aug 23, 2013 at 10:51 AM, Miller, Mark M (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com wrote:

Hello,

I would think you would want to reuse the same token but update the expiration time as if it were the first time the token had been generated.

That wouldn't work for PKI tokens, as the resulting signature would have to change.

Mark

From: Yongsheng Gong [mailto:gong...@unitedstack.com]
Sent: Friday, August 23, 2013 12:40 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [keystone] Two BPs for managing the tokens

Hi,

Talked with Henry Nash and Jamie Lennox on IRC, I have created two BPs to manage the keystone tokens:

1. https://blueprints.launchpad.net/keystone/+spec/periodically-flush-expired-token

Not sure that this is worth writing or maintaining. The system services for cron are much more robust, and we don't have to maintain them. I do have this review for your consideration, though: https://review.openstack.org/#/c/43510/ In conjunction with the caching layer, it might be the right approach: flush the old tokens upon revocation list regeneration.

which is used to delete expired tokens

2. https://blueprints.launchpad.net/keystone/+spec/reuse-token which will re-use valid tokens

These two BPs will help us to reduce the token records in the token table enormously. I have put some ideas on the BP description. Any comments are welcome.

What about Adam Young's vision for keystone, which I like: http://adam.younglogic.com/2013/07/a-vision-for-keystone/ These two blueprints don't appear to be in line with it.

Also, instead of making keystone reuse tokens, why not make token reuse in the clients better (keyring based)? Last I checked it was disabled and broken in nova (there was a patch to fix it, but keep it disabled).

Regards,
Yong Sheng Gong

--
-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
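Joe's suggestion of keyring-based token reuse in the clients boils down to caching a scoped token locally and only re-authenticating once it expires. The sketch below is an illustration of that idea, not the actual novaclient code: a real client would back the store with the system keyring (e.g. the python-keyring package) rather than the in-memory dict used here to keep the example self-contained, and all names are made up.

```python
import json
import time


class TokenCache:
    """Client-side token cache in the spirit of keyring-based token reuse.

    In a real client, self.store would be the OS keyring (via the
    python-keyring library's set_password/get_password); a plain dict
    stands in for it here so the sketch runs anywhere.
    """

    def __init__(self):
        self.store = {}

    def save(self, username, token_id, expires_at):
        # Persist the token id plus its expiry as a JSON blob,
        # mirroring what a keyring entry would hold.
        self.store[username] = json.dumps(
            {"id": token_id, "expires": expires_at}
        )

    def load(self, username, now=None):
        # Return a cached token only while it is still valid;
        # otherwise the caller must re-authenticate against Keystone.
        raw = self.store.get(username)
        if raw is None:
            return None
        entry = json.loads(raw)
        current = time.time() if now is None else now
        if entry["expires"] <= current:
            return None  # expired: force a fresh authentication
        return entry["id"]
```

The point of the pattern is that each CLI invocation reuses one token for its whole lifetime, which attacks the token-table growth from the client side rather than inside Keystone.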
Re: [openstack-dev] [keystone] Two BPs for managing the tokens
On Fri, Aug 23, 2013 at 7:48 PM, Yongsheng Gong gong...@unitedstack.com wrote:

Hi Adam,

Can you explain more about 'In conjunction with the caching layer, it might be the right approach: flush the old tokens upon revocation list regeneration'? When is list_revoked_tokens called?

In a PKI-token based deployment, auth_token periodically fetches a list of revoked tokens so that it knows which tokens to deny, even though they are otherwise valid.

--
-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
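The revocation-list mechanism Dolph describes can be sketched roughly as follows. This is a hypothetical illustration of the pattern only, not the actual auth_token middleware: fetch_revoked stands in for the real call that downloads the signed revocation list from Keystone (the server-side list_revoked_tokens handler), and the refresh interval is an arbitrary assumed value.

```python
import time


class RevocationCache:
    """Sketch of auth_token-style revocation checking.

    The middleware validates PKI tokens locally, so it cannot see
    revocations unless it periodically re-downloads the revocation
    list; between refreshes it answers from the cached set.
    """

    def __init__(self, fetch_revoked, refresh_interval=300):
        self.fetch_revoked = fetch_revoked      # stand-in for the Keystone call
        self.refresh_interval = refresh_interval
        self.revoked = set()
        self.last_refresh = 0.0

    def is_revoked(self, token_id, now=None):
        now = time.time() if now is None else now
        if now - self.last_refresh >= self.refresh_interval:
            # Periodic refresh: this is the moment the server-side
            # list_revoked_tokens would be hit.
            self.revoked = set(self.fetch_revoked())
            self.last_refresh = now
        return token_id in self.revoked
```

This also shows why "flush old tokens upon revocation list regeneration" is attractive: the store is already being walked to build the list, so expired rows can be purged in the same pass.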