[openstack-dev] Commit the cinder Driver
Hi All, I developed a Cinder driver for CloudByte's ElastiStor, and now I want to check this code into master. I don't have much knowledge of git or the OpenStack commit process; can you guys please guide me on committing the code to the master branch? Thanks, Mardan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Commit the cinder Driver
Hi Mardan, Thanks! For the CloudByte ElastiStor I can see a couple of (duplicate) blueprints filed [1] [2]. You should probably assign one of them to yourself and update it with details until the cinder-spec process is available for blueprint review. For details about the commit process, please refer to [3]. For any issues with the commit process, feel free to ask questions in #openstack-cinder on freenode.net. Also, you should try to attend the Cinder meetings [4] to update the team about your progress on commits and about any updates related to new driver submissions. [1] https://blueprints.launchpad.net/cinder/+spec/cloudbyte-elastistor-iscsi-driver [2] https://blueprints.launchpad.net/cinder/+spec/cloudbyte-elastistor-iscsi-driver-cinder [3] https://wiki.openstack.org/wiki/Gerrit_Workflow [4] https://wiki.openstack.org/wiki/CinderMeetings Best Regards, Swapnil Kulkarni irc : coolsvap cools...@gmail.com +91-87960 10622(c) http://in.linkedin.com/in/coolsvap *It's better to SHARE* On Wed, May 7, 2014 at 11:24 AM, Mardan Raghuwanshi mardan.si...@cloudbyte.com wrote: Hi All, I developed Cinder driver for CloudByte's ElastiStor, now I want to check in this code into master. I don't have more idea about git and OpenStack commit process, can you guys please guide me to commit the code into master branch. Thanks, Mardan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey
The survey is not anonymous, and I plan to publish it with its raw data; we can then discuss how to interpret it. Each use case has an accompanying text field so that you can add any comments you wish. At least I did add comments to most use cases when I responded :-) -Sam. -Original Message- From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] Sent: Tuesday, May 06, 2014 10:16 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey I agree that everyone's thoughts should be in it. I don't see why a representative vote does not allow for that. Sam put a text box on each use case to capture extra thoughts. I would hope that no organization would be so confused as to have widely varying viewpoints on *what their customers want*, since that is the supposed purpose of all of this, right? We're supposed to be deciding which use cases matter *to our customers*, so there should be no real variance between what I would vote and what my teammates would vote, since we have the same customers... Also, if we are using this as a type of voting mechanism, then the interests of large/vocal organizations drown out smaller organizations. If this is being used as a voting mechanism, then how do you suggest we weight votes for smaller companies so that we do not alienate them from further voting/discussions? Cheers, --Jorge On 5/6/14 1:52 PM, Jay Pipes jaypi...@gmail.com wrote: On 05/06/2014 02:42 PM, Jorge Miramontes wrote: Sam, I'm assuming you want one person from each company to answer, correct? I'm pretty sure people in each organization will vote the same... at least I'd hope! I'd hope not! :) Even within the same organization or company, we all have different ideas on use cases, the appropriateness of certain things in the cloud, and the role of a load balancer service in the general mix of things. 
I certainly would hope that lots of Mirantis engineers other than myself fill out the use case survey and offer their own insights. Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey
Jorge, It's your call. I prefer to gather information for now; if multiple people from the same organization have drastically different answers, then this should also be considered. -Sam. From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] Sent: Tuesday, May 06, 2014 9:42 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey Sam, I'm assuming you want one person from each company to answer, correct? I'm pretty sure people in each organization will vote the same... at least I'd hope! Cheers, --Jorge From: Samuel Bercovici samu...@radware.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, May 6, 2014 2:56 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi Everyone, The survey is now live via: http://eSurv.org?u=lbaas_project_user The password is: lbaas The survey includes all the tenant-facing use cases from https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing Please try to fill in the survey this week so we have enough information to base decisions on next week. Regards, -Sam. From: Samuel Bercovici Sent: Monday, May 05, 2014 4:52 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi, I will not freeze the document, to allow people to work on requirements which are not tenant facing (e.g. operator, etc.). I think that we have enough use cases for tenant-facing capabilities to reflect the most common use cases. I am in the process of creating a survey in SurveyMonkey for the tenant-facing use cases and hope to send it to the ML ASAP. Regards, -Sam. 
From: Samuel Bercovici Sent: Thursday, May 01, 2014 8:40 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Samuel Bercovici Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi Everyone! To assist in evaluating the use cases that matter, and since we now have ~45 use cases, I would like to propose conducting a survey using something like SurveyMonkey. The idea is to have a non-anonymous survey listing the use cases and to ask you to identify and vote. Then we will publish the results and can prioritize based on them. To do so in a timely manner, I would like to freeze the document for editing (allowing only comments) by Monday May 5th 08:00 AM UTC and publish the survey link to the ML ASAP after that. Please let me know if this is acceptable. Regards, -Sam. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2
Hi, Just to give this some context: the original idea for the driver came from the fact that there was no readily available method for using VLAN-backed port groups in dvSwitches with neutron. We tried using nova-network with regular vSwitches and VlanManager to evaluate the VMware driver in nova-compute, but it was decided that neutron is an absolute requirement for production use. The current code of the driver is tested with nova-compute controlling a vSphere 5.1 cluster. Multiple instances were successfully created, attached to the correct port group, and network connectivity was achieved. We are not using VXLANs at the moment and are not actively looking into deploying them, so implementing VXLAN support in the driver is not currently in our interest. Considering that VXLANs are configured as port groups in dvSwitches on the VMware side, there isn't much difference in that part. However, configuring the VXLANs in the vShield app is something I think is out of the scope of this driver. We're interested in going through the blueprint process. Due to a rather tight schedule on our end we had to get a limited-functionality version of the driver ready before we had time to look into the process of submitting a blueprint and the required specs. The current version of the driver implements the only required feature in our environment - attaching virtual machines on the VMware side to the correct dvSwitch port groups. Adding features like creating the port groups based on networks defined in neutron etc. is under consideration. I hope this answers some of the questions, and I'm happy to provide more details if needed. Regards -- Jussi Sorjonen, Systems Specialist, Data Center +358 (0)50 594 7848, jussi.sorjo...@cybercom.com Cybercom Finland Oy, Urho Kekkosen katu 3 B, 00100 Helsinki, FINLAND www.cybercom.fi | www.cybercom.com On 06/05/14 11:17, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi Ilkka, this is a very interesting MD for ML2. 
Have you ever tried to use your ML2 driver with the VMware drivers on the nova side, so that you could manage your VMs with nova and their networks with neutron? Do you think it would be difficult to extend your driver to support VXLAN encapsulation? Neutron has a new process to validate BPs. Please follow these instructions to submit your spec for review: https://wiki.openstack.org/wiki/Blueprints#Neutron regards On Mon, May 5, 2014 at 2:22 PM, Ilkka Tengvall ilkka.tengv...@cybercom.com wrote: Hi, I would like to start a discussion about an ML2 driver for the VMware distributed virtual switch (dvSwitch) for Neutron. There is a new blueprint made by Sami Mäkinen (sjm) at https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch. The driver is described and the code is publicly available and hosted on github: https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch We would like to get the driver through the contribution process, whatever that exactly means :) The original problem this driver solves is the following: We've been running a VMware virtualization platform in our data center since before OpenStack, and we will keep doing so due to existing services. We have also been running OpenStack for a while. Now we wanted to get the most out of both by combining the customers' networks on both platforms using provider networks. The problem is that the networks need two separate managers, neutron and vmware. There were no OpenStack tools to attach the guests on the VMware side to OpenStack provider networks during instance creation. Now we are putting our VMware under the control of OpenStack. We want to have one master to control the networks: Neutron. We implemented the new ML2 driver to do just that. It is capable of joining the machines created in vSphere to the same provider networks that OpenStack uses, using dvSwitch port groups. 
I just wanted to open the discussion; for the technical details please contact our experts on the CC list: Sami J. Mäkinen Jussi Sorjonen (freenode: mieleton) BR, Ilkka Tengvall Advisory Consultant, Cloud Architecture email: ilkka.tengv...@cybercom.com mobile: +358408443462 freenode: ikke-t web: http://cybercom.com - http://cybercom.fi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [DriverLog][nova][neutron][cinder] Call for vendor participation please
Hi, Thanks for the effort. While looking at the website and the driver database, I have a couple of questions and suggestions. - Is it better to include trunk (= Juno) in 'releases' for each driver if it is part of trunk, or to wait until Juno is released? We need some guidelines on this. - Which is better as the maintainer email, an individual mail address or the CI account contact address? IMO an individual mail address looks better, because the CI account contact address receives all review comments, and mails to that address can be missed or not noticed soon, in my experience. It would be better to have some guideline on the maintainer email. - How is the CI-tested status determined? I am not sure how it is handled in the wiki information. - (Related to the above) How does DriverLog handle a case where multiple drivers are tested under one CI account? AFAIK some CI accounts run third-party testing for multiple drivers. - 'releases' in the drivers section is a list of release names now. It means we need to update 'releases' on every release. I wonder if we can support a [from_release, to_release] style. If to_release is omitted, it means trunk. Thanks, Akihiro (2014/04/29 2:05), Jay Pipes wrote: Hi Stackers, Mirantis has been collaborating with a number of OpenStack contributors and PTLs for the last couple of months on something called DriverLog. It is an effort to consolidate and display information about the verification of vendor drivers in OpenStack. The current implementation is here: http://staging.stackalytics.com/driverlog/ Public wiki here: https://wiki.openstack.org/wiki/DriverLog Code is here: https://github.com/stackforge/driverlog There is currently a plan by the foundation to publicly announce this in the coming weeks. 
At this point Evgeniya Shumakher, in cc, is manually maintaining the records, but we aspire for this to become a community-driven process over time, with vendors submitting updates as described in the wiki and PTLs and cores of the respective projects participating in update reviews. A REQUEST: If you are a vendor that has built an OpenStack driver, please check that it is listed on the dashboard and update the record (following the process in the wiki) to make sure the information is accurately reflected. We want to make sure that the data is accurate prior to announcing it to the general public. Also, if anybody has a suggestion on what should be improved / changed etc. == please don’t hesitate to share your ideas! Thanks! Jay and Evgeniya ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
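Akihiro's [from_release, to_release] suggestion above could be sketched roughly like this. This is a hypothetical illustration only — the release list, function name, and semantics are assumptions, not DriverLog's actual schema or code:

```python
# Hypothetical sketch of expanding a [from_release, to_release] range
# into the explicit per-release list DriverLog records use today.
# Release names/ordering are assumed for illustration.

RELEASES = ["folsom", "grizzly", "havana", "icehouse", "juno"]  # last entry = trunk

def expand_releases(from_release, to_release=None):
    """Return every release from from_release to to_release, inclusive.

    If to_release is omitted, the range extends to trunk (the last entry),
    so records would not need updating on every release.
    """
    start = RELEASES.index(from_release)
    end = len(RELEASES) - 1 if to_release is None else RELEASES.index(to_release)
    return RELEASES[start:end + 1]

print(expand_releases("havana", "icehouse"))  # ['havana', 'icehouse']
print(expand_releases("havana"))              # ['havana', 'icehouse', 'juno']
```

With this style, a driver record stating `["havana", null]` would keep matching each new release automatically until a to_release is filled in.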
Re: [openstack-dev] [Fuel-dev] [Fuel][RabbitMQ] nova-compute stuck for a while (AMQP)
Roman, the current stable/4.1 has some fixes that make this less likely to occur, and it is the most likely to recover. That said, I've done some tracing and there are some issues with nova-conductor processing those messages. Some of the time I've seen the compute node be the issue; other times I've seen nova-conductor be the issue. As of stable/4.1 I've been able to track it down to nova-conductor. AFAICT it receives the message from nova-compute, takes it from the queue, acks the queue, and selects the object from the DB. However, after moving nova-compute and nova-conductor to trace-level logging in amqp and sqlalchemy, the issue appears to stop. I've yet to confirm whether the cluster state of rabbit changed, or whether the change in logging level changed it, or something else. On Tue, May 6, 2014 at 12:42 PM, Roman Sokolkov rsokol...@mirantis.com wrote: Hello, fuelers. I'm using Fuel 4.1A + Havana in HA mode. I regularly observe (on other deployments also) an issue with a stuck nova-compute service. But I think the problem is more fundamental and relates to HA RabbitMQ and the OpenStack AMQP driver implementation. Symptoms: A random nova-compute is from time to time marked as XXX for a while. I see that the service itself works properly. In the logs I see that it sends status updates to the conductor, but actually nothing is sent. netstat shows that all connections to/from rabbit are ESTABLISHED. rabbitmqctl shows that the compute.node-x queue is synced to all slaves. Nothing was broken before, I mean the rabbitmq cluster, etc. Axe-style solution: /etc/init.d/openstack-nova-compute restart So here I've found a lot of interesting stuff (and solutions): https://bugs.launchpad.net/oslo.messaging/+bug/856764 My questions are: Are there any thoughts particular to Fuel to solve/work around this issue? Any fast solution for this in 4.1? Like adjusting TCP keep-alive timeouts? -- Roman Sokolkov, Deployment Engineer, Mirantis, Inc. 
Skype rsokolkov, rsokol...@mirantis.com -- Mailing list: https://launchpad.net/~fuel-dev Post to : fuel-...@lists.launchpad.net Unsubscribe : https://launchpad.net/~fuel-dev More help : https://help.launchpad.net/ListHelp -- Andrew Mirantis Ceph community ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
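Regarding Roman's question about adjusting TCP keep-alive timeouts: one common workaround for half-dead AMQP connections (ESTABLISHED in netstat but silently dropped, e.g. by a failed-over node or a firewall) is enabling kernel keepalives on the client socket. A minimal sketch using Linux-specific socket options — this is an illustration of the mechanism, not Fuel's or oslo.messaging's actual code:

```python
import socket

# Sketch: enable TCP keepalive so a silently dead RabbitMQ connection is
# detected and torn down by the kernel instead of hanging forever.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# The fine-grained knobs are Linux-specific, hence the hasattr guard.
if hasattr(socket, "TCP_KEEPIDLE"):
    # Start probing after 60s idle, probe every 10s, drop after 5 failed probes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero when enabled
```

With those numbers a dead peer would be noticed after roughly 60 + 5 x 10 seconds; the same values can instead be set system-wide via net.ipv4.tcp_keepalive_* sysctls.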
[openstack-dev] [Neutron][Subnet] Unable to update external network subnet's gateway-ip
Hi All, I am trying the below scenario; please let me know whether it is correct: 1. Create one external network, i.e. with the router:external=True option. 2. Create one subnet under the above network with a gateway-ip provided. 3. Create one router. 4. Issue the command neutron router-gateway-set routerID-point3 networkID-point1. 5. Update the subnet from point 2 with a new gateway IP, i.e. neutron subnet-update subnetID-point2 --gateway-ip newIP. 6. I can see a successful subnet-updated response on the CLI. 7. To validate the changed gateway-ip, I checked the router namespace on the network node using the command ip netns exec router-namespace-point3 route -n. But in the output the new gateway-ip is not updated; it still shows the old one. Brief about my setup: 1. It has one controller node, one network node and 2 compute nodes. 2. I am on Icehouse GA. Regards, Vishal DISCLAIMER: This message is proprietary to Aricent and is intended solely for the use of the individual to whom it is addressed. It may contain privileged or confidential information and should not be circulated or used for any purpose other than for what it is intended. If you have received this message in error, please notify the originator immediately. If you are not the intended recipient, you are notified that you are strictly prohibited from using, copying, altering, or disclosing the contents of this message. Aricent accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2
Hi, one other part of your future work, if the BP is accepted, is to deploy a third-party testing environment. Since your work doesn't rely on an open-source solution (VMware), it won't be tested by the OpenStack CI. You will have to deploy an infrastructure to test and vote on each patch submitted to neutron that might break your driver. See: http://ci.openstack.org/third_party.html regards On Wed, May 7, 2014 at 8:39 AM, Jussi Sorjonen jussi.sorjo...@cybercom.com wrote: Hi, Just to give this some context: the original idea for the driver came from the fact that there was no readily available method for using VLAN-backed port groups in dvSwitches with neutron. We tried using nova-network with regular vSwitches and VlanManager to evaluate the VMware driver in nova-compute, but it was decided that neutron is an absolute requirement for production use. The current code of the driver is tested with nova-compute controlling a vSphere 5.1 cluster. Multiple instances were successfully created, attached to the correct port group, and network connectivity was achieved. We are not using VXLANs at the moment and are not actively looking into deploying them, so implementing VXLAN support in the driver is not currently in our interest. Considering that VXLANs are configured as port groups in dvSwitches on the VMware side, there isn't much difference in that part. However, configuring the VXLANs in the vShield app is something I think is out of the scope of this driver. We're interested in going through the blueprint process. Due to a rather tight schedule on our end we had to get a limited-functionality version of the driver ready before we had time to look into the process of submitting a blueprint and the required specs. The current version of the driver implements the only required feature in our environment - attaching virtual machines on the VMware side to the correct dvSwitch port groups. Adding features like creating the port groups based on networks defined in neutron etc. 
are under consideration. I hope this answers some of the questions, and I'm happy to provide more details if needed. Regards -- Jussi Sorjonen, Systems Specialist, Data Center +358 (0)50 594 7848, jussi.sorjo...@cybercom.com Cybercom Finland Oy, Urho Kekkosen katu 3 B, 00100 Helsinki, FINLAND www.cybercom.fi | www.cybercom.com On 06/05/14 11:17, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi Ilkka, this is a very interesting MD for ML2. Have you ever tried to use your ML2 driver with the VMware drivers on the nova side, so that you could manage your VMs with nova and their networks with neutron? Do you think it would be difficult to extend your driver to support VXLAN encapsulation? Neutron has a new process to validate BPs. Please follow these instructions to submit your spec for review: https://wiki.openstack.org/wiki/Blueprints#Neutron regards On Mon, May 5, 2014 at 2:22 PM, Ilkka Tengvall ilkka.tengv...@cybercom.com wrote: Hi, I would like to start a discussion about an ML2 driver for the VMware distributed virtual switch (dvSwitch) for Neutron. There is a new blueprint made by Sami Mäkinen (sjm) at https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch. The driver is described and the code is publicly available and hosted on github: https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch We would like to get the driver through the contribution process, whatever that exactly means :) The original problem this driver solves is the following: We've been running a VMware virtualization platform in our data center since before OpenStack, and we will keep doing so due to existing services. We have also been running OpenStack for a while. Now we wanted to get the most out of both by combining the customers' networks on both platforms using provider networks. The problem is that the networks need two separate managers, neutron and vmware. 
There were no OpenStack tools to attach the guests on the VMware side to OpenStack provider networks during instance creation. Now we are putting our VMware under the control of OpenStack. We want to have one master to control the networks: Neutron. We implemented the new ML2 driver to do just that. It is capable of joining the machines created in vSphere to the same provider networks that OpenStack uses, using dvSwitch port groups. I just wanted to open the discussion; for the technical details please contact our experts on the CC list: Sami J. Mäkinen Jussi Sorjonen (freenode: mieleton) BR, Ilkka Tengvall Advisory Consultant, Cloud Architecture email: ilkka.tengv...@cybercom.com mobile: +358408443462 freenode: ikke-t web: http://cybercom.com - http://cybercom.fi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ML2] L2 population mechanism driver
hi, please consider that maintaining drivers from one openstack version to the next is really hard when those drivers are outside the scope of the neutron developers/continuous-integration process. For instance, ML2 is currently being refactored by this patch: https://review.openstack.org/#/c/82945/ You will have to adapt your code constantly! regards On Tue, May 6, 2014 at 8:13 PM, Sławek Kapłoński sla...@kaplonski.pl wrote: Hello, Thanks for the explanation. Now it is clear for me :) I made my own driver because I configure hosts in a special way, but I can't describe the details :/ Best regards Slawek Kaplonski sla...@kaplonski.pl On Tuesday, 6 May 2014 at 10:35:05, Mathieu Rohon wrote: Hi slawek, As soon as you declare l2population in the MD section of the config file, it will be loaded by the ML2 plugin. The l2population MD will listen to ML2 DB events and send RPC messages to the ovs agent when needed. The l2population MD will not bind the port; the ovs MD will. By the way, you also need to add the l2population flag in the config file of the agent. Just to be curious, what is your MD which inherits from the l2_pop MD made for? regards Mathieu On Mon, May 5, 2014 at 11:06 PM, Sławek Kapłoński sla...@kaplonski.pl wrote: Hello, Thanks for the answer. Now I have made my own mech_driver which inherits from the l2_pop driver and it is working OK. But if I set mechanism_drivers=openvswitch,l2population, then ports will be bound by the ovs driver, so will the population mechanism work in such a network? Best regards Slawek Kaplonski sla...@kaplonski.pl On Monday, 5 May 2014 at 00:29:56, Narasimhan, Vivekanandan wrote: Hi Slawek, I think the L2 pop driver needs to be used in conjunction with other mechanism drivers. It only deals with proactively informing agents about which MAC addresses became available/unavailable on cloud nodes, and it is not meant for binding/unbinding ports on segments. 
If you configure mechanism_drivers=openvswitch,l2population in your ml2_conf.ini and restart your neutron-server, you'll notice that bind_port is handled by the OVS mechanism driver (via AgentMechanismDriverBase inside ml2/drivers/mech_agent.py). -- Thanks, Vivek -Original Message- From: Sławek Kapłoński [mailto:sla...@kaplonski.pl] Sent: Sunday, May 04, 2014 12:32 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [ML2] L2 population mechanism driver Hello, Recently I wanted to try using the L2pop mechanism driver in ML2 (with openvswitch agents on compute nodes). But every time I try to spawn an instance I get a 'binding failed' error. After some searching in the code I found that the l2pop driver has not implemented the bind_port method, and as it inherits directly from MechanismDriver this method is in fact not implemented. Is this OK and should this mechanism driver be used in another way, or is there some bug in this driver such that it is missing this method? Best regards Slawek Kaplonski sla...@kaplonski.pl ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
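Vivek's point — that l2population only publishes MAC reachability and leaves port binding to another driver — can be illustrated with a toy mock. These classes are simplified stand-ins for the ML2 driver loop, not the real neutron.plugins.ml2 classes:

```python
# Toy illustration of why mechanism_drivers=openvswitch,l2population works
# but l2population alone gives "binding failed": ML2 asks each driver in
# turn to bind the port; the l2pop-like driver declines (base behavior),
# the OVS-like driver binds. Simplified mock, not neutron code.

class MechanismDriver:
    def bind_port(self, context):
        return False  # base class: no binding capability

class L2PopLikeDriver(MechanismDriver):
    """Only fans out MAC reachability; inherits the no-op bind_port."""
    def notify(self, mac, host):
        print("l2pop: %s is now reachable on %s" % (mac, host))

class OvsLikeDriver(MechanismDriver):
    def bind_port(self, context):
        context["bound_by"] = "openvswitch"
        return True

def bind(drivers, context):
    # ML2-style loop: the first driver that can bind the port wins.
    for driver in drivers:
        if driver.bind_port(context):
            return driver
    raise RuntimeError("binding failed")  # what you see with l2pop alone

ctx = {}
winner = bind([L2PopLikeDriver(), OvsLikeDriver()], ctx)
print(type(winner).__name__, ctx["bound_by"])  # OvsLikeDriver openvswitch
```

With only L2PopLikeDriver in the list, bind() raises "binding failed", mirroring Slawek's original error; adding the OVS-like driver makes binding succeed while the l2pop-like driver keeps doing its notification job.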
Re: [openstack-dev] [neutron] Design Summit Sessions
Kyle Mestery wrote: On Tue, May 6, 2014 at 9:35 AM, Susanne Balle sleipnir...@gmail.com wrote: Kyle Will the Neutron pod location be a known place to attendees? Will there be signs, etc? Per some information I received from ttx, I believe the pods are on the same level as the rooms for the Design Summit Sessions, and the tables will be clearly labeled for each project. The pods are actually in the Design Summit area, on the level just below the Design Summit session rooms. Tables will be labeled and a map will be posted. There will be a flipchart (paperboard) for each pod, so you can use that as a standing advertisement of future discussion times. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Juno-Summit] availability of the project project pod rooms on Monday May 12th?
Susanne Balle wrote: Thierry How do I reserve a pod for a specific time and day. I am getting ready to setup a meeting. There will be a flipchart / paperboard for every pod. I'd suggest using it to advertise the time and subject of your meeting. That's just a suggestion though... each program is free to make the best use of the space allocated to it. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] nova-compute error
sonia verma wrote: Hi all I'm getting following error when booting VM onto compute node from openstack dashboard... [...] Hi! This list is for future development topics only, and your question is about debugging an issue you have using OpenStack. You shall get better answers by asking your question on the general OpenStack mailing-list (openst...@lists.openstack.org). For more information see: https://wiki.openstack.org/wiki/Mailing_Lists -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [swift] swiftclient functional tests gate for python-swiftclient
Hello, Now that the functional tests for swiftclient have been merged, what do you think about adding a job for it to actually run them against a devstack, the same way we do in 'swift'? Thanks, Chmouel ---BeginMessage--- Jenkins has submitted this change and it was merged. Change subject: Add functional tests for python-swiftclient .. Add functional tests for python-swiftclient Coverage for swiftclient.client is 71% with these tests. Unit tests have been moved into another subdirectory to separate them from functional tests. Change-Id: Ib8c4d78f7169cee893f82906f6388a5b06c45602 --- A .functests M .unittests M swiftclient/exceptions.py A tests/functional/__init__.py A tests/functional/test_swiftclient.py A tests/sample.conf A tests/unit/__init__.py R tests/unit/test_command_helpers.py R tests/unit/test_multithreading.py R tests/unit/test_swiftclient.py R tests/unit/test_utils.py R tests/unit/utils.py M tox.ini 13 files changed, 320 insertions(+), 2 deletions(-) Approvals: Alistair Coles: Looks good to me (core reviewer); Approved Chmouel Boudjnah: Looks good to me (core reviewer) Jenkins: Verified -- To view, visit https://review.openstack.org/76355 To unsubscribe, visit https://review.openstack.org/settings Gerrit-MessageType: merged Gerrit-Change-Id: Ib8c4d78f7169cee893f82906f6388a5b06c45602 Gerrit-PatchSet: 16 Gerrit-Project: openstack/python-swiftclient Gerrit-Branch: master Gerrit-Owner: Christian Schwede christian.schw...@enovance.com Gerrit-Reviewer: Alistair Coles alistair.co...@hp.com Gerrit-Reviewer: Chmouel Boudjnah chmo...@enovance.com Gerrit-Reviewer: Christian Schwede christian.schw...@enovance.com Gerrit-Reviewer: Donagh McCabe donagh.mcc...@hp.com Gerrit-Reviewer: Elastic Recheck Gerrit-Reviewer: Jenkins Gerrit-Reviewer: Luis de Bethencourt l...@debethencourt.com Gerrit-Reviewer: Samuel Merritt s...@swiftstack.com ---End Message--- ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
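Functional tests of this kind typically follow a skip-unless-configured pattern: they run only when a config with real endpoint credentials is present (the merged change adds tests/sample.conf for exactly this). A generic unittest skeleton of that pattern — the config path and env var name here are assumptions for illustration, not the contents of the actual tests/functional/test_swiftclient.py:

```python
import os
import unittest

# Assumed convention: point this at a filled-in copy of tests/sample.conf.
CONFIG_FILE = os.environ.get("SWIFT_TEST_CONFIG_FILE", "tests/functional.conf")

class TestSwiftClientFunctional(unittest.TestCase):
    """Skeleton of a functional test that only runs against a real
    endpoint (e.g. a devstack). With no config present, it skips, so
    the suite stays green on machines without a cluster."""

    def setUp(self):
        if not os.path.exists(CONFIG_FILE):
            self.skipTest("no functional test config at %s" % CONFIG_FILE)
        # A real test would read auth_url/user/key from the config here
        # and open a swiftclient Connection.

    def test_account_head(self):
        # Placeholder for a real assertion against the live endpoint.
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSwiftClientFunctional)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A devstack gate job would then simply provide the config and run the same suite, making the tests "live" without any code change.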
[openstack-dev] Plugin Mechanism for Openstack Components to provide HA
I was wondering if any work has been done on developing a standard plugin mechanism to provide HA to the OpenStack components. Let me try to explain what I mean. Today each component can, to a certain degree, be made to work in either an active/active or active/passive fashion, but the method differs by component: Galera for MySQL, for example, while RabbitMQ is already multi-node. The rest of the components are usually put behind HAProxy. But with every new component added (for example Heat, Ceilometer, Trove and many more to come), ideally (for me at least) the best way would be this: in the same way you install the package through yum/apt, part of this package could include a plugin for a central HA component with all the information needed to make this component highly available. Am I barking up the wrong tree? With best regards, Maish Saidel-Keesing Platform Architect SPVSS msaid...@cisco.com Phone: +972-2-5886103 Mobile: +972542206103 http://twitter.com/maishsk http://vexpert.me/maishsk http://il.linkedin.com/in/maish/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] nova-compute error
Thanks Thierry for the help. On Wed, May 7, 2014 at 1:31 PM, Thierry Carrez thie...@openstack.org wrote: sonia verma wrote: Hi all I'm getting the following error when booting a VM onto the compute node from the openstack dashboard... [...] Hi! This list is for future development topics only, and your question is about debugging an issue you have using OpenStack. You will get better answers by asking your question on the general OpenStack mailing list (openst...@lists.openstack.org). For more information see: https://wiki.openstack.org/wiki/Mailing_Lists -- Thierry Carrez (ttx)
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
@Mike Thanks, and sorry for misleading you. I know that the volume already has a bootable field. My question is that once a volume has been created, its glance_image_metadata is immutable. However, the volume's blocks are constantly being changed, so some properties of its glance_image_metadata will become stale. An example is the hw_scsi_mode property of glance_image_metadata, which affects the SCSI controller used when booting from the volume. 2014-05-07 11:09 GMT+08:00 Mike Perez thin...@gmail.com: On 06:31 Wed 07 May , Trump.Zhang wrote: Thanks for your further instructions. I think the situations I mentioned are reasonable use cases. They are similar to the bootable volume use cases: a user can create an empty volume and install an OS in it from an image, or create a bootable volume from an instance ([1]). If volume metadata is not intended to be interpreted by cinder or nova as meaning anything, maybe Cinder needs to add support for updating some of a volume's glance_image_metadata, or introduce a new property for volumes like bootable? I don't think these two methods are good either. [1] https://blueprints.launchpad.net/cinder/+spec/add-bootable-option Volume already has a bootable field: https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L122 -- Mike Perez -- --- Best Regards Trump.Zhang
[openstack-dev] Juno Design Summit schedule frozen
Hi everyone, The Juno Design Summit schedule is now frozen. There may be last-minute changes, but they shall be announced in a post here. So you can start planning your week. You can access the schedule at: http://junodesignsummit.sched.org/ Alternatively you can use the Guidebook app that shows both the conference and design summit schedule: iOS app: https://itunes.apple.com/us/app/openstack-summit/id859699528?mt=8 Android app: https://play.google.com/store/apps/details?id=com.guidebook.apps.OpenStack.android See you all next week! -- Thierry Carrez (ttx)
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
@Tripp, Thanks for your reply and the info. I am also wondering whether it is proper to add support for updating the volume's glance_image_metadata to reflect the newest status of the volume. However, there may be alternative ways to achieve it: 1. Using the volume's metadata 2. Using the volume's admin_metadata So I am wondering which is the most proper method. 2014-05-07 12:32 GMT+08:00 Tripp, Travis S travis.tr...@hp.com: A few days ago I entered a client blueprint on the same topic [1], but maybe it has a server side dependency as well? When it comes to scheduling, as far as I have been able to tell from looking at Nova code, the scheduler is only getting volume_image_metadata and not the regular cinder metadata. So, if you want to add some volume_image_metadata for scheduler filtering or for passing compute driver options through after creating a volume, there doesn't seem to be a way to do this from the python-cinderclient. If I'm wrong, please correct me. [1] https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata -Original Message- From: Mike Perez [mailto:thin...@gmail.com] Sent: Tuesday, May 06, 2014 9:10 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata On 06:31 Wed 07 May , Trump.Zhang wrote: Thanks for your further instructions. I think the situations I mentioned are the reasonable use cases. They are similar to the bootable volume use cases, user can create an empty volume and install os in it from an image or create bootable volume from instance ([1]). If volume metadata is not intended to be interpreted by cinder or nova as meaning anything, maybe Cinder needs to add support for updating some of glance_image_metadata of volume or introduce new property for volume like bootable ? I don't think these two methods are good either.
[1] https://blueprints.launchpad.net/cinder/+spec/add-bootable-option Volume already has a bootable field: https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L122 -- Mike Perez -- --- Best Regards Trump.Zhang
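To make the distinction in this thread concrete, here is an illustrative sketch, using plain Python dicts rather than cinder's actual data model, of the three metadata buckets and the observation above that only volume_image_metadata reaches the nova scheduler. All keys and values are examples, and the helper function is made up for the illustration.

```python
# Illustrative sketch only -- plain dicts, not cinder's data model.
# The three metadata buckets discussed in this thread:
volume = {
    "metadata": {"purpose": "db-data"},        # user-editable key/values
    "admin_metadata": {"readonly": "False"},   # admin-only key/values
    # Copied from the image at create time and currently immutable:
    "volume_image_metadata": {"hw_scsi_mode": "example-value"},
}

def scheduler_visible(volume):
    # Per the observation above, the nova scheduler only receives
    # volume_image_metadata, not the regular volume metadata.
    return volume.get("volume_image_metadata", {})

print(scheduler_visible(volume))
```

This is why updating regular metadata or admin_metadata would not, by itself, influence scheduler filtering: the stale-but-immutable image metadata is the only bucket the scheduler sees.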
Re: [openstack-dev] [infra] Intermittent network problems allowed to sneak passed the gate?
On 06/05/14 23:55, Jeremy Stanley wrote: On 2014-05-06 15:52:04 +0100 (+0100), Derek Higgins wrote: [...] The job simply got restarted and this kept happening until the job passed. A legitimately failed job : https://jenkins05.openstack.org/job/check-nova-docker-dsvm-f20/2/ http://logs.openstack.org/14/91514/5/check/check-nova-docker-dsvm-f20/d5c1ebf/console.html [...] If the job fails in such a way that it impacts communication between the slave and the Jenkins master, or tanks the slave so badly that it ceases responding entirely, Jenkins often does not report a build completion status. Because this happens rather unfortunately often due to the nature of connectivity in service providers and due to bugs in Jenkins, Zuul assumes it should automatically reattempt any job which ceases running without explanation. Perhaps one option would be to keep a retry counter and not reattempt a job which fails in this manner more than once or twice...? It won't catch all cases, but it sounds like a good idea to me. If there is somebody familiar with the zuul code who can quickly do it, great; otherwise I can try to make myself familiar. Derek.
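The retry-counter idea suggested above can be sketched in a few lines. This is illustrative logic only, not actual Zuul code; the names (Build, MAX_RETRIES, handle_lost_build) are made up for the example.

```python
# Sketch of capping relaunches of builds that vanish without a
# completion status, so a crashing job cannot loop until it passes.
MAX_RETRIES = 2  # reattempt at most twice, per the suggestion above

class Build:
    def __init__(self, job_name):
        self.job_name = job_name
        self.retries = 0

def handle_lost_build(build, relaunch, report_failure):
    """Called when a build ends without reporting a completion status."""
    if build.retries < MAX_RETRIES:
        build.retries += 1
        relaunch(build)
    else:
        # Too many unexplained losses: treat it as a real failure.
        report_failure(build)

# Example: the first two losses relaunch, the third reports failure.
events = []
b = Build("check-nova-docker-dsvm-f20")
for _ in range(3):
    handle_lost_build(b, lambda x: events.append("relaunch"),
                      lambda x: events.append("fail"))
print(events)  # ['relaunch', 'relaunch', 'fail']
```

As Jeremy notes, this would not catch every case (a job that crashes the slave on its third attempt still wastes two runs), but it bounds the restart loop described in the thread.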
[openstack-dev] [Fuel] How to install openstack into containers using FUEL?
Nobody has tried to use docker for slave nodes, I guess. Actually, you will face several issues. First, you need to install a basic system into the container, install packages and run our postinstall script from our kickstart file: https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/cobbler/templates/kickstart/centos.ks.erb. After that you will need to make sure that the nailgun agent sends information to the master node. That will allow you to see the node in the UI and add it to a cluster. Then you can either manually modify our database and set the 'provisioned' flag for the node, or use the 'fuel' console client to run deployment without provisioning. After that you'll need to solve issues with storage, if any; almost every storage backend expects a particular filesystem to be present. Also, our master node uses docker for services management in the 5.0 release. If you are going to implement this, please share the results. It would be very helpful. On Mon, May 5, 2014 at 2:09 PM, Konstantin Danilov kdani...@mirantis.com wrote: Hi all, At the moment I'm using FUEL with vagrant. This combination allows only emulation mode (qemu), which gives too low VM performance. I want to install openstack into containers (Docker, for example) using FUEL, as this would allow me to use full-speed virtualization with kvm, but it seems FUEL supports only nodes which are booted over PXE using the FUEL image, which is impossible in the case of containers. Is there any way to install the FUEL services and make all necessary settings on an already running node? Or any other way to add a node to FUEL without PXE boot? Thanks -- Kostiantyn Danilov aka koder.ua Principal software engineer, Mirantis skype:koder.ua http://koder-ua.blogspot.com/ http://mirantis.com
Re: [openstack-dev] Monitoring as a Service
Hi Alexandre, I wanted to let this discussion develop a little before jumping in, as we've already had many circular debates about the cross-over between ceilometer and monitoring infrastructure in general. Personally I'm in favor of the big tent/broad church interpretation of ceilometer's project mandate, and would welcome further development of our capabilities in this area (whether directly within the ceilometer code-tree itself, or within a parallel repo aligned with the Telemetry program). In terms of furthering the discussion, unfortunately you've missed the boat in terms of securing a slot in the design summit next week in Atlanta (proposal deadline was April 20th, and the scheduling has all been finalized at this stage). However, we do have a project pod space available for ad-hoc overflow sessions. I would suggest that we organize something on this theme after the main ceilometer track[1] has completed, say on the Thursday or Friday. Please reach out on IRC to discuss availability for this and we'll work out something around remote participation. Thanks, Eoghan [1] http://junodesignsummit.sched.org/overview/type/ceilometer - Original Message - Thanks to everyone for the feedback. I agree that this falls under the Telemetry Program and I have moved the blueprint. You can find it here: https://blueprints.launchpad.net/ceilometer/+spec/monitoring-as-a-service Wiki page: https://wiki.openstack.org/wiki/MaaS Etherpad: https://etherpad.openstack.org/p/MaaS I can go over the project with you as well as others that are interested. We would like to start working with other open-source developers. I'll also be at the Summit next week. Roland, I currently have no plans to be at the Summit next week. However, I would be interested in exploring what you have already done and learn from it. Maybe we can schedule a meeting? You can always contact me on IRC (aviau) or by e-mail at alexandre.v...@savoirfairelinux.com For now, I think we should focus on the use cases. 
I invite all of you to help us list them on the Etherpad. Alexandre On 14-05-05 12:00 PM, Hochmuth, Roland M wrote: Alexandre, Great timing on this question and I agree with your proposal. I work for HP and we are just about to open-source a project for Monitoring as a Service (MaaS), called Jahmon. Jahmon is based on our customer-facing monitoring as a service solution and internal monitoring projects. Jahmon is a multi-tenant, highly performant, scalable, reliable and fault-tolerant monitoring solution that scales to service provider levels of metrics throughput. It has a RESTful API that is used for storing/querying metrics, creating compound alarms, querying alarm state/history, sending notifications and more. I can go over the project with you as well as others that are interested. We would like to start working with other open-source developers. I'll also be at the Summit next week. Regards --Roland On 5/4/14, 1:37 PM, John Dickinson m...@not.mn wrote: One of the advantages of the program concept within OpenStack is that separate code projects with complementary goals can be managed under the same program without needing to be the same codebase. The most obvious example across every program are the server and client projects under most programs. This may be something that can be used here, if it doesn't make sense to extend the ceilometer codebase itself. --John On May 4, 2014, at 12:30 PM, Denis Makogon dmako...@mirantis.com wrote: Hello to All. I also +1 this idea. As I can see, Telemetry program (according to Launchpad) covers the process of the infrastructure metrics (networking, etc) and in-compute-instances metrics/monitoring. So, the best option, I guess, is to propose add such great feature to Ceilometer. In-compute-instance monitoring will be the great value-add to upstream Ceilometer. As for me, it's a good chance to integrate well-known production ready monitoring systems that have tons of specific plugins (like Nagios etc.) 
Best regards, Denis Makogon On Sunday, 4 May 2014, John Griffith wrote: On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote: On 05/02/2014 05:17 AM, Alexandre Viau wrote: Hello Everyone! My name is Alexandre Viau from Savoir-Faire Linux. We have submitted a Monitoring as a Service blueprint and need feedback. Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage information collected from OpenStack components (originally for billing). While Ceilometer is useful for the cloud operators and infrastructure metering, it is not a *monitoring* solution for the tenants and their services/applications running in the cloud because it does not allow for service/application-level monitoring and it ignores detailed and precise guest system metrics. Proposed solution: We
[openstack-dev] [Trove] Meeting Wednesday, 7 May @ 1800 UTC
Just a quick reminder for the weekly Trove meeting. https://wiki.openstack.org/wiki/Meetings#Trove_.28DBaaS.29_meeting Date/Time: Wednesday, 7 May - 1800 UTC / 1100 PDT / 1300 CDT IRC channel: #openstack-meeting-alt The Meeting Agenda can be found at https://wiki.openstack.org/wiki/Meetings/TroveMeeting See you folks then!
Re: [openstack-dev] [Juno-Summit] availability of the project project pod rooms on Monday May 12th?
Hi Thierry - I had a question about Program pods access. Do people need to be ATCs to get into the area with the pods? I ask, as not all members of the Designate group would be ATCs... Thanks, Graham On Tue, 2014-05-06 at 21:53 +0200, Thierry Carrez wrote: Carl Baldwin wrote: Is there a map, a list, or some other official reference? I may like to use a pod for a cross-project discussion about DNS between Nova, Neutron, and Designate. Not a big deal but it might be nice to know more about what we're looking for when we get there. There will be a designated pod for each of the official programs at: https://wiki.openstack.org/wiki/Programs Some programs share a pod. There will be a map at the center of the space, as well as signage up to help find the relevant pod. For a cross-project discussion you'd have to pick a pod where to have it. In your case I'd recommend the Neutron pod since Nova shares its pod with Glance and there is no Designate pod. Cheers,
Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 05/08 14-00 UTC
Howdy, y'all! I just wanted to give you a quick update: It looks like the Rackspace team is mostly done with their half of the API comparison; however, it is extremely unlikely I'll be able to finish my half in time for the team meeting this Thursday. I apologize for this. Stephen On Tue, May 6, 2014 at 1:27 PM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi folks, This will be the last meeting before the summit, so I suggest we focus on the agenda for the two design track slots we have. In my experience, design tracks are not very good for in-depth discussion, so it only makes sense to present a road map and some other items that might need core team attention, like interaction with Barbican and such. Another item for the meeting will be the comparison of API proposals, which was an action item from the last meeting. Thanks, Eugene. -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
[openstack-dev] [Sahara] Split sahara-api into api and engine
Hello people, The following patch set splits the monolithic sahara-api process into two - sahara-api and sahara-engine: https://review.openstack.org/#/c/90350/ After the change is merged, there will be three binaries to run Sahara:
* sahara-all - runs Sahara all-in-one (like sahara-api does right now)
* sahara-api - runs the Sahara API endpoint and offloads 'heavy' tasks to sahara-engine
* sahara-engine - executes tasks which are either 'heavy' or require a remote connection to VMs
Most probably you will want to keep running the all-in-one process in your dev environment, so you need to switch to using sahara-all instead of sahara-api. To make the transition smooth, we've merged another change which adds the sahara-all process as an alias for sahara-api. That means you can switch to using sahara-all right now, so when the patch is merged you will not notice the change. Thanks, Dmitry
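In command form, the two deployment modes described above would look roughly like this. The --config-file flag follows the usual oslo.config convention and the path is an example assumption, not taken from the patch itself:

```
# All-in-one, as before (typical for dev environments):
sahara-all --config-file /etc/sahara/sahara.conf

# Two-process deployment after the split; the API endpoint offloads
# heavy tasks to the engine:
sahara-api --config-file /etc/sahara/sahara.conf
sahara-engine --config-file /etc/sahara/sahara.conf
```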
[openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron
Hi Advanced Services/LBaaS Stackers, We are setting up a meeting to discuss if it makes sense to separate the advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects. We want a healthy discussion around the pros and cons of separating the advanced services from Neutron and its short or long term feasibility. The meeting is planned for: *Tuesday May 13th at 2pm in the Neutron pod.* There will be a designated pod for each of the official programs at: https://wiki.openstack.org/wiki/Programs Some programs share a pod. There will be a map at the center of the space, as well as signage up to help find the relevant pod. Based on discussions with Rackspace, Mirantis, and others it is clear that the advanced services (i.e. LBaaS) in Neutron are not getting the attention and the support to move forward and create a first in class load-balancer service; from a service provider or operator's perspective. We currently have a lot of momentum and energy behind the LBaaS effort but are being told that the focus for Neutron is bug fixing given the instability in Neutron itself. While the latter is totally understandable, as a high priority for Neutron it leaves the advanced services out in the cold with no way to make progress in developing features that are needed to support the many companies that rely on LBaaS for large scale deployments. The current Neutron LB API and feature set meet minimum requirements for small-medium private cloud deployments, but does not meet the needs of larger, provider (or operator) deployments that include hundreds if not thousands of load balancers and multiple domain users (discrete customer organizations). 
The OpenStack LBaaS community looked at requirements and noted that the following operator-focused requirements are currently missing:
· Scalability
· SSL certificate management – for an operator-based service, SSL certificate management is a much more important function and is currently not addressed in the current API or blueprint
· Metrics collection – only a very limited set of metrics is currently provided by the current API
· Separate admin API for NOC and support operations
· Minimal downtime when migrating to newer versions
· Ability to migrate load balancers (SW to HW, etc.)
· Resiliency functions like HA and failover
· Operator-based load balancer health checks
· Support for multiple, simultaneous drivers
We have had great discussions on the LBaaS mailing list and on IRC about all the things we want to do, the new APIs, the user use cases, requirements and priorities, the operator requirements for LBaaS, etc., and I am at this point wondering if Neutron LBaaS as a sub-project of Neutron can fulfill our requirements. I would like this group to discuss the pros and cons of separating the advanced services, including LB, VPN, and FW, out of Neutron and allowing the three currently existing advanced services to become stand-alone projects, or one standalone project. This should be done under the following assumptions:
· Keep backwards compatibility with the current Neutron LBaaS plugin/driver API (to some point) so that existing drivers/plug-ins continue to work for people who have already invested in Neutron LBaaS
· Migration strategy. We have precedence in OpenStack for splitting up services that are becoming too big, or where sub-services deserve to become entities of their own, e.g. baremetal Nova and Ironic, nova-network and Neutron, and nova-scheduler being worked into the Gantt project.
At a high level I see the following steps/blueprints needing to be carried out:
· Identify and create a library, similar in concept to OpenStack core, that contains the common component pieces needed by the advanced services, in order to minimize code duplication between the advanced services and Neutron. This library should be consumable by external projects and will allow for cleaner code reuse, not only by the three existing advanced services but by new services as well.
· Start a new repo for the standalone LBaaS
  o http://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/
· Write a patch to bridge Neutron LBaaS with the standalone LBaaS for backwards compatibility. Longer term we can deprecate Neutron LBaaS, which will be possible once the new LBaaS service is a graduated OpenStack service.
Some of the background reasoning for suggesting this is available at: https://etherpad.openstack.org/p/AdvancedServices_and_Neutron Hope to see you there to discuss how we best make sure that the advanced services can support the many companies that rely on LBaaS or other advanced services for large-scale deployment. Regards Susanne
[openstack-dev] How-to and best practices for developing new OpenStack service
Hello, community. There are numerous open projects evolving that are designed to work with OpenStack and follow OpenStack development principles (technology stack, architecture, etc.). There are even more such efforts within the many companies working with cloud technologies. However, it is hard to get started quickly with all the practices developed in the OpenStack community and begin developing an OpenStack service the right way. So are there any resources, tutorials, or guides on how to start an OpenStack service from scratch? Something that would describe correct usage of the Oslo project and common architecture, and give recommendations for developers. Thank you. -- Sincerely, Ruslan Kiianchuk.
[openstack-dev] nova compute error
Hi I'm getting the following error in the nova-compute service while booting a VM on a compute node using devstack, due to which the VM is stuck in the spawning state: ERROR nova.openstack.common.rpc.amqp [req-da485913-0cb2-47c1-ac61-6657361286c3 nova service] Exception during message handling Traceback (most recent call last): 2014-05-07 05:38:52.835 6754 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data 2014-05-07 05:38:52.835 6754 TRACE nova.openstack.common.rpc.amqp **args) 2014-05-07 05:38:52.835 6754 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 178, in dispatch Please help regarding this.. Thanks Sonia
Re: [openstack-dev] How-to and best practices for developing new OpenStack service
You can refer to https://github.com/ddutta/openstack-skeleton for some help. Thanks. 2014-05-07 20:08 GMT+08:00 Ruslan Kiianchuk ruslan.kiianc...@gmail.com: Hello, community. There are numerous open projects evolving that are designed to work with OpenStack and follow OpenStack development principles (technologies stack, architecture, etc.). There are even more efforts within many companies working with cloud technologies. However it is hard to quick-start with all the practices developed at OpenStack community and start developing an OpenStack service the right way. So are there any resources, tutorials, guides on how to start an OpenStack service from scratch? Something that would describe correct usage of Oslo project, common architecture and give recommendations for developers. Thank you. -- Sincerely, Ruslan Kiianchuk. -- Thanks, Jay
Re: [openstack-dev] [Programs] Client Tools program discussion
On Tue, May 6, 2014 at 5:45 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, May 6, 2014 at 6:54 AM, Dean Troyer dtro...@gmail.com wrote: On Tue, May 6, 2014 at 7:02 AM, Thierry Carrez thie...@openstack.org wrote: Would you take over the Python client libraries as well ? On one hand they need /some/ domain expertise, but on the other I see no reason to special-case Python against other SDKs, and that may give the libraries a bit more attention and convergence (they currently are the ugly stepchild in some programs, and vary a lot). The future of the existing client libs has not been settled, my working assumption is that they would remain with their home programs as they are now. From the start OpenStackClient was meant to be a clean-slate for the CLI and the Python SDK is taking the same basic approach. Very excited for the OpenStackClient, it is already way nicer then the existing clients. Just working this out in my head. So the work flow would be: 1. At first ClientTools consist of just the OpenStackClient 2. When the pythonSDK is ready to move off of stackforge, it will live in ClientTools 3. Specific python-*clients will be rewritten (from scratch?) to use the PythonSDK. But this time they won't have a built in CLI. These libraries will live along side the respective servers (so nova's python-novaclient will live in Compute)? All while moving OpenStackClient to the new libraries Is that what you are proposing? My understanding is that the SDK aims to be a ground-up replacement for the existing disparate client libraries. Whether that replacement is appropriate for use inside OpenStack may be up for debate (I think I remember someone saying that wasn't necessarily a goal, with the focus being on end users, but I haven't been able to attend many of the meetings so my information may be out of date). 
In the case you'd absorb the Python client libraries, it might make sense to ship the keystone middleware in a separate package that would still live in the Identity program. This needs to happen anyway, it's time for my semi-annual request to dolphm... I think we need people caring for the end user and their experience of interacting with an OpenStack-backed cloud. I understand that CLI/SDK specialists and GUI-oriented specialists are different crowds, but they share the same objective and would benefit IMHO from being in the same program. There could be two subteams to care for specialists in both areas (or even 3 if you separate the CLI and SDK folks). Overall from the TC perspective it would make a much stronger proposal if you somehow could present a united (and without overlap) proposal. To be honest, until the most recent ML thread I hadn't thought about the UX team or even if they were active. We have three basic categories of projects delivering code: web UI (Horizon), CLI (OpenStackClient) and SDK (at least three active language-based teams). They all should consume the output from a UX RD effort, I guess I am open on the program structure to make that work. Horizon is already a part of a program, OSC needs to be and the SDKs will also need to be in the near future. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron
Hi, I have added to https://etherpad.openstack.org/p/AdvancedServices_and_Neutron a note recalling two technical challenges that do not exists when LBaaS runs as a Neutron extension. -Sam. From: Susanne Balle [mailto:sleipnir...@gmail.com] Sent: Wednesday, May 07, 2014 2:45 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Balle, Susanne Subject: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron Hi Advanced Services/LBaaS Stackers, We are setting up a meeting to discuss if it makes sense to separate the advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects. We want a healthy discussion around the pros and cons of separating the advanced services from Neutron and its short or long term feasibility. The meeting is planned for: Tuesday May 13th at 2pm in the Neutron pod. There will be a designated pod for each of the official programs at: https://wiki.openstack.org/wiki/Programs Some programs share a pod. There will be a map at the center of the space, as well as signage up to help find the relevant pod. Based on discussions with Rackspace, Mirantis, and others it is clear that the advanced services (i.e. LBaaS) in Neutron are not getting the attention and the support to move forward and create a first in class load-balancer service; from a service provider or operator's perspective. We currently have a lot of momentum and energy behind the LBaaS effort but are being told that the focus for Neutron is bug fixing given the instability in Neutron itself. While the latter is totally understandable, as a high priority for Neutron it leaves the advanced services out in the cold with no way to make progress in developing features that are needed to support the many companies that rely on LBaaS for large scale deployments. 
The current Neutron LB API and feature set meet minimum requirements for small-to-medium private cloud deployments, but do not meet the needs of larger, provider (or operator) deployments that include hundreds if not thousands of load balancers and multiple domain users (discrete customer organizations). The OpenStack LBaaS community looked at requirements and noted that the following operator-focused requirements are currently missing: • Scalability • SSL certificate management – for an operator-based service, SSL certificate management is a much more important function, and it is not addressed in the current API or blueprint • Metrics collection – only a very limited set of metrics is provided by the current API • Separate admin API for NOC and support operations • Minimal downtime when migrating to newer versions • Ability to migrate load balancers (SW to HW, etc.) • Resiliency functions like HA and failover • Operator-based load balancer health checks • Support for multiple, simultaneous drivers. We have had great discussions on the LBaaS mailing list and on IRC about all the things we want to do, the new APIs, the user use cases, requirements and priorities, the operator requirements for LBaaS, etc., and I am at this point wondering whether Neutron LBaaS as a sub-project of Neutron can fulfill our requirements. I would like this group to discuss the pros and cons of separating the advanced services, including LB, VPN, and FW, out of Neutron and allow each of the three currently existing advanced services to become stand-alone projects, or one standalone project. This should be done under the following assumptions: • Keep backwards compatibility with the current Neutron LBaaS plugin/driver API (to some point) so that existing drivers/plug-ins continue to work for people who have already invested in Neutron LBaaS • Migration strategy. 
We have a precedent in OpenStack of splitting up services that are becoming too big or where sub-services deserve to become entities of their own, e.g. baremetal Nova and Ironic, Nova-network and Neutron, nova-scheduler being worked into the Gantt project, etc. At a high level I see the following steps/blueprints needing to be carried out: • Identify and create a library, similar in concept to OpenStack core, that contains the common component pieces needed by the advanced services in order to minimize code duplication between the advanced services and Neutron. This library should be consumable by external projects and will allow for cleaner code reuse by not only the three existing advanced services but by new services as well. • Start a new repo for the standalone LBaaS o http://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/ • Write a patch to bridge Neutron LBaaS with the standalone LBaaS for backwards compatibility. Longer term we can deprecate Neutron LBaaS, which will be possible once the new LBaaS service is a graduated OpenStack service. Some of the background
Re: [openstack-dev] How-to and best practices for developing new OpenStack service
Hi Ruslan! I can recommend two resources: 1. cookiecutter [1] - a template for a new OpenStack project 2. http://ci.openstack.org/stackforge.html - this page helps to set up a project in the OpenStack infrastructure, including code review, Jenkins automation, etc. [1] http://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/README.rst Regards, Ruslan On Wed, May 7, 2014 at 4:08 PM, Ruslan Kiianchuk ruslan.kiianc...@gmail.com wrote: Hello, community. There are numerous open projects evolving that are designed to work with OpenStack and follow OpenStack development principles (technology stack, architecture, etc.). There are even more efforts within many companies working with cloud technologies. However, it is hard to quick-start with all the practices developed in the OpenStack community and start developing an OpenStack service the right way. So are there any resources, tutorials, or guides on how to start an OpenStack service from scratch? Something that would describe correct usage of the Oslo project and common architecture, and give recommendations for developers. Thank you. -- Sincerely, Ruslan Kiianchuk. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel-dev] [Fuel][RabbitMQ] nova-compute stuck for a while (AMQP)
On 05/06/2014 10:42 PM, Roman Sokolkov wrote: Hello, fuelers. I'm using Fuel 4.1A + Havana in HA mode. I permanently observe (on other deployments also) an issue with a stuck nova-compute service. But I think the problem is more fundamental and relates to HA RabbitMQ and the OpenStack AMQP driver implementation. Symptoms: * A random nova-compute is from time to time marked as XXX for a while. * I see that the service itself works properly. In the logs I see that it sends status updates to the conductor, but actually nothing is sent. * netstat shows that all connections to/from rabbit are ESTABLISHED. * rabbitmqctl shows that the compute.node-x queue is synced to all slaves. * Nothing was broken beforehand, i.e. the rabbitmq cluster, etc. Axe-style solution: * /etc/init.d/openstack-nova-compute restart So here I've found a lot of interesting stuff (and solutions): https://bugs.launchpad.net/oslo.messaging/+bug/856764 My questions are: * Are there any thoughts particular to Fuel to solve/work around this issue? * Any fast solution for this in 4.1? Like adjusting TCP keep-alive timeouts? Perhaps the solution is to apply https://review.openstack.org/#/c/34949 and check the results with rabbitmq and nova. If it is OK, we could submit a task for the OSCI team to patch our internal repos and update the 4.1.1 / 5.0 targeted MOS packages. -- Roman Sokolkov, Deployment Engineer, Mirantis, Inc. Skype rsokolkov, rsokol...@mirantis.com mailto:rsokol...@mirantis.com -- Best regards, Bogdan Dobrelya, Skype #bogdando_at_yahoo.com Irc #bogdando ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
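The "adjust TCP keep-alive timeouts" idea mentioned above can be sketched in Python. This is illustrative only, not Fuel's actual fix: a real deployment would need oslo.messaging/kombu to apply these options to the AMQP connection's socket, and the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` constants are Linux-specific, hence the guards.

```python
import socket

def enable_tcp_keepalive(sock, idle=5, interval=1, count=5):
    """Turn on aggressive TCP keepalive on a socket.

    With these settings a dead RabbitMQ peer is noticed in roughly
    idle + interval * count seconds instead of the kernel defaults
    (which can be two hours or more).
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, 'TCP_KEEPIDLE'):    # seconds of idle before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, 'TCP_KEEPINTVL'):   # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, 'TCP_KEEPCNT'):     # probes before declaring it dead
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock

s = enable_tcp_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # -> True
s.close()
```

This only papers over the symptom (half-open connections after an HA failover); the oslo heartbeat work referenced in the bug is the longer-term answer.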
Re: [openstack-dev] Commit the cinder Driver
Missed some of the additional links which might be helpful/important for you. [1] https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver [2] https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer On Wed, May 7, 2014 at 11:38 AM, Swapnil Kulkarni cools...@gmail.com wrote: Hi Mardan, Thanks! For the CloudByte ElastiStor I can see a couple of (duplicate) blueprints filed [1] [2]. You should probably assign one of them to yourself and update it with details until we get cinder-specs available for the blueprint review process. For details about the commit process please refer to [3]. For any issues with the commit process, feel free to ask questions in #openstack-cinder @ freenode.net. Also, you should try to attend the Cinder meetings [4] to update the team about your progress on commits and for any updates related to new driver submissions. [1] https://blueprints.launchpad.net/cinder/+spec/cloudbyte-elastistor-iscsi-driver [2] https://blueprints.launchpad.net/cinder/+spec/cloudbyte-elastistor-iscsi-driver-cinder [3] https://wiki.openstack.org/wiki/Gerrit_Workflow [4] https://wiki.openstack.org/wiki/CinderMeetings Best Regards, Swapnil Kulkarni irc : coolsvap cools...@gmail.com +91-87960 10622(c) http://in.linkedin.com/in/coolsvap *It's better to SHARE* On Wed, May 7, 2014 at 11:24 AM, Mardan Raghuwanshi mardan.si...@cloudbyte.com wrote: Hi All, I developed a Cinder driver for CloudByte's ElastiStor and now want to check this code into master. I don't have much experience with git and the OpenStack commit process; can you guys please guide me on how to commit the code into the master branch? Thanks, Mardan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel-dev] [Fuel][RabbitMQ] nova-compute stuck for a while (AMQP)
On 05/07/2014 04:12 PM, Bogdan Dobrelya wrote: On 05/06/2014 10:42 PM, Roman Sokolkov wrote: Hello, fuelers. I'm using Fuel 4.1A + Havana in HA mode. I permanently observe (on other deployments also) an issue with a stuck nova-compute service. But I think the problem is more fundamental and relates to HA RabbitMQ and the OpenStack AMQP driver implementation. Symptoms: * A random nova-compute is from time to time marked as XXX for a while. * I see that the service itself works properly. In the logs I see that it sends status updates to the conductor, but actually nothing is sent. * netstat shows that all connections to/from rabbit are ESTABLISHED. * rabbitmqctl shows that the compute.node-x queue is synced to all slaves. * Nothing was broken beforehand, i.e. the rabbitmq cluster, etc. Axe-style solution: * /etc/init.d/openstack-nova-compute restart So here I've found a lot of interesting stuff (and solutions): https://bugs.launchpad.net/oslo.messaging/+bug/856764 My questions are: * Are there any thoughts particular to Fuel to solve/work around this issue? * Any fast solution for this in 4.1? Like adjusting TCP keep-alive timeouts? Perhaps the solution is to apply https://review.openstack.org/#/c/34949 and check the results with rabbitmq and nova. If it is OK, we could submit a task for the OSCI team to patch our internal repos and update the 4.1.1 / 5.0 targeted MOS packages. Sorry, I meant: sync all the Oslo patches from https://bugs.launchpad.net/oslo.messaging/+bug/856764 for the nova packages in MOS and check the results with rabbitmq. -- Roman Sokolkov, Deployment Engineer, Mirantis, Inc. Skype rsokolkov, rsokol...@mirantis.com mailto:rsokol...@mirantis.com -- Best regards, Bogdan Dobrelya, Skype #bogdando_at_yahoo.com Irc #bogdando ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] ServiceVM IRC meeting minutes May 6 (was Re: [Neutron] ServiceVM IRC meeting(May 6 Tuesday 5:00(AM)UTC-))
Hello, could anyone kindly send me the content of this meeting? I cannot access docs.google.com from China. URL: https://docs.google.com/presentation/d/14dvV3S9Eph2z-auk34I_Ftld-lHA3VMoyNWAPRTeWgE/edit?usp=sharing Many thanks! At 2014-05-06 14:54:09, Isaku Yamahata isaku.yamah...@gmail.com wrote: Here are the minutes. May 13 will be skipped; the next meeting is on May 20. bmelanda, hareesh, if you have items for the session agenda, please update the etherpad and reply to this mail. AGREED: unconference Monday 5pm- at the summit ACTION: whoever goes to the unconference board and secures the room ACTION: yamahata update the etherpad for the session ACTION: bmelanda, hareeshpc update session agenda in etherpad ACTION: bmelanda, yamahata add terminology https://wiki.openstack.org/wiki/ServiceVM/terminology for convergence LINK: https://etherpad.openstack.org/p/servicevm LINK: https://wiki.openstack.org/wiki/ServiceVM/terminology LINK: https://wiki.openstack.org/wiki/ServiceVM lots of short-term gap-filler discussion thanks, Isaku Yamahata On Fri, May 02, 2014 at 02:01:31PM +0900, Isaku Yamahata isaku.yamah...@gmail.com wrote: Hi. This is a reminder mail for the servicevm IRC meeting May 6, 2014 Tuesdays 5:00(AM)UTC- #openstack-meeting on freenode (May 13 will be skipped due to the design summit) * design summit plan - unconference * status update * new project planning - project name code name: virtue, ginie, jeeve,... topic name: servicevm, hosting device,... - design API/model - way to review: gerrit or google-doc? - design strategy * open discussion -- Isaku Yamahata isaku.yamah...@gmail.com -- Isaku Yamahata isaku.yamah...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][neutron] VIF event callbacks implementation
On 7 May 2014 13:04, Mike Kolesnik mkole...@redhat.com wrote: - Original Message - Yeah, we've already got plans in place to get Cinder to use the interface to provide us more detailed information and eliminate some polling. We also have a very purpose-built notification scheme between nova and cinder that facilitates a callback for a very specific scenario. I'd like to get that converted to use this mechanism as well, so that it becomes the way you tell nova that things it's waiting for have happened. Not sure how you consider this mechanism something generic since it's facilitating only Nova while there might be a number of different services interested in this information. Now Neutron needs to be aware of VIF and Nova's expectations of Neutron in regards to that VIF, which is highly tightly coupled. Using a notification scheme where any subscriber can receive the event from Neutron/Cinder/etc and handle it how it needs instead would be much more decoupled, IMHO. Nothing is merged on the cinder side yet, and won't be merging until after the summit. Making an interface that is generic enough for any entity that may wish to receive event notification from cinder is my concern. The fact nova has already merged something that works for neutron really isn't much of a concern for me - getting the right interface into cinder is. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] Checking for return codes in tempest client calls
I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. We need to make sure that each api is tested to return the right response, but many tests need to call multiple apis in support of the one they are actually testing. It seems silly to have the caller check the response of every api call. Currently there are many, if not the majority of, cases where api calls are made without checking the response code. I see a few possibilities: 1. Move all response code checking to the tempest clients. They are already checking for failure codes and are now doing validation of json response and headers as well. Callers would only do an explicit check if there were multiple success codes possible. 2. Have a clear policy of when callers should check response codes and apply it. I think the first approach has a lot of advantages. Thoughts? -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
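To make option 1 concrete, here is a minimal, hypothetical sketch of what moving the success-code check into the client looks like. This is not actual Tempest code; `HostsClient`, `FakeHttp`, and `UnexpectedResponse` are invented names standing in for Tempest's REST client machinery:

```python
class UnexpectedResponse(Exception):
    pass

class HostsClient:
    """Client that owns the success-code check, so callers don't repeat it."""
    EXPECTED = {200}  # would hold several codes when the API allows them

    def __init__(self, http):
        self.http = http  # any object with get(url) -> (status, body)

    def list_hosts(self):
        status, body = self.http.get('/os-hosts')
        if status not in self.EXPECTED:
            raise UnexpectedResponse(
                'list_hosts: expected one of %s, got %s'
                % (sorted(self.EXPECTED), status))
        return body

class FakeHttp:
    """Stand-in transport returning a canned response, for illustration."""
    def __init__(self, status, body):
        self._resp = (status, body)

    def get(self, url):
        return self._resp

client = HostsClient(FakeHttp(200, ['host-a', 'host-b']))
print(client.list_hosts())  # -> ['host-a', 'host-b']; caller never sees the status
```

With this shape, a test calling `list_hosts()` as a setup step gets a precise failure at the first bad call, instead of a confusing failure later, and only tests exercising APIs with multiple success codes need an explicit check.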
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
On 7 May 2014 09:36, Trump.Zhang zhangleiqi...@gmail.com wrote: @Tripp, Thanks for your reply and info. I am also thinking about whether it is proper to add support for updating the volume's glance_image_metadata to reflect the newest status of the volume. However, there may be alternative ways to achieve it: 1. Using the volume's metadata 2. Using the volume's admin_metadata So I am wondering which is the most proper method. We're suffering from a total overload of the term 'metadata' here, and there are 3 totally separate things that are somehow becoming mangled: 1. Volume metadata - this is for the tenant's own use. Cinder and nova don't assign meaning to it, other than treating it as stuff the tenant can set. It is entirely unrelated to glance_metadata. 2. admin_metadata - this is an internal implementation detail for cinder to avoid every extension having to alter the core volume db model. It is not the same thing as glance metadata or volume_metadata. An interface to modify volume_glance_metadata sounds reasonable; however, it is *unrelated* to the other two types of metadata. They are different things, not replacements or anything like that. Glance protected properties need to be tied into the modification API somehow, or else it becomes a trivial way of bypassing protected properties. Hopefully a glance expert can pop up and suggest a way of achieving this integration. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
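The three namespaces Duncan distinguishes can be pictured as unrelated dicts on a volume. The field names below follow cinder's models, but the values are entirely invented for illustration:

```python
# Illustrative only: three separate "metadata" namespaces on one volume,
# each updated through a different API and never views of each other.
volume = {
    'id': 'vol-0001',
    # 1. tenant-owned, free-form; cinder assigns no meaning to it
    'metadata': {'backup-policy': 'nightly'},
    # 2. cinder-internal detail set by extensions (e.g. readonly attach);
    #    admin-visible only, not exposed to tenants
    'admin_metadata': {'readonly': 'False', 'attached_mode': 'rw'},
    # 3. copied from the Glance image the volume was created from
    'volume_glance_metadata': {'disk_format': 'qcow2', 'min_ram': '512'},
}

for kind in ('metadata', 'admin_metadata', 'volume_glance_metadata'):
    print(kind, volume[kind])
```

Modifying the third dict is where Glance protected properties come in: without a check at that point, a tenant could set a property on the volume that becomes a protected image property on upload.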
Re: [openstack-dev] [nova][neutron] VIF event callbacks implementation
I have additional concern that API is something that's user facing so basically now Nova is exposing some internal synchronization detail to the outside world. We have lots of admin-only APIs. Does it make sense that the user would now be able to send messages to this API? Potentially. Right now you can ask for a refresh of their network info if things are stale, and some RAX people mentioned that they would like to be able to trigger network resets and some other sorts of things through that mechanism. Not sure why RPC is more coupled than API. Maybe you could explain? Because we change our RPC APIs all the time. If an external component has to consume our RPC messages reliably, that means either we have to make sure that component tracks the changes, or we avoid making them. On the other hand, our REST APIs are specifically designed and carefully monitored to maintain compatibility with external things. Currently it's a very specific API putting a burden on Neutron to now know what is VIF and what state is necessary for this VIF to be working, instead of having these calculations in Nova (which is of course aware of VIF, and of Port). It's optional, and for better integration with Neutron and Nova. I don't really think that it's as nova-specific as you're implying. IIRC, the neutron side just fires a notification any time a port changes state right now. I wasn't suggesting touching Nova's RPC but rather utilize the existing notifications sent from Neutron to achieve the same logic. So not sure what changes you believe are to be coordinated from Nova's POV. Neutron consuming Nova's RPC or Nova consuming Neutron's RPC.. either way, it's not how I'd choose to go about it. It could similarly be a queue with some defined message format. I think that's what we've implemented, no? It's a well-defined format that describes an event. It has nothing nova-specific in it. 
We could alternatively provide a callback functionality that allows various clients to receive notifications from Neutron, specifying an address to send these details to. Sure, if you want to go extend this mechanism to remove the static configuration and allow for dynamic registration of who receives these messages, I would expect that would be fine. We'd need to figure out how a single Nova deployment will manage registering with Neutron in an efficient way, but I'm sure that could be done. Not sure how you consider this mechanism something generic since it's facilitating only Nova while there might be a number of different services interested in this information. Now Neutron needs to be aware of VIF and Nova's expectations of Neutron in regards to that VIF, which is highly tightly coupled. Using a notification scheme where any subscriber can receive the event from Neutron/Cinder/etc and handle it how it needs instead would be much more decoupled, IMHO. Well, I'm not sure why Cinder would need to receive notifications from Neutron, but I understand what you mean. Like I said, nothing about how it's currently architected prevents this from happening, AFAIK, other than the fact that it's currently managed by static configuration. If you want to add a subscription interface to register for these so that neutron can supply notifications to multiple entities dynamically, then that seems like a natural next step. I'm sure Cinder would prefer that as well. --Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
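For reference, the mechanism discussed above surfaces in Nova as the os-server-external-events API: Neutron (or any registered sender) POSTs an event describing what happened to a port. A sketch of the payload might look like the following; the endpoint and field names match the API as merged around this time, but the UUIDs are invented and a real caller would send the body with an authenticated keystone token:

```python
import json

def vif_plugged_event(server_uuid, port_id, status='completed'):
    """Build the body for POST /v2/{project_id}/os-server-external-events.

    'tag' carries the Neutron port id so nova can match the event to the
    VIF it is waiting on; nothing in the format is nova-internal.
    """
    return {'events': [{'name': 'network-vif-plugged',
                        'server_uuid': server_uuid,
                        'tag': port_id,
                        'status': status}]}

# Invented ids, for illustration only.
body = vif_plugged_event('11111111-2222-3333-4444-555555555555',
                         'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee')
print(json.dumps(body, sort_keys=True))
```

A subscription interface of the kind Mike suggests would change who receives this message, not its shape: Cinder or any other consumer could accept the same event documents.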
Re: [openstack-dev] [qa] Checking for return codes in tempest client calls
On 7 May 2014 14:53, David Kranz dkr...@redhat.com wrote: I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. snip Thoughts? While I don't know the tempest code well enough to opine where the check should be, every call should definitely be checked and failures reported - I've had a few cases where I've debugged failures (some tempest, some other tests) where somebody says 'my volume attach isn't working' and the reason turned out to be because their instance never came up properly, or snapshot delete failed because the create failed but wasn't logged. Anything that causes the test to automatically report the narrowest definition of the fault is definitely a good thing. -- Duncan Thomas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron
Sam, Perfect. I saw Eugene added something too. Let's get more of the known facts and issues down on the etherpad so we are better prepped for the Tuesday meeting. Susanne On Wed, May 7, 2014 at 9:01 AM, Samuel Bercovici samu...@radware.com wrote: Hi, I have added to https://etherpad.openstack.org/p/AdvancedServices_and_Neutron a note recalling two technical challenges that do not exist when LBaaS runs as a Neutron extension. -Sam. *From:* Susanne Balle [mailto:sleipnir...@gmail.com] *Sent:* Wednesday, May 07, 2014 2:45 PM *To:* OpenStack Development Mailing List (not for usage questions) *Cc:* Balle, Susanne *Subject:* [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron Hi Advanced Services/LBaaS Stackers, We are setting up a meeting to discuss whether it makes sense to separate the advanced services (LBaaS, FW, VPNaaS) from Neutron into separate projects. We want a healthy discussion around the pros and cons of separating the advanced services from Neutron and its short- or long-term feasibility. The meeting is planned for: *Tuesday May 13th at 2pm in the Neutron pod.* There will be a designated pod for each of the official programs at: https://wiki.openstack.org/wiki/Programs Some programs share a pod. There will be a map at the center of the space, as well as signage up to help find the relevant pod. Based on discussions with Rackspace, Mirantis, and others it is clear that the advanced services (i.e. LBaaS) in Neutron are not getting the attention and the support to move forward and create a first-in-class load-balancer service from a service provider or operator's perspective. We currently have a lot of momentum and energy behind the LBaaS effort but are being told that the focus for Neutron is bug fixing given the instability in Neutron itself. 
While the latter is totally understandable as a high priority for Neutron, it leaves the advanced services out in the cold with no way to make progress in developing features that are needed to support the many companies that rely on LBaaS for large-scale deployments. The current Neutron LB API and feature set meet minimum requirements for small-to-medium private cloud deployments, but do not meet the needs of larger, provider (or operator) deployments that include hundreds if not thousands of load balancers and multiple domain users (discrete customer organizations). The OpenStack LBaaS community looked at requirements and noted that the following operator-focused requirements are currently missing: · Scalability · SSL certificate management – for an operator-based service, SSL certificate management is a much more important function, and it is not addressed in the current API or blueprint · Metrics collection – only a very limited set of metrics is provided by the current API · Separate admin API for NOC and support operations · Minimal downtime when migrating to newer versions · Ability to migrate load balancers (SW to HW, etc.) · Resiliency functions like HA and failover · Operator-based load balancer health checks · Support for multiple, simultaneous drivers. We have had great discussions on the LBaaS mailing list and on IRC about all the things we want to do, the new APIs, the user use cases, requirements and priorities, the operator requirements for LBaaS, etc., and I am at this point wondering whether Neutron LBaaS as a sub-project of Neutron can fulfill our requirements. I would like this group to discuss the pros and cons of separating the advanced services, including LB, VPN, and FW, out of Neutron and allow each of the three currently existing advanced services to become stand-alone projects, or one standalone project. 
This should be done under the following assumptions: · Keep backwards compatibility with the current Neutron LBaaS plugin/driver API (to some point) so that existing drivers/plug-ins continue to work for people who have already invested in Neutron LBaaS · Migration strategy. We have a precedent in OpenStack of splitting up services that are becoming too big or where sub-services deserve to become entities of their own, e.g. baremetal Nova and Ironic, Nova-network and Neutron, nova-scheduler being worked into the Gantt project, etc. At a high level I see the following steps/blueprints needing to be carried out: · Identify and create a library, similar in concept to OpenStack core, that contains the common component pieces needed by the advanced services in order to minimize code duplication between the advanced services and Neutron. This library should be consumable by external projects and will allow for cleaner code reuse by not only the three existing advanced services but by new services as well. · Start a new repo for
[openstack-dev] [all] changes on tempest output formatting
With Robert's help, I've been working on a new subunit real time filter which has a few features beyond what is currently in tempest: * displays which testr worker the test is on (to make it easier to figure out what other tests might be running when your test is running) * displays the time for each test (caveat: setupclass time is still not accounted for) * displays stdout/stderr inline at the end of a test result (all statuses) * displays stacktrace and pythonlogging inline at the end of a *failed* test result This merged into Tempest yesterday. Output on success will look something like this: 2014-05-07 05:43:41.488 | {3} tempest.api.compute.keypairs.test_keypairs_negative.KeyPairsNegativeTestXML.test_keypair_create_with_invalid_pub_key [0.301279s] ... ok 2014-05-07 05:43:41.599 | {1} tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_active_server [28.083070s] ... ok 2014-05-07 05:43:41.610 | {3} tempest.api.compute.keypairs.test_keypairs_negative.KeyPairsNegativeTestXML.test_keypair_delete_nonexistent_key [0.121472s] ... ok 2014-05-07 05:43:42.480 | {0} tempest.api.compute.servers.test_delete_server.DeleteServersAdminTestXML.test_delete_server_while_in_error_state [24.822415s] ... ok 2014-05-07 05:43:43.384 | {3} tempest.api.compute.security_groups.test_security_groups.SecurityGroupsTestXML.test_security_group_create_get_delete [0.740040s] ... ok 2014-05-07 05:43:44.383 | {3} tempest.api.compute.security_groups.test_security_groups.SecurityGroupsTestXML.test_security_groups_create_list_delete [0.998106s] ... ok 2014-05-07 05:43:58.633 | {2} tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_flavor [0.109771s] ... ok 2014-05-07 05:43:58.634 | {2} tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_image ... 
SKIPPED: Only one image found ({N} is the testr worker number.) Failed output will look like: http://logs.openstack.org/07/83207/61/check/check-tempest-dsvm-full/88816fd/console.html#_2014-05-07_05_55_32_429 The decision to bring this inline vs. at the end of the test run is that when watching long test runs you'll get this info the moment the test fails, which means you can be following jenkins and start digging into failures early. We did expose a testr bug in the process: the worker # is not always allocated correctly. That's been filed upstream. It means that some runs will report the wrong worker for a class of tests, so if it looks like you are missing a worker, that's why. This also means you can iterate on tempest tests using inline 'print' calls, which many of us find very handy. There are also some worker summary patches coming through in the queue to give a breakdown of tests and runtime per worker, to help us understand how unbalanced a run ends up. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron] ML2 extensions info propagation
Hi ML2ers and others, I'm considering discussions around ML2 for the summit. Unfortunately I won't attend the summit, so I'll try to participate through the mailing list and etherpads. I'm especially interested in extension support by Mechanism Drivers [1] and the modular agent [2]. During the Juno cycle I'll work on the capacity to propagate IPVPN information (route-target) down to the agent, so that the agent can manage MPLS encapsulation. I think that the easiest way to do that is to enhance the get_device_details() RPC message to add the network extension information of the port concerned to the dict sent. Moreover, I think this approach could be generalized: get_device_details() in the agent should return serialized information of a port with all extension information (security_group, port_binding...). When the core datamodel or the extension datamodel is modified, this would result in a port_update() with the updated serialization of the datamodel. This way, we could get rid of the security-group and l2pop RPCs, and the modular agent wouldn't need to deal with one driver per extension that needs to register its RPC callbacks. This information should also be stored in the ML2 driver context. When a port is created by the ML2 plugin, it calls super() to create the core datamodel, which will return a dict without extension information, because the extension information in the REST call has not been processed yet. But once the plugin calls its core extensions, it should call the MD-registered extensions as proposed by Nader here [4] and then call make_port_dict (with extensions), or an equivalent serialization function, to create the driver context. This serialization function would be used by the get_device_details() RPC callbacks too. 
Regards, Mathieu [1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support [2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent [3]http://summit.openstack.org/cfp/details/240 [4]https://review.openstack.org/#/c/89211/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
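The serialization idea above can be sketched roughly as follows. This is not actual ML2 code; the class and method names are hypothetical, and a real implementation would pass the db session and port model rather than plain dicts. The point is only that each registered extension driver contributes its own keys to the port dict that get_device_details() would return:

```python
# Hypothetical extension drivers, each owning one slice of the port dict.
class SecurityGroupExt:
    def extend_port_dict(self, port):
        port['security_groups'] = ['default']          # invented value

class RouteTargetExt:
    def extend_port_dict(self, port):
        port['route_targets'] = ['64512:100']          # invented RT for IPVPN

def make_port_dict(core_port, extensions):
    """Serialize a port: core model fields plus every extension's fields.

    The same function would back both the driver context and the
    get_device_details() RPC reply, so agents see one unified dict.
    """
    port = dict(core_port)
    for ext in extensions:
        ext.extend_port_dict(port)
    return port

details = make_port_dict(
    {'id': 'port-1', 'mac_address': 'fa:16:3e:00:00:01'},
    [SecurityGroupExt(), RouteTargetExt()])
print(sorted(details))  # -> ['id', 'mac_address', 'route_targets', 'security_groups']
```

With this shape, adding a new extension (such as the route-target one) means registering one more driver, not adding a new RPC channel between plugin and agent.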
Re: [openstack-dev] [qa] Checking for return codes in tempest client calls
Hi David, 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com: I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. We need to make sure that each api is tested to return the right response, but many tests need to call multiple apis in support of the one they are actually testing. It seems silly to have the caller check the response of every api call. Currently there are many, if not the majority of, cases where api calls are made without checking the response code. I see a few possibilities: 1. Move all response code checking to the tempest clients. They are already checking for failure codes and are now doing validation of json response and headers as well. Callers would only do an explicit check if there were multiple success codes possible. 2. Have a clear policy of when callers should check response codes and apply it. I think the first approach has a lot of advantages. Thoughts? Thanks for proposing this, I also prefer the first approach. We will be able to remove a lot of status code checks if we go in this direction. It is also necessary for the bp/nova-api-test-inheritance tasks. The current https://review.openstack.org/#/c/92536/ removes status code checks because some Nova v2/v3 APIs return different codes and the codes are already checked on the client side, but it would be necessary to create a lot of patches to cover all API tests. So for now, I feel it is OK to skip status code checks in API tests only if client-side checks are already implemented. After implementing all client validations, we can remove them from the API tests. Thanks Kenichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron] [ML2] extensions info propagation
Hi ML2ers and others, I'm considering discussions around ML2 for the summit. Unfortunately I won't attend the summit, so I'll try to participate through the mailing list and etherpads. I'm especially interested in extension support by the Mechanism Driver [1] and the Modular agent [2]. During the Juno cycle I'll work on the capacity to propagate IPVPN information (route targets) down to the agent, so that the agent can manage MPLS encapsulation. I think the easiest way to do that is to enhance the get_device_details() RPC message to add the network extension information of the concerned port to the dict sent. Moreover, I think this approach could be generalized: get_device_details() in the agent should return the serialized information of a port with all its extension information (security_group, port_binding...). When the core datamodel or the extension datamodel is modified, this would result in a port_update() with the updated serialization of the datamodel. This way, we could get rid of the security-group and l2pop RPCs, and the Modular agent wouldn't need to deal with one driver per extension that registers its own RPC callbacks. That information should also be stored in the ML2 driver context. When a port is created, the ML2 plugin calls super() to create the core datamodel, which returns a dict without extension information, because the extension information in the REST call has not been processed yet. But once the plugin calls its core extensions, it should call the MD-registered extensions as proposed by Nader here [4] and then call make_port_dict(with extension), or an equivalent serialization function, to create the driver context. This serialization function would be used by the get_device_details() RPC callbacks too.
Regards, Mathieu [1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support [2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent [3]http://summit.openstack.org/cfp/details/240 [4]https://review.openstack.org/#/c/89211/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
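[Editor's sketch] The payload Mathieu describes — get_device_details() returning the port's core fields merged with per-extension data — could look roughly like the following. All key names and the helper function are illustrative assumptions, not the actual ML2 RPC schema.

```python
# Hypothetical sketch: merge core port fields with extension-provided data
# into the single dict that get_device_details() would send to the agent.

def serialize_port(port, extensions):
    """Combine the core datamodel with each extension's serialized data."""
    details = dict(port)  # core fields: id, mac_address, network_id, ...
    for name, data in extensions.items():
        details[name] = data  # e.g. security_groups, binding, route_targets
    return details

core_port = {
    "id": "port-uuid",
    "mac_address": "fa:16:3e:00:00:01",
    "network_id": "net-uuid",
}
extension_data = {
    "security_groups": ["sg-uuid"],
    "binding": {"vif_type": "ovs"},
    "route_targets": ["64512:1"],  # the IPVPN info the agent would need
}

device_details = serialize_port(core_port, extension_data)
```

With this shape, a port_update() would simply resend the re-serialized dict, which is what lets the separate security-group and l2pop RPCs collapse into one channel.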
Re: [openstack-dev] [qa] Checking for return codes in tempest client calls
On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote: Hi David, 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com: I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. We need to make sure that each api is tested to return the right response, but many tests need to call multiple apis in support of the one they are actually testing. It seems silly to have the caller check the response of every api call. Currently there are many, if not the majority of, cases where api calls are made without checking the response code. I see a few possibilities: 1. Move all response code checking to the tempest clients. They are already checking for failure codes and are now doing validation of json response and headers as well. Callers would only do an explicit check if there were multiple success codes possible. 2. Have a clear policy of when callers should check response codes and apply it. I think the first approach has a lot of advantages. Thoughts? Thanks for proposing this, I also prefer the first approach. We will be able to remove a lot of status code checks if we go in this direction. It is also necessary for the bp/nova-api-test-inheritance tasks. The current patch https://review.openstack.org/#/c/92536/ removes status code checks because some Nova v2/v3 APIs return different codes and the codes are already checked on the client side, but it will take a lot of patches to cover all API tests. So for now, I feel it is OK to skip status code checks in API tests only where client-side checks are already implemented. After implementing all client validations, we can remove them from API tests. Do we still have instances where we want to make a call that we know will fail and not throw the exception? I agree there is a certain clarity in putting this down in the rest client.
I just haven't figured out if it's going to break some behavior that we currently expect. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] Checking for return codes in tempest client calls
On 05/07/2014 10:07 AM, Duncan Thomas wrote: On 7 May 2014 14:53, David Kranz dkr...@redhat.com wrote: I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. snip Thoughts? While I don't know the tempest code well enough to opine where the check should be, every call should definitely be checked and failures reported - I've had a few cases where I've debugged failures (some tempest, some other tests) where somebody says 'my volume attach isn't working' and the reason turned out to be because their instance never came up properly, or snapshot delete failed because the create failed but wasn't logged. Anything that causes the test to automatically report the narrowest definition of the fault is definitely a good thing. Yes. To be clear, all calls raise an exception on failure. What we don't check on every call is if an api that is supposed to return 200 might have returned 201, etc. -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] HEEEELP please..How can use Openstack 's APIs in Javascript/Ajax??
Hi, can I use OpenStack's APIs in JavaScript/Ajax? Is there any solution, please? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [QA] Meeting Thursday May 8th at 22:00UTC
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be tomorrow Thursday, May 8th at 22:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. To help people figure out what time 22:00 UTC is in other timezones tomorrow's meeting will be at: 18:00 EDT 07:00 JST 07:30 ACST 0:00 CEST 17:00 CDT 15:00 PDT -Matt Treinish ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
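[Editor's note] The timezone conversions above can be checked mechanically; this sketch assumes the fixed daylight-saving offsets in effect in May 2014 (EDT=UTC-4, JST=UTC+9, ACST=UTC+9:30, CEST=UTC+2, CDT=UTC-5, PDT=UTC-7):

```python
from datetime import datetime, timedelta

# Meeting time: Thursday, May 8th 2014 at 22:00 UTC.
meeting_utc = datetime(2014, 5, 8, 22, 0)

# Assumed UTC offsets (in hours) for the zones listed in the announcement.
offsets = {"EDT": -4, "JST": 9, "ACST": 9.5, "CEST": 2, "CDT": -5, "PDT": -7}

local_times = {tz: meeting_utc + timedelta(hours=h) for tz, h in offsets.items()}
# JST, ACST, and CEST land on the following calendar day (May 9th).
```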
Re: [openstack-dev] [qa] Checking for return codes in tempest client calls
Hi Sean, 2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net: On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote: Hi David, 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com: I just looked at a patch https://review.openstack.org/#/c/90310/3 which was given a -1 due to not checking that every call to list_hosts returns 200. I realized that we don't have a shared understanding or policy about this. We need to make sure that each api is tested to return the right response, but many tests need to call multiple apis in support of the one they are actually testing. It seems silly to have the caller check the response of every api call. Currently there are many, if not the majority of, cases where api calls are made without checking the response code. I see a few possibilities: 1. Move all response code checking to the tempest clients. They are already checking for failure codes and are now doing validation of json response and headers as well. Callers would only do an explicit check if there were multiple success codes possible. 2. Have a clear policy of when callers should check response codes and apply it. I think the first approach has a lot of advantages. Thoughts? Thanks for proposing this, I also prefer the first approach. We will be able to remove a lot of status code checks if we go in this direction. It is also necessary for the bp/nova-api-test-inheritance tasks. The current patch https://review.openstack.org/#/c/92536/ removes status code checks because some Nova v2/v3 APIs return different codes and the codes are already checked on the client side, but it will take a lot of patches to cover all API tests. So for now, I feel it is OK to skip status code checks in API tests only where client-side checks are already implemented. After implementing all client validations, we can remove them from API tests. Do we still have instances where we want to make a call that we know will fail and not throw the exception?
I agree there is a certain clarity in putting this down in the rest client. I just haven't figured out if it's going to break some behavior that we currently expect. If a server returns an unexpected status code, Tempest fails in the client-side validation, as in the following sample:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py", line 36, in test_create_server_with_admin_password
    resp, server = self.create_test_server(adminPass='testpassword')
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 211, in create_test_server
    name, image_id, flavor, **kwargs)
  File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", line 95, in create_server
    self.validate_response(schema.create_server, resp, body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 596, in validate_response
    raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])

Thanks Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
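[Editor's sketch] The client-side check Ken'ichi describes can be illustrated as follows — a simplified stand-in, not Tempest's actual rest_client code (which also validates JSON bodies and headers against a schema):

```python
# Simplified stand-in for centralizing success-code checks in the client,
# so individual API tests no longer assert on status codes themselves.

class InvalidHttpSuccessCode(Exception):
    pass

def validate_response(expected_codes, status):
    """Raise if the server's status code is not an expected success code."""
    if status not in expected_codes:
        raise InvalidHttpSuccessCode(
            "The status code(%s) is different than the expected one(%s)"
            % (status, expected_codes))

validate_response([200], 200)        # a test caller needs no explicit check
try:
    validate_response([200], 202)    # e.g. an API unexpectedly returning 202
    caught = None
except InvalidHttpSuccessCode as exc:
    caught = str(exc)
```

A client method would call validate_response once after each request; callers would only pass multiple expected codes in the rare case where several success codes are valid.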
Re: [openstack-dev] [Neutron][LBaaS]L7 content switching APIs
Hi folks, I think it's ok to include content modification in a general API proposal as long as it would then receive a separate spec document in neutron-specs. When it comes to particular feature design and implementation it's better to be more granular. Thanks, Eugene. On Tue, May 6, 2014 at 4:05 AM, Stephen Balukoff sbaluk...@bluebox.netwrote: Hi Sam, In working off the document in the wiki on L7 functionality for LBaaS ( https://wiki.openstack.org/wiki/Neutron/LBaaS/l7 ), I notice that MODIFY_CONTENT is one of the actions listed for a L7VipPolicyAssociation. That's the primary reason I included this in the API design I created. To be honest, it frustrates me more than a little to hear after the fact that the only locatable documentation like this online is inaccurate on many meaningful details like this. I could actually go either way on this issue: I included content modification as one possible action of L7Policies, but it is somewhat wedged in there: It works, but in L7Policies that do content modification or blocking of the request, the order field as I've proposed it could be confusing for users, and these L7Policies wouldn't be associated with a back-end pool anyway. I'm interested in hearing others' opinions on this as well. Stephen On Mon, May 5, 2014 at 6:47 AM, Samuel Bercovici samu...@radware.comwrote: Hi Stephen, For Icehouse we did not go into L7 content modification as the general feeling was that it might not be exactly the same as content switching and we wanted to tackle content switching first. L7 content switching and L7 content modification are different; I prefer to be explicit and declarative and use different objects. This will make the API more readable. What do you think? I plan to look deeper into L7 content modification later this week to propose a list of capabilities. -Sam.
*From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net] *Sent:* Saturday, May 03, 2014 1:33 AM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [Neutron][LBaaS]L7 content switching APIs Hi Adam and Samuel! Thanks for the questions / comments! Reactions in-line: On Thu, May 1, 2014 at 8:14 PM, Adam Harwell adam.harw...@rackspace.com wrote: Stephen, the way I understood your API proposal, I thought you could essentially combine L7Rules in an L7Policy, and have multiple L7Policies, implying that the L7Rules would use AND style combination, while the L7Policies themselves would use OR combination (I think I said that right, almost seems like a tongue-twister while I'm running on pure caffeine). So, if I said: Well, my goal wasn't to create a whole DSL for this (or anything much resembling this) because: 1. Real-world usage of the L7 stuff is generally pretty primitive. Most L7Policies will consist of 1 rule. Those that consist of more than one rule are almost always the sort that need a simple sort. This is based off the usage data collected here (which admittedly only has Blue Box's data -- because apparently nobody else even offers L7 right now?) https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing 2. I was trying to keep things as simple as possible to make it easier for load balancer vendors to support. (That is to say, I wouldn't expect all vendors to provide the same kind of functionality as HAProxy ACLs, for example.) Having said this, I think yours and Sam's clarification that different L7Policies can be used to effectively OR conditions together makes sense, and therefore assuming all the Rules in a given policy are ANDed together makes sense. If we do this, it therefore also might make sense to expose other criteria on which L7Rules can be made, like HTTP method used for the request and whatnot.
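[Editor's sketch] The semantics agreed on above — rules within an L7Policy ANDed together, with separate policies acting as ordered OR alternatives — can be sketched as follows. The rule format and function names are hypothetical, not the proposed LBaaS API:

```python
import re

# Hypothetical L7 rule evaluation: AND within a policy, OR across policies
# (first matching policy in order wins).

def rule_matches(rule, request):
    value = request.get(rule["field"], "")
    if rule["op"] == "REGEX":
        return re.search(rule["pattern"], value) is not None
    if rule["op"] == "EQ":
        return value == rule["pattern"]
    return False

def route(policies, request):
    """Return the pool of the first policy whose rules ALL match."""
    for policy in sorted(policies, key=lambda p: p["order"]):
        if all(rule_matches(r, request) for r in policy["rules"]):
            return policy["pool"]
    return "default-pool"

policies = [
    {"order": 1, "pool": "pool-a", "rules": [
        {"field": "path", "op": "REGEX", "pattern": r".*index.*"},
        {"field": "path", "op": "REGEX", "pattern": r"hello/.*"}]},
    {"order": 2, "pool": "pool-b", "rules": [
        {"field": "hostname", "op": "EQ", "pattern": "mysite.com"}]},
]
```

Under these semantics a request for mysite.com/hello/index.htm satisfies both rules of the first policy and goes to pool-a, while mysite.com/hello/nope.htm fails the first policy's AND and falls through to the second, going to pool-b.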
Also, should we introduce a flag to say whether a given Rule's condition should be negated? (eg. HTTP method is GET and URL is *not* /api) This would get us closer to being able to use more sophisticated logic for L7 routing. Does anyone foresee the need to offer this kind of functionality? * The policy { rules: [ rule1: match path REGEX .*index.*, rule2: match path REGEX hello/.* ] } directs to Pool A * The policy { rules: [ rule1: match hostname EQ mysite.com ] } directs to Pool B then order would matter for the policies themselves. In this case, if they ran in the order I listed, it would match mysite.com/hello/index.htm and direct it to Pool A, while mysite.com/hello/nope.htm would not match BOTH rules in the first policy, and would be caught by the second policy, directing it to Pool B. If I had wanted the first policy to use OR logic, I would have just specified two separate policies both pointing to Pool A: Clarification on this: There is an 'order' attribute to L7Policies. :) But again, if all the L7Rules in a given policy
Re: [openstack-dev] Juno Design Summit schedule frozen
Thierry Carrez wrote: The Juno Design Summit schedule is now frozen. There may be last-minute changes, but they shall be announced in a post here. FYI, two Barbican sessions (SSL/TLS flow and Kite) were just swapped on the Tuesday. Regards, -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] HEEEELP please..How can use Openstack 's APIs in Javascript/Ajax??
Hachem, Were you looking for a pre-existing JavaScript library? I'm not familiar with one, but someone else might chime in. The OpenStack APIs are all REST / JSON and should be easily consumable from the client side. You might want to look at / ask the Horizon project. They are the UI for OpenStack and may be able to better answer your question. You can direct a message to them by including [Horizon] in the subject of your email. Thanks! -- Jarret Raim @jarretraim From: Hachem Chraiti hachem...@gmail.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Wednesday, May 7, 2014 at 9:42 AM To: OpenStack List openstack-dev@lists.openstack.org, openst...@ask.openstack.org openst...@ask.openstack.org Subject: [openstack-dev] HEEEELP please..How can use Openstack 's APIs in Javascript/Ajax?? HI can i use Openstack 's APIs in Javascript/Ajax?? Is there any solution pleasE? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
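[Editor's sketch] To illustrate Jarret's point that the APIs are plain REST/JSON: the body a client (JavaScript or otherwise) POSTs to the Keystone v2.0 tokens endpoint is just a JSON document. The tenant name and credentials below are placeholders:

```python
import json

# The same payload a browser-side Ajax client would POST to
# http://<keystone-host>:5000/v2.0/tokens with Content-Type: application/json
# to obtain an auth token for subsequent API calls.
auth_request = {
    "auth": {
        "tenantName": "demo",                                  # placeholder
        "passwordCredentials": {"username": "demo",            # placeholder
                                "password": "secret"},
    }
}
body = json.dumps(auth_request)
```

The response is likewise JSON containing the token and the service catalog, so any HTTP-capable language can consume it directly (browser clients additionally need CORS to be enabled on the endpoints).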
Re: [openstack-dev] HEEEELP please..How can use Openstack 's APIs in Javascript/Ajax??
To add to Jarret's response: I think this section of the SDK wiki might help: https://wiki.openstack.org/wiki/SDKs#_javascript_ Regards, Steve Martinelli Software Developer - Openstack Keystone Core Member Phone: 1-905-413-2851 E-mail: steve...@ca.ibm.com 8200 Warden Ave Markham, ON L6G 1C7 Canada From: Jarret Raim jarret.r...@rackspace.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, openst...@ask.openstack.org openst...@ask.openstack.org, Date: 05/07/2014 11:10 AM Subject: Re: [openstack-dev] HEEEELP please..How can use Openstack 's APIs in JavaScript/Ajax?? Hachem, Were you looking for a pre-existing JavaScript library? I'm not familiar with one, but someone else might chime in. The OpenStack APIs are all REST / JSON and should be easily consumable from a client side. You might want to look at / ask the Horizon project. They are the UI for OpenStack and may be able to better answer you question. You can direct a message to them by including [Horizon] in the subject of your email. Thanks! -- Jarret Raim @jarretraim From: Hachem Chraiti hachem...@gmail.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Wednesday, May 7, 2014 at 9:42 AM To: OpenStack List openstack-dev@lists.openstack.org, openst...@ask.openstack.org openst...@ask.openstack.org Subject: [openstack-dev] HEEEELP please..How can use Openstack 's APIs in JavaScript/Ajax?? HI can i use Openstack 's APIs in JavaScript/Ajax?? Is there any solution pleasE?[attachment smime.p7s deleted by Steve Martinelli/Toronto/IBM] ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Mistral][Heat] Feedback on the Mistral DSL
Hi Mistral folks, Congrats on getting the 0.0.2 release out. I had a look at Renat's screencast and the examples, and I wanted to share some feedback based on my experience with Heat. Y'all will have to judge for yourselves to what extent this experience is applicable to Mistral. (Assume that everything I know about it was covered in the screencast and you won't be far wrong.) The first thing that struck me looking at https://github.com/stackforge/mistral-extra/tree/master/examples/create_vm is that I have to teach Mistral how to talk to Nova. I can't overstate how surprising this is as a user, because Mistral is supposed to want to become a part of OpenStack. It should know how to talk to Nova! There is actually an existing DSL for interacting with OpenStack[1], and here's what the equivalent operation looks like: os server create $server_name --image $image_id --flavor $flavor_id --nic net-id=$network_id Note that this is approximately exactly 96.875% shorter (or 3200% shorter, if you're in advertising). This approach reminds me a bit of TOSCA, in the way that it requires you to define every node type before you use it. (Even TOSCA is moving away from this by developing a Simple Profile that includes the most common ones in the box - an approach I assume/hope you're considering also.) The stated reason for this is that they want TOSCA templates to run on any cloud regardless of its underlying features (rather than take a lowest-common-denominator approach, as other attempts at hybrid clouds have done). Contrast that with Heat, which is unapologetically an orchestration system *for OpenStack*. I note from the screencast that Mistral's stated mission is to: Provide a mechanism to define and execute tasks and workflows *in OpenStack clouds* (My emphasis.) IMO the design doesn't reflect the mission. You need to decide whether you are trying to build the OpenStack workflow DSL or the workflow DSL to end all workflow DSLs. 
That problem could be solved by including built-in definitions for core OpenStack service in a similar way to std.* (i.e. take the TOSCA Simple Profile approach), but I'm actually not sure that goes far enough. The lesson of Heat is that we do best when we orchestrate *only* OpenStack APIs. For example, when we started working on Heat, there was no autoscaling in OpenStack so we implemented it ourselves inside Heat. Two years later, there's still no autoscaling in OpenStack other than what we implemented, and we've been struggling for a year to try to split Heat's implementation out into a separate API so that everyone can use it. Looking at things like std.email, I feel a similar way about them. OpenStack is missing something equivalent to SNS, where a message on a queue can trigger an email or another type of notification, and a lot of projects are going to eventually need something like that. It would be really unfortunate if all of them went out and invented it independently. It's much better to implement such things as their own building blocks that can be combined together in complex ways rather than adding that complexity to a bunch of services. Such a notification service could even be extended to do std.http-like ReST calls, although personally the whole idea of OpenStack services calling out to arbitrary HTTP APIs makes me extremely uncomfortable. Much better IMO to just post messages to queues and let the receiver (long) poll for it. So I would favour a DSL that is *much* simpler, and replaces all of std.* with functions that call OpenStack APIs, and only OpenStack APIs, including the API for posting messages to Marconi queues, which would be the method of communication to the outside world. 
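[Editor's sketch] As an illustration of the "replace std.* with functions that call OpenStack APIs" idea, a workflow engine could resolve a dotted action name to a client method at runtime. Everything here — the action name format, the stub client, run_action — is hypothetical; a real system would register actual OpenStack client instances:

```python
# Hypothetical action registry: resolve "service.method" names to calls on
# registered client objects, instead of defining each action in the DSL.

class StubComputeClient:
    """Stand-in for a real compute client, just for illustration."""
    def create_server(self, name):
        return {"name": name, "status": "BUILD"}

CLIENTS = {"compute": StubComputeClient()}

def run_action(action, **kwargs):
    """Look the method up by name and invoke it with the task's parameters."""
    service, _, method = action.partition(".")
    return getattr(CLIENTS[service], method)(**kwargs)

result = run_action("compute.create_server", name="web-1")
```

Built by introspection over something like python-openstackclient, as Zane suggests, such a registry would track new APIs automatically instead of requiring a hand-written plugin per resource type.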
(If the latter part sounds a bit like SWF, it's for a good reason, but the fact that it would allow access directly to all of the OpenStack APIs before resorting to an SDK makes it much more powerful, as well as providing a solid justification for why this should be part of OpenStack.) The ideal way to get support for all of the possible OpenStack APIs would be to do it by introspection on python-openstackclient. That means you'd only have to do the work once and it will stay up to date. This would avoid the problem we have in Heat, where we have to implement each resource type separately. (This is the source of a great deal of Heat's value to users - the existence of tested resource plugins - but also the thing that stops us from iterating the code quicker.) I'm also unsure that it's a good idea for things like timers to be set up inside the DSL. I would prefer that the DSL just define workflows and export entry points to them. Then have various ways to trigger them: from the API manually, from a message to a Marconi queue, from a timer, etc. The latter two you'd set up through the Mistral API. If a user wanted a single document that set up one or more workflows and their triggers, a Heat template
Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN
+1 to implement a modular framework where the user can choose whether to use Barbican or a SQL DB On Fri, May 2, 2014 at 4:28 AM, John Wood john.w...@rackspace.com wrote: Hello Samuel, Just noting that the link below shows current-state Barbican. We are in the process of designing SSL certificate support for Barbican via blueprints such as this one: https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates We intend to discuss this feature in Atlanta to enable coding in earnest for Juno. The Container resource is intended to capture/store the final certificate details. Thanks, John From: Samuel Bercovici [samu...@radware.com] Sent: Thursday, May 01, 2014 12:50 PM To: OpenStack Development Mailing List (not for usage questions); os.v...@gmail.com Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN Hi Vijay, I have looked at the Barbican APIs – https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface I was not able to see a “native” API that will accept an SSL certificate (private key, public key, CSR, etc.) and will store it. We can either store the whole certificate as a single file as a secret or use a container and store all the certificate parts as secrets. I think that having LBaaS reference Certificates as IDs using some service is the right way to go so this might be achieved by either: 1. Adding to Barbican an API to store / generate certificates 2. Create a new “module” that might start by being hosted in neutron or keystone that will allow to manage certificates and will use Barbican behind the scenes to store them. 3. Decide on a container structure to use in Barbican but implement the way to access and arrange it as a neutron library Was any decision made on how to proceed? Regards, -Sam.
From: Vijay B [mailto:os.v...@gmail.com] Sent: Wednesday, April 30, 2014 3:24 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for LBaaS and VPN Hi, It looks like there are areas of common effort in multiple efforts that are proceeding in parallel to implement SSL for LBaaS as well as VPN SSL in neutron. Two relevant efforts are listed below: https://review.openstack.org/#/c/74031/ (https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL) https://review.openstack.org/#/c/58897/ (https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn) Both VPN and LBaaS will use SSL certificates and keys, and this makes it better to implement SSL entities as first class citizens in the OS world. So, three points need to be discussed here: 1. The VPN SSL implementation above is putting the SSL cert content in a mapping table, instead of maintaining certs separately and referencing them using IDs. The LBaaS implementation stores certificates in a separate table, but implements the necessary extensions and logic under LBaaS. We propose that both these implementations move away from this and refer to SSL entities using IDs, and that the SSL entities themselves are implemented as their own resources, serviced either by a core plugin or a new SSL plugin (assuming neutron; please also see point 3 below). 2. The actual data store where the certs and keys are stored should be configurable at least globally, such that the SSL plugin code will singularly refer to that store alone when working with the SSL entities. The data store candidates currently are Barbican and a sql db. Each should have a separate backend driver, along with the required config values. If further evaluation of Barbican shows that it fits all SSL needs, we should make it a priority over a sqldb driver. 3. Where should the primary entries for the SSL entities be stored? 
While the actual certs themselves will reside on Barbican or SQLdb, the entities themselves are currently being implemented in Neutron since they are being used/referenced there. However, we feel that implementing them in keystone would be most appropriate. We could also follow a federated model where a subset of keys can reside on another service such as Neutron. We are fine with starting an initial implementation in neutron, in a modular manner, and move it later to keystone. Please provide your inputs on this. Thanks, Regards, Vijay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
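[Editor's sketch] The "container of secrets" layout discussed in this thread — each certificate part stored as a named secret inside one container — might be modeled roughly like this. All field names are assumptions for illustration, not Barbican's actual schema:

```python
# Hypothetical certificate container: each part of the certificate is a
# named secret, grouped under a single referenceable container.
certificate = {
    "name": "lb-frontend-cert",          # illustrative container name
    "secrets": {
        "private_key": "-----BEGIN RSA PRIVATE KEY-----...",
        "certificate": "-----BEGIN CERTIFICATE-----...",
        "intermediates": "-----BEGIN CERTIFICATE-----...",
    },
}

PUBLIC_PARTS = ("certificate", "intermediates")

def public_view(container):
    """The subset a consumer without private-key access could read back."""
    return {k: v for k, v in container["secrets"].items() if k in PUBLIC_PARTS}
```

A consuming service such as LBaaS would then reference the container by ID, and access control could be expressed per secret rather than on the certificate as an opaque blob.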
Re: [openstack-dev] [Programs] Client Tools program discussion
On Wed, May 7, 2014 at 7:38 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 6, 2014 at 5:45 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, May 6, 2014 at 6:54 AM, Dean Troyer dtro...@gmail.com wrote: On Tue, May 6, 2014 at 7:02 AM, Thierry Carrez thie...@openstack.org wrote: Would you take over the Python client libraries as well ? On one hand they need /some/ domain expertise, but on the other I see no reason to special-case Python against other SDKs, and that may give the libraries a bit more attention and convergence (they currently are the ugly stepchild in some programs, and vary a lot). The future of the existing client libs has not been settled, my working assumption is that they would remain with their home programs as they are now. From the start OpenStackClient was meant to be a clean-slate for the CLI and the Python SDK is taking the same basic approach. Very excited for the OpenStackClient, it is already way nicer than the existing clients. Just working this out in my head. So the workflow would be: 1. At first ClientTools consists of just the OpenStackClient 2. When the pythonSDK is ready to move off of stackforge, it will live in ClientTools 3. Specific python-*clients will be rewritten (from scratch?) to use the PythonSDK. But this time they won't have a built-in CLI. These libraries will live alongside the respective servers (so nova's python-novaclient will live in Compute)? All while moving OpenStackClient to the new libraries. Is that what you are proposing? My understanding is that the SDK aims to be a ground-up replacement for the existing disparate client libraries. Whether that replacement is appropriate for use inside OpenStack may be up for debate (I think I remember someone saying that wasn't necessarily a goal, with the focus being on end users, but I haven't been able to attend many of the meetings so my information may be out of date).
Ideally the python-openstacksdk becomes the one-stop shop for interacting with OpenStack as an OpenStack contributor, an operator, an end-user of an OpenStack cloud, etc. If you're writing Python code to work with OpenStack, that would be the place to go for code, tools, examples, and documentation. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Openstack CloudTrail equivalent
Hi, I am new to OpenStack, and sorry for the trivial question. I would like to know if there is any project / module equivalent or similar to AWS CloudTrail. Any pointers would be helpful. Thank you Magesh ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN
Hi John, If the user already has an SSL certificate that was acquired outside of the Barbican ordering system, how can he store it securely in Barbican as an SSL certificate? The Container stores this information in a very generic way; I think there should be an API that formalizes a specific way in which SSL certificates can be stored and read back as SSL certificates, and not as a loosely coupled container structure. Such an API should have RBAC that allows getting back only the public part of an SSL certificate vs. allowing all the details of the SSL certificate to be read back. -Sam. From: John Wood [mailto:john.w...@rackspace.com] Sent: Thursday, May 01, 2014 11:28 PM To: OpenStack Development Mailing List (not for usage questions); os.v...@gmail.com Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN Hello Samuel, Just noting that the link below shows current-state Barbican. We are in the process of designing SSL certificate support for Barbican via blueprints such as this one: https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates We intend to discuss this feature in Atlanta to enable coding in earnest for Juno. The Container resource is intended to capture/store the final certificate details. Thanks, John From: Samuel Bercovici [samu...@radware.com] Sent: Thursday, May 01, 2014 12:50 PM To: OpenStack Development Mailing List (not for usage questions); os.v...@gmail.com Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN Hi Vijay, I have looked at the Barbican APIs - https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface I was not able to see a native API that will accept an SSL certificate (private key, public key, CSR, etc.) and will store it. We can either store the whole certificate as a single file as a secret or use a container and store all the certificate parts as secrets.
I think that having LBaaS reference certificates as IDs using some service is the right way to go, so this might be achieved by either:
1. Adding to Barbican an API to store/generate certificates
2. Creating a new module, perhaps initially hosted in neutron or keystone, that will manage certificates and use Barbican behind the scenes to store them
3. Deciding on a container structure to use in Barbican, but implementing the way to access and arrange it as a neutron library
Was any decision made on how to proceed? Regards, -Sam. From: Vijay B [mailto:os.v...@gmail.com] Sent: Wednesday, April 30, 2014 3:24 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for LBaaS and VPN Hi, There are areas of common effort among multiple efforts proceeding in parallel to implement SSL for LBaaS and SSL VPN in Neutron. Two relevant efforts are listed below: https://review.openstack.org/#/c/74031/ (https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL) https://review.openstack.org/#/c/58897/ (https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn) Both VPN and LBaaS will use SSL certificates and keys, and this makes it better to implement SSL entities as first-class citizens in the OpenStack world. So, three points need to be discussed here:
1. The VPN SSL implementation above is putting the SSL cert content in a mapping table, instead of maintaining certs separately and referencing them using IDs. The LBaaS implementation stores certificates in a separate table, but implements the necessary extensions and logic under LBaaS. We propose that both these implementations move away from this and refer to SSL entities using IDs, and that the SSL entities themselves are implemented as their own resources, serviced either by a core plugin or a new SSL plugin (assuming neutron; please also see point 3 below).
2.
The actual data store where the certs and keys are stored should be configurable, at least globally, such that the SSL plugin code will refer to that store alone when working with the SSL entities. The data store candidates currently are Barbican and an SQL DB. Each should have a separate backend driver, along with the required config values. If further evaluation of Barbican shows that it fits all SSL needs, we should make it a priority over an SQL DB driver.
3. Where should the primary entries for the SSL entities be stored? While the actual certs themselves will reside in Barbican or the SQL DB, the entities themselves are currently being implemented in Neutron since they are being used/referenced there. However, we feel that implementing them in keystone would be most appropriate. We could also follow a federated model where a subset of keys can reside on another service such as Neutron. We
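[Editor's note] The configurable-backend idea in point 2 above could look roughly like this; a minimal sketch with hypothetical driver and config names (none of these classes is an existing Cinder/Neutron/Barbican API):

```python
# Minimal sketch of point 2: one globally configured certificate store,
# with Barbican and SQL-DB backends behind a common driver interface.
# All names here are hypothetical, not an existing OpenStack API.

class CertStoreDriver:
    def store(self, cert_id, cert):
        raise NotImplementedError
    def get(self, cert_id):
        raise NotImplementedError

class BarbicanCertStore(CertStoreDriver):
    def __init__(self):
        self._fake_barbican = {}        # stand-in for Barbican calls
    def store(self, cert_id, cert):
        self._fake_barbican[cert_id] = cert
    def get(self, cert_id):
        return self._fake_barbican[cert_id]

class SqlCertStore(CertStoreDriver):
    def __init__(self):
        self._rows = {}                 # stand-in for a DB table
    def store(self, cert_id, cert):
        self._rows[cert_id] = cert
    def get(self, cert_id):
        return self._rows[cert_id]

DRIVERS = {"barbican": BarbicanCertStore, "sql": SqlCertStore}

def load_cert_store(conf):
    """Pick the single global backend from configuration."""
    return DRIVERS[conf["cert_store_backend"]]()
```

With this shape, the plugin code only ever sees `CertStoreDriver`, so switching from the SQL DB to Barbican is a config change, which is exactly the "configurable at least globally" property the thread asks for.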
[openstack-dev] Monitoring as a Service
Hello everyone, My team and I have been following this subject and similar ones related to monitoring. IMHO, it seems very logical to aggregate monitoring with Ceilometer. Others have been working on similar features, such as ICC Lab's Nagios/Ceilometer integration and SNMP support. The best choice is to enhance Ceilometer with monitoring features. Are you planning to talk about this in upcoming IRC meetings? Open to further discussion. Best regards, Paulo J. Nascimento Oliveira http://about.me/pnascimento Advanced Telecommunications and Networks Group - http://atnog.av.it.pt Follow us - @ATNoG_ITAv From: Eoghan Glynn egl...@redhat.com Subject: Re: [openstack-dev] Monitoring as a Service Date: 7 May 2014 09:57:38 GMT+1 To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Hi Alexandre, I wanted to let this discussion develop a little before jumping in, as we've already had many circular debates about the cross-over between ceilometer and monitoring infrastructure in general. Personally I'm in favor of the big tent/broad church interpretation of ceilometer's project mandate, and would welcome further development of our capabilities in this area (whether directly within the ceilometer code-tree itself, or within a parallel repo aligned with the Telemetry program). In terms of furthering the discussion, unfortunately you've missed the boat in terms of securing a slot in the design summit next week in Atlanta (proposal deadline was April 20th, and the scheduling has all been finalized at this stage). However, we do have a project pod space available for ad-hoc overflow sessions. I would suggest that we organize something on this theme after the main ceilometer track[1] has completed, say on the Thursday or Friday.
Please reach out on IRC to discuss availability for this and we'll work out something around remote participation. Thanks, Eoghan [1] http://junodesignsummit.sched.org/overview/type/ceilometer - Original Message - Thanks to everyone for the feedback. I agree that this falls under the Telemetry Program and I have moved the blueprint. You can find it here: https://blueprints.launchpad.net/ceilometer/+spec/monitoring-as-a-service Wiki page: https://wiki.openstack.org/wiki/MaaS Etherpad: https://etherpad.openstack.org/p/MaaS I can go over the project with you as well as others that are interested. We would like to start working with other open-source developers. I'll also be at the Summit next week. Roland, I currently have no plans to be at the Summit next week. However, I would be interested in exploring what you have already done and learn from it. Maybe we can schedule a meeting? You can always contact me on IRC (aviau) or by e-mail at alexandre.v...@savoirfairelinux.com For now, I think we should focus on the use cases. I invite all of you to help us list them on the Etherpad. Alexandre On 14-05-05 12:00 PM, Hochmuth, Roland M wrote: Alexandre, Great timing on this question and I agree with your proposal. I work for HP and we are just about to open-source a project for Monitoring as a Service (MaaS), called Jahmon. Jahmon is based on our customer-facing monitoring as a service solution and internal monitoring projects. Jahmon is a multi-tenant, highly performant, scalable, reliable and fault-tolerant monitoring solution that scales to service provider levels of metrics throughput. It has a RESTful API that is used for storing/querying metrics, creating compound alarms, querying alarm state/history, sending notifications and more. I can go over the project with you as well as others that are interested. We would like to start working with other open-source developers. I'll also be at the Summit next week. 
Regards --Roland On 5/4/14, 1:37 PM, John Dickinson m...@not.mn wrote: One of the advantages of the program concept within OpenStack is that separate code projects with complementary goals can be managed under the same program without needing to be the same codebase. The most obvious examples are the server and client projects under most programs. This may be something that can be used here, if it doesn't make sense to extend the ceilometer codebase itself. --John On May 4, 2014, at 12:30 PM, Denis Makogon dmako...@mirantis.com wrote: Hello to all. I also +1 this idea. As I can see, the Telemetry program (according to Launchpad) covers the processing of infrastructure metrics (networking, etc.) and in-compute-instance metrics/monitoring. So the best option, I guess, is to propose adding such a feature to Ceilometer. In-compute-instance monitoring would be a great value-add to upstream Ceilometer. As for me, it's a good chance to integrate well-known production ready
Re: [openstack-dev] [Juno-Summit] availability of the project project pod rooms on Monday May 12th?
Hayes, Graham wrote: I had a question about Program pods access. Do people need to be ATCs to get into the area with the pods? I ask, as not all members of the Designate group would be ATCs... In theory, anyone with a full pass can enter the design summit area. I don't expect there to be ATC-only areas. Regards, -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [trove] gpd: a git-push-based dev workflow for OpenStack projects
I just wanted to share a project that I've been working on. It's a development workflow for OpenStack projects. I like to code in PyCharm and push my changes to a DevStack VM. I don't use Vagrant and I don't like manually scp'ing code. So I created git-push-devstack, or gpd: https://github.com/mlowery/git-push-devstack After configuring your laptop (or wherever you code) and your DevStack VM, pushing your code from your laptop to your DevStack VM and restarting affected processes is as easy as:
$ git commit
$ git push test
Code and docs are at the GitHub link. Feedback is welcome. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Programs] Client Tools program discussion
On Apr 28, 2014, at 8:20 AM, Dean Troyer dtro...@gmail.com wrote: I want to open the discussion of an OpenStack Client Tools program proposal to a wider audience. It would initially consist of OpenStackClient and eventually add the existing SDK projects as they are ready to join. The initial wiki page is at https://wiki.openstack.org/wiki/ClientTools. I do want to have the proposal made before the summit, but not necessarily the TC consideration. There has recently been some discussion (specifically around summit sessions) regarding the overlap of client code and the user experience team. This is one of the things I want to get some feedback on before making a formal proposal. The mission statement and description are written with the anticipation of one or more SDK projects joining the program during the Juno cycle. Hi Dean, This is great. We’ve been working towards providing a better developer experience (DX) in OpenStack over the past few Summits. To give you some background there have been design summit sessions like the SDK Discussion [1] [2] and Documenting Application Developer Resources [3] [4] as well as summit sessions like The OpenStack Community Welcomes Developers in All Languages [5]. We’ve also got some summit sessions for Juno in You Sir, Sir Vey [6] and Focusing on Developer Experience and Announcing developer.openstack.org [7]. We would definitely like to discuss your proposal and join together for a DX related program, whether that’s part of a larger DX/UX program or not. I think calling it just “Client Tools” is too limiting because it’s about more than just tools. Considering how close we are to the summit, is there a time/place/particular session where you’d like to discuss it?
Thanks, Everett [1] http://openstacksummitfall2012.sched.org/event/2215363b1716a519e786e126b493e3a3 [2] https://etherpad.openstack.org/p/sdk-documentation [3] http://icehousedesignsummit.sched.org/event/d5ad5bd83247868ff07b59ea4384b307 [4] https://etherpad.openstack.org/p/icehouse-doc-app-devs [5] http://openstacksummitnovember2013.sched.org/event/41b333a6736a92f4056246719deec1fc [6] http://openstacksummitmay2014atlanta.sched.org/event/f2675464b7624775f9f0accb78e34259 [7] http://openstacksummitmay2014atlanta.sched.org/event/f1ee610553f3c64296eb1f82ae6bf73d ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [TripleO][Tuskar][Summit] Tuskar Session Etherpad
Hey all, Here's a link to the etherpad for the summit session on Tuskar: https://etherpad.openstack.org/p/juno-summit-tripleo-tuskar-planning Please feel free to discuss or add suggestions or make changes! Thanks, Tzu-Mainn Chen ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Timestamp formats in the REST API
On 04/29/2014 09:48 AM, Mark McLoughlin wrote: What appeared to be unusual was that the timestamp had both sub-second time resolution and timezone information. It was felt that this wasn't a valid timestamp format, and then there was some debate about how to 'fix' it. My conclusions from all that:
1) This sucks.
2) At the very least, we should be clear in our API samples tests which of the three formats we expect - we should only change the format used in a given part of the API after weighing any compatibility concerns.
3) We should unify on a single format in the v3 API - IMHO, we should be explicit about use of the UTC timezone and we should avoid including microseconds unless there's a clear use case. In other words, we should use the 'isotime' format.
4) The 'xmltime' format is just a dumb historical mistake, and since XML support is now firmly out of favor, let's not waste time improving the timestamp situation in XML.
5) We should at least consider moving to a single format in the v2 (JSON) API. IMHO, moving from strtime to isotime for fields like created_at and updated_at would be highly unlikely to cause any real issues for API users.
Having dealt with timestamp issues in several other (Python) projects I've come to the following conclusions:
* datetime values are always stored and transferred in UTC.
* UTC time values are only converted to local time for presentation purposes.
* time zone info should be explicit (no guessing), therefore all datetime objects should be aware, not naive, and when rendered in string representation must include the tz offset (not a time zone name; however, Z for UTC is acceptable).
* sub-second resolution is often useful and should be supported, but is not mandatory.
-- John ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
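[Editor's note] The 'isotime' vs. 'strtime' distinction in the thread can be illustrated with a short sketch. The two helper functions below approximate the formats being debated (explicit UTC 'Z' without microseconds vs. microseconds without any timezone marker); the function names mirror the thread's terminology and are not the actual oslo/nova helpers:

```python
from datetime import datetime, timezone

def isotime(dt):
    """UTC, explicit 'Z' offset, no microseconds -- the preferred format."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def strtime(dt):
    """Microseconds but no timezone marker -- ambiguous for API users."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")

dt = datetime(2014, 5, 7, 9, 57, 38, 123456, tzinfo=timezone.utc)
print(isotime(dt))   # 2014-05-07T09:57:38Z
print(strtime(dt))   # 2014-05-07T09:57:38.123456
```

The second output is exactly the ambiguity John's rules are aimed at: a reader cannot tell from the string alone whether it is UTC or local time.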
[openstack-dev] [Doc] [ceilometer] [cinder] [glance] [heat] [horizon] [keystone] [neutron] [nova] [swift] [trove] Atlanta Summit – Discuss docs process and tool improvements
Hi all, Anne here - your friendly documentarian! As the docs PTL, it’s my job to gather input about and make doc tool and process decisions. This note is the first step: Your responses to this note give me input to share at the Summit where, hopefully, we will form a consensus about process and tool changes. About a week after the Summit, I'll produce a roadmap for changes that we will implement as a community. A recurring issue that I hope we can solve is barriers to doc contributions. Leading up to the Summit, I've gathered as much data as I can about these barriers. We have an awesome doc team with 20 core people, but only about a dozen people write and review the bulk of the cross-project documentation regularly. It's amazing we've accomplished all we have with the resources available. We have increased doc contributions by 10% since the last release, but even so, we must find ways to increase the doc contributor base. Remember three and a half years ago? We had nova and swift and a web site for each. Fast-forward and we now have docs for multiple audiences: - Contributors who make OpenStack - Deployers who configure OpenStack - Operations pros who run OpenStack - End users who consume and administer OpenStack But the docs’ contributor base is not growing at the same rate as our audiences. Why not? The responses to my recent survey about doc contributions indicate that the top barriers to doc contributions are: - Tools: DocBook and WADL are difficult - Doc locations: Lack of knowledge about where docs belong - Process: Git/gerrit as process - Subject-matter expertise: People do not have test environments and they feel that they don't know enough to contribute. Also, 70% of the respondents to the survey work on or consume OpenStack fewer than 10 hours a week. It's not easy to interpret and analyze this data, but I believe we're not enabling part-time OpenStack users to write docs and feel confident in their contributions.
To solve this problem, I want to explore possible changes to our doc tools and processes. We must engage more doc contributors to spread the doc creation and maintenance burden. Tuesday at 12:05 in B304, [1] we're having a cross-project documentation session to discuss this list of requirements for doc tools and process changes (see [2] for spreadsheet format):
Experience:
- Solution must be completely open source
- Content must be available online
- Search engines must be able to index
- Content must be searchable
- Content should be easily cross-linked by topic and type (priority: low)
- Enable comments, ratings, and analytics (or ask.openstack.org integration) (priority: low)
Distribution:
- Readers must get versions of technical content specific to version of product
- Modular authoring of content
- Graphic and text content should be stored as files, not in a database
- Consumers must get technical content in PDF, HTML, video, and audio
- Workflow for review and approval prior to publishing content
Authoring:
- Content must be re-usable across authors and personas (single source)
- Must support many content authors with multiple authoring tools
- Existing content must migrate smoothly
- All content versions need to be comparable (diff) across versions
- Content must be organizationally segregated based on user personas
- Draft content must be reviewable in HTML
- Link maintenance: links must update with little manual maintenance to avoid broken links and link validation
I want to gather input about these requirements before we discuss implementation. As far as tools go, some possible solutions are Markdown, reStructuredText (RST), and AsciiDoc, each with its own build framework. I'm not anticipating a complete migration, but we need to solve these barriers for certain docs. Please let me know your thoughts through the OpenStack-docs mailing list if you cannot attend in person. I am excited to hear your ideas, and I can't wait to meet a wider audience and serve more doc contributors.
Thanks, Anne [1] http://junodesignsummit.sched.org/event/626a1e21babaf30d98973f5eb7402fcf [2] https://docs.google.com/spreadsheet/ccc?key=0AhXvn1h0dcyYdFNaQW1OaVNNejZzYlRMZjBnT0pLMVEusp=sharing ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat] How to cross-reference resources within OS::Heat::ResourceGroup
On 06/05/14 16:07, Janczuk, Tomasz wrote: Could this be accomplished with 3 resource groups instead of one? The first would create the ports, the second floating IPs, and the last the VMs? In that case, would there be a way to construct a reference to a particular instance of, say, a port, when creating an instance of a floating IP? No, and that wouldn't make any sense. The scaling unit is the 3 resources together, so you should group them into a template and scale that. Also, please post usage questions to ask.openstack.org, not to the development list. thanks, Zane. On 5/6/14, 12:41 PM, Randall Burt randall.b...@rackspace.com wrote: A resource group's definition contains only one resource and you seem to want groups of multiple resources. You would need to use a nested stack or provider template to do what you're proposing. On May 6, 2014, at 2:23 PM, Janczuk, Tomasz tomasz.janc...@hp.com wrote: I am trying to create an OS::Heat::ResourceGroup of VMs and assign each VM a floating IP. As far as I know this requires cross-referencing the VM, port, and floating IP resources. How can I do that within a OS::Heat::ResourceGroup definition? The `port: { get_resource: vm_cluster.vm_port.0 }` below is rejected by Heat. Any help appreciated. 
Thanks, Tomasz

    vm_cluster:
      type: OS::Heat::ResourceGroup
      properties:
        count: { get_param: num_instances }
        resource_def:
          vm:
            type: OS::Nova::Server
            properties:
              key_name: { get_param: key_name }
              flavor: { get_param: flavor }
              image: { get_param: image }
              networks:
                - port: { get_resource: vm_cluster.vm_port.0 }
          vm_port:
            type: OS::Neutron::Port
            properties:
              network_id: { get_param: private_net_id }
              fixed_ips:
                - subnet_id: { get_param: private_subnet_id }
              security_groups: [{ get_resource: rabbit_security_group }]
          vm_floating_ip:
            type: OS::Neutron::FloatingIP
            properties:
              floating_network_id: { get_param: public_net_id }
              port_id: { get_resource: vm_cluster.vm_port.0 }

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
We're suffering from a total overload of the term 'metadata' here, and there are 3 totally separate things that are somehow becoming mangled Thanks for the summary. The term metadata definitely gets overloaded. I've been experimenting with the metadata to see what happens with all of it.
- Glance image properties: ALL properties are copied to volume_image_metadata in Cinder
- Cinder volume_metadata: disappears when the volume is uploaded to an image in Glance
- Cinder volume_image_metadata: core image properties are copied; the rest are lost
One thing I did was create an image and add properties. Then I created a bootable volume from it. Then I uploaded the bootable volume to Glance. When it got back to Glance, all the non-core properties were gone. Was this the intended design? This doesn't seem right. Aside from pure user properties, this would also be a problem in Trump.Zhang's original scenario, where you may change options on a volume needed for Nova scheduling or driver settings like hw_scsi_mode. You lose all those properties when uploading to an image. Regarding the property protections in Glance, it looks like they use RBAC. It seems to me that if a volume is being uploaded to Glance with protected properties and the user doing the copying doesn't have the right roles to create those properties, then Glance should reject the upload request. Based on the etherpads, the primary motivation for property protections was an image marketplace, which doesn't seem to apply in the same way to volumes. If property protections were needed for other reasons, like restricting properties that are picked up by the scheduler or a driver, then that is another use case which may affect the cinder client blueprint [1], but it seems like it would need to be handled by the Cinder API. We are trying to work out some concepts on all of this as part of the Graffiti project [2] and a related Horizon blueprint [3] for metadata management.
If we put in a UI element for handling the metadata, where should the cinder metadata go? Would it be reasonable to just use volume_image_metadata if the volume is bootable? Otherwise, use the volume_metadata? [1] https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata [2] https://wiki.openstack.org/wiki/Graffiti/Architecture#Proposed_Horizon_Concepts [3] https://blueprints.launchpad.net/horizon/+spec/tagging -Original Message- From: Duncan Thomas [mailto:duncan.tho...@gmail.com] Sent: Wednesday, May 07, 2014 7:57 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata On 7 May 2014 09:36, Trump.Zhang zhangleiqi...@gmail.com wrote: @Tripp, Thanks for your reply and info. I am also wondering whether it is proper to add support for updating the volume's glance_image_metadata to reflect the newest status of the volume. However, there may be alternative ways to achieve it: 1. Using the volume's metadata 2. Using the volume's admin_metadata So I am wondering which is the most proper method. We're suffering from a total overload of the term 'metadata' here, and there are 3 totally separate things that are somehow becoming mangled: 1. Volume metadata - this is for the tenant's own use. Cinder and nova don't assign meaning to it, other than treating it as stuff the tenant can set. It is entirely unrelated to glance_metadata 2. admin_metadata - this is an internal implementation detail for cinder to avoid every extension having to alter the core volume db model. It is not the same thing as glance metadata or volume_metadata. An interface to modify volume_glance_metadata sounds reasonable, however it is *unrelated* to the other two types of metadata. They are different things, not replacements or anything like that.
Glance protected properties need to be tied into the modification API somehow, or else it becomes a trivial way of bypassing protected properties. Hopefully a glance expert can pop up and suggest a way of achieving this integration. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
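[Editor's note] The lossy round-trip reported earlier in this thread (volume_metadata dropped on upload, only core image properties preserved) can be sketched as plain functions standing in for the Glance/Cinder copy steps. The copy rules and CORE_PROPERTIES below are illustrative assumptions drawn from the experiment described, not the authoritative Glance/Cinder behaviour or property list:

```python
# Sketch of the metadata round-trip reported in this thread. CORE_PROPERTIES
# is an illustrative subset, not the authoritative Glance core-property list.

CORE_PROPERTIES = {"disk_format", "container_format", "min_disk", "min_ram"}

def create_volume_from_image(image_properties):
    """Creating a bootable volume copies ALL image properties."""
    return {"volume_image_metadata": dict(image_properties),
            "volume_metadata": {}}

def upload_volume_to_image(volume):
    """Uploading back to Glance keeps only core properties;
    volume_metadata is dropped entirely."""
    return {k: v for k, v in volume["volume_image_metadata"].items()
            if k in CORE_PROPERTIES}

props = {"disk_format": "qcow2", "hw_scsi_model": "virtio-scsi",
         "owner_note": "user property"}
vol = create_volume_from_image(props)
vol["volume_metadata"]["team"] = "ops"   # tenant's own metadata
image = upload_volume_to_image(vol)
print(sorted(image))                     # non-core properties are gone
```

Running this shows the asymmetry the thread complains about: everything survives image-to-volume, but only the core subset survives volume-to-image, and tenant volume_metadata never crosses at all.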
Re: [openstack-dev] [infra] Meeting Tuesday May 6th at 19:00 UTC
On Sun, May 4, 2014 at 9:33 PM, Elizabeth K. Joseph l...@princessleia.com wrote: Hi everyone, The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday May 6th, at 19:00 UTC in #openstack-meeting Meeting minutes and log here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-06-19.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-06-19.01.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-06-19.01.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 http://www.princessleia.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [openstack-sdk-dotnet] Design details doc available
For anyone interested in the .NET SDK... A new doc covering some of the design details of the project is available. As always, the team is open to and welcomes feedback from the community! https://wiki.openstack.org/wiki/OpenStack-SDK-DotNet/DesignDetails Cheers, --Wayne ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] Node uuid and Nova Hypervisor ID
Can you describe the deployment topology you envision -- which service(s) are on what host(s)? Your question seems based on an assumption that ironic is able to control the power state of the nova-compute host _itself_ -- which is not true (*). When using the nova.virt.ironic driver with Nova, you would run a very small number of nova-compute services, and scale out the ironic-conductor services to some degree. Ironic is not a service to manage the power state of the nova-compute host -- it's a separate service to provision Nova instances on bare metal servers (which are, generally, kept powered off unless in use). (*) unless you're using TripleO, in which case your question seems to be mixing the undercloud (ironic) with the overcloud (nova-compute + libvirt/xen/etc), without explicitly explaining how or why you're doing that. -D On Tue, May 6, 2014 at 1:36 PM, François Rossigneux francois.rossign...@inria.fr wrote: Devananda, I get the Nova Hypervisor ID by typing nova hypervisor-list. I am developing a resource reservation service and the reservations are attached to a Nova Hypervisor ID. I would use Ironic to put the nodes in standby mode when there are no running instances. This is why I need to get the Ironic node UUID from a Nova Hypervisor ID... http://ironic:6385/v1/nodes?instance_uuid=blablabla is not a perfect solution: without running instances, you cannot retrieve the node UUID... Thanks. On 06/05/2014 22:05, Devananda van der Veen wrote: François, Can you clarify by way of a CLI example what exactly you mean by nova hypervisor id? I'm not sure if you mean the instance uuid, compute host id, service id, or something else. I'll assume you mean the nova instance uuid, in which case, you can get the Ironic node uuid from nova show $instance -- it is in the hypervisor_hostname field. -D On Tue, May 6, 2014 at 2:23 AM, François Rossigneux francois.rossign...@inria.fr wrote: Hi all, I need to retrieve the Ironic node uuid from a Nova Hypervisor ID.
How can I do that? Thanks. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- François Rossigneux Ingénieur Inria Projet XLcloud http://perso.ens-lyon.fr/francois.rossigneux ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
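[Editor's note] The two lookups discussed in this thread (reading hypervisor_hostname from `nova show`, and filtering `/v1/nodes?instance_uuid=...`) amount to something like the sketch below. The dict shapes are simplified assumptions, not the exact Nova/Ironic API payloads:

```python
# Simplified sketch of the two lookups discussed in the thread.
# The dict shapes are assumptions, not the exact Nova/Ironic payloads.

def node_uuid_from_instance(server):
    """'nova show': with the ironic driver, hypervisor_hostname holds the
    node UUID (shown as OS-EXT-SRV-ATTR:hypervisor_hostname in the CLI)."""
    return server["OS-EXT-SRV-ATTR:hypervisor_hostname"]

def node_uuid_by_instance_uuid(nodes, instance_uuid):
    """GET /v1/nodes?instance_uuid=... : filter the node list."""
    for node in nodes:
        if node.get("instance_uuid") == instance_uuid:
            return node["uuid"]
    return None  # no running instance -> cannot resolve the node this way
```

The `None` branch is exactly François's complaint: both routes go through an instance UUID, so a powered-off node with no instance cannot be resolved by either.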
Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta
Hi folks, OK, this seems more complex than expected ;) Monday is the OpenStack speakers dinner. Let's do it on Sunday. Matthew already suggested a location: Ted's Montana Grill https://plus.google.com/100773563660993493024/posts/P9YSgT8AVXh For all the people interested, just send me your mobile number. Regards Marc From: Koderer, Marc [m.kode...@telekom.de] Sent: Monday, May 05, 2014 8:41 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta All right, let’s meet on Monday ;) We can discuss the details after the QA meeting this week. From: Frittoli, Andrea (HP Cloud) [mailto:fritt...@hp.com] Sent: Thursday, May 1, 2014 18:42 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta I will arrive Sunday late. If you meet on Monday I’ll see you there ^_^ From: Miguel Lavalle [mailto:mig...@mlavalle.com] Sent: 01 May 2014 17:28 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta I arrive Sunday at 3:30pm. Either Sunday or Monday is fine with me. Looking forward to it :-) On Wed, Apr 30, 2014 at 5:11 AM, Koderer, Marc m.kode...@telekom.de wrote: Hi folks, last time we met one day before the Summit started for a short meet-up. Should we do the same this time? I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be fine for me. Regards, Marc ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Monitoring as a Service
Hi everyone, It is not possible for me to attend the summit, and I would prefer if we waited until after the summit. My colleague Jonathan Le Lous, Cloud leader (OpenStack) at Savoir-Faire Linux, will be attending the first three days of the summit. For those of you who would like to talk, you can meet him there. I will be attending Solutions Linux in Paris (May 20-21); you may contact me to arrange a meeting there. I will also most likely attend the next summit in Paris (Nov 03 – 08), where I plan to arrange discussions around the topic. In the meantime, I will shortly contact everyone subscribed to the blueprint to plan an IRC meeting. I will be posting more information about the meeting on openstack-dev as soon as we have settled on a date and time. Thanks, Alexandre Viau - Original Message - From: Paulo Oliveira paulonascime...@av.it.pt To: openstack-dev@lists.openstack.org Cc: alexandre viau alexandre.v...@savoirfairelinux.com Sent: Wednesday, May 7, 2014 12:10:08 PM Subject: [openstack-dev] Monitoring as a Service Hello everyone, My team and I have been following this subject and similar ones related to monitoring. IMHO, it seems very logical to aggregate monitoring with Ceilometer. Others have been working on similar features, such as ICC Lab's Nagios/Ceilometer integration and SNMP support. The best choice is to enhance Ceilometer with monitoring features. Are you planning to talk about this in upcoming IRC meetings? Open to further discussion. Best regards, Paulo J.
Nascimento Oliveira http://about.me/pnascimento Advanced Telecommunications and Networks Group - http://atnog.av.it.pt Follow us - @ATNoG_ITAv From: Eoghan Glynn egl...@redhat.com Subject: Re: [openstack-dev] Monitoring as a Service Date: 7 May 2014 09:57:38 GMT+1 To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Hi Alexandre, I wanted to let this discussion develop a little before jumping in, as we've already had many circular debates about the cross-over between ceilometer and monitoring infrastructure in general. Personally I'm in favor of the big tent/broad church interpretation of ceilometer's project mandate, and would welcome further development of our capabilities in this area (whether directly within the ceilometer code-tree itself, or within a parallel repo aligned with the Telemetry program). In terms of furthering the discussion, unfortunately you've missed the boat in terms of securing a slot in the design summit next week in Atlanta (the proposal deadline was April 20th, and the scheduling has all been finalized at this stage). However, we do have a project pod space available for ad-hoc overflow sessions. I would suggest that we organize something on this theme after the main ceilometer track [1] has completed, say on the Thursday or Friday. Please reach out on IRC to discuss availability for this and we'll work out something around remote participation. Thanks, Eoghan [1] http://junodesignsummit.sched.org/overview/type/ceilometer - Original Message - Thanks to everyone for the feedback. I agree that this falls under the Telemetry Program and I have moved the blueprint. 
You can find it here: https://blueprints.launchpad.net/ceilometer/+spec/monitoring-as-a-service Wiki page: https://wiki.openstack.org/wiki/MaaS Etherpad: https://etherpad.openstack.org/p/MaaS Roland, I currently have no plans to be at the Summit next week. However, I would be interested in exploring what you have already done and learning from it. Maybe we can schedule a meeting? You can always contact me on IRC (aviau) or by e-mail at alexandre.v...@savoirfairelinux.com For now, I think we should focus on the use cases. I invite all of you to help us list them on the Etherpad. Alexandre On 14-05-05 12:00 PM, Hochmuth, Roland M wrote: Alexandre, Great timing on this question and I agree with your proposal. I work for HP and we are just about to open-source a project for Monitoring as a Service (MaaS), called Jahmon. Jahmon is based on our customer-facing monitoring-as-a-service solution and internal monitoring projects. Jahmon is a multi-tenant, highly performant, scalable, reliable and fault-tolerant monitoring solution that scales to service-provider levels of metrics throughput. It has a RESTful API that is used for storing/querying metrics, creating compound alarms, querying alarm state/history, sending notifications and more. I can go over the project with you as well as others that are interested. I'll also be at the Summit next week. We would like to start working with other open-source developers.
Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)
(response inline) - Original Message - From: Zane Bitter zbit...@redhat.com To: openstack-dev@lists.openstack.org Sent: Tuesday, May 6, 2014 9:09:09 PM Subject: Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2) On 05/05/14 13:40, Solly Ross wrote: One thing that I was discussing with @jaypipes and @dansmith over on IRC was the possibility of breaking flavors down into separate components -- i.e. have a disk flavor, a CPU flavor, and a RAM flavor. This way, you still get the control of the size of your building blocks (e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid exponential flavor explosion by separating out the axes. Dimitry and I have discussed this on IRC already (no-one changed their mind about anything as a result), but I just wanted to note here that I think even this idea is crazy. VMs are not allocated out of a vast global pool of resources. They're allocated on actual machines that have physical hardware costing real money in fixed ratios. Here's a (very contrived) example. Say your standard compute node can support 16 VCPUs and 64GB of RAM. You can sell a bunch of flavours: maybe 1 VCPU + 4GB, 2 VCPU + 8GB, 4 VCPU + 16GB, etc. But if (as an extreme example) you sell a server with 1 VCPU and 64GB of RAM you have a big problem: 15 VCPUs that nobody has paid for and you can't sell. (Disks add a new dimension of wrongness to the problem.) So the simple solution is to not allow a VM that uses all of the RAM on a given host (just don't have a RAM flavor that size), and then schedule the VM on a host that has minimal RAM usage but high CPU usage. This is especially true for disk, where you may have situations where you don't care if an otherwise large VM uses no disk (disks on network stores, etc.). The insight of flavours, which is fundamental to the whole concept of IaaS, is that users must pay the *opportunity cost* of their resource usage. 
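Zane's contrived numbers can be checked with a little arithmetic. A minimal sketch, assuming the hypothetical node size and VM shape from the example above (not real flavors):

```python
# Hypothetical figures from the contrived example above: a compute node
# with 16 VCPUs and 64 GB of RAM, and a VM asking for 1 VCPU + 64 GB.
NODE_VCPUS, NODE_RAM_GB = 16, 64

def stranded_after(vm_vcpus, vm_ram_gb):
    """Resources left on the node once the VM is placed.

    If one dimension hits zero, the remainder of the other dimension
    can never be sold -- that is the stranded (unpaid-for) capacity.
    """
    return NODE_VCPUS - vm_vcpus, NODE_RAM_GB - vm_ram_gb

# The 1 VCPU + 64 GB VM exhausts RAM and strands 15 VCPUs:
print(stranded_after(1, 64))  # (15, 0)
```

Once one dimension (here RAM) is exhausted, whatever remains in the other dimensions is pure stranded capacity that no customer is paying for.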
If you allow users to opt, at their own convenience, to pay only the actual cost of the resources they use regardless of the opportunity cost to you, then your incentives are no longer aligned with your customers. You'll initially be very popular with the kind of customers who are taking advantage of you, but you'll have to hike prices across the board to make up the cost leading to a sort of dead-sea effect. A Gresham's Law of the cloud, if you will, where bad customers drive out good customers. Simply put, a cloud allowing users to define their own flavours *loses* to one with predefined flavours 10 times out of 10. Note that this proposal wouldn't prevent flavors from being used -- you could still have flavor bundles (or something of the sort) that would act the way current flavors do. In the above example, you just tell the customer: bad luck, you want 64GB of RAM, you buy 16 VCPUs whether you want them or not. It can't actually hurt to get _more_ than you wanted, even though you'd rather not pay for it (provided, of course, that everyone else *is* paying for it, and cross-subsidising you... which they won't). Again, what if you also have a user who needs lots of CPUs, but a relatively small amount of RAM or disk? Now, it's not the OpenStack project's job to prevent operators from going bankrupt. But I think at the point where we are adding significant complexity to the project just to enable people to confirm the effectiveness of a very obviously infallible strategy for losing large amounts of money, it's time to draw a line. By the way, the whole theory behind this idea seems to be that this: nova server-create --cpu-flavor=4 --ram-flavour=16G --disk-flavor=200G minimises the cognitive load on the user, whereas this: nova server-create --flavor=4-64G-200G But, flavors aren't (and shouldn't be) named 16G. Realistically, it would look more like nova create --cpu-flavor=large --ram-flavor=medium --disk-flavor=small for many clouds. 
Additionally, keep in mind that not everyone uses the command line. Developers often forget that many users will want to use horizon, and selecting 4-64G-200G (or even large-medium-large) from a long list can be very annoying (suppose we have 6 RAM flavors and 6 disk flavors -- that's 36 flavors that start with 4-). will cause the user's brain to explode from its combinatorial complexity. I find this theory absurd. In other words, if you really want to lose some money, it's perfectly feasible with the existing flavour implementation. The operator is only ever 3 for-loops away from setting up every combination of flavours possible from combining the CPU, RAM and disk options, and can even apply whatever constraints they desire. All that said, Heat will expose any API that Nova implements. Choose wisely. cheers, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
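Zane's "3 for-loops" remark can be sketched concretely. Assuming purely illustrative flavor axes and an example ratio constraint (none of these are real flavors), an operator could pre-generate every constrained combination like this:

```python
from itertools import product

# Illustrative flavor axes -- not real flavors from any deployment.
cpus = [1, 2, 4, 8]
ram_gb = [4, 8, 16, 32]
disk_gb = [20, 100, 200]

# Generate every combination, applying whatever constraint the operator
# desires -- here, a fixed RAM/VCPU ratio of 4, matching the hardware's
# fixed resource ratios that Zane describes.
flavors = [
    (c, r, d)
    for c, r, d in product(cpus, ram_gb, disk_gb)
    if r == 4 * c
]

for c, r, d in flavors:
    print(f"m.{c}vcpu-{r}G-{d}G")
```

Only the ratio-respecting combinations survive (12 of the 48 possible here), which is exactly the point: the existing flavor mechanism already lets an operator enumerate any constrained subset they like.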
Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron
On Wed, May 7, 2014 at 6:45 AM, Susanne Balle sleipnir...@gmail.com wrote: Hi Advanced Services/LBaaS Stackers, We are setting up a meeting to discuss whether it makes sense to separate the advanced services (LBaaS, FWaaS, VPNaaS) from Neutron into separate projects. We want a healthy discussion around the pros and cons of separating the advanced services from Neutron and its short- or long-term feasibility. I've already spoken to Susanne about this multiple times, so I wanted to share my thoughts publicly as well. As the PTL of Neutron, I am not in favor of splitting advanced services out of Neutron. There are many reasons for this, and I'll enumerate some of them here:
1. Splitting services out will actually slow down velocity.
2. Splitting out advanced services would require locking down APIs which are currently internal to Neutron and have not been exposed outside.
3. The advanced services team has worked out a high-level plan, along with more detailed blueprints, for the work they plan to execute on in Juno [1]
[1] https://review.openstack.org/#/c/92200/
The meeting is planned for: Tuesday May 13th at 2pm in the Neutron pod. There will be a designated pod for each of the official programs at: https://wiki.openstack.org/wiki/Programs Some programs share a pod. There will be a map at the center of the space, as well as signage to help find the relevant pod. Based on discussions with Rackspace, Mirantis, and others, it is clear that the advanced services (i.e. LBaaS) in Neutron are not getting the attention and support to move forward and create a first-in-class load-balancer service from a service provider or operator's perspective. We currently have a lot of momentum and energy behind the LBaaS effort but are being told that the focus for Neutron is bug fixing, given the instability in Neutron itself. 
While the latter is totally understandable as a high priority for Neutron, it leaves the advanced services out in the cold with no way to make progress in developing features that are needed to support the many companies that rely on LBaaS for large-scale deployments. The reason services haven't gotten the attention some think they deserved in Icehouse was the focus on testing and stability for Neutron during Icehouse. We've made great strides there as a team now, and we're continuing those efforts in Juno. And as I alluded to above, the advanced services team has a working document now, along with sub-documents, for how they plan to move forward on services for Juno (LBaaS, VPNaaS, FWaaS, etc.). As I indicated before, splitting this out would slow velocity, amongst other logistical issues. The current Neutron LB API and feature set meet minimum requirements for small-to-medium private cloud deployments, but do not meet the needs of larger, provider (or operator) deployments that include hundreds if not thousands of load balancers and multiple domain users (discrete customer organizations). The OpenStack LBaaS community looked at requirements and noted that the following operator-focused requirements are currently missing:
· Scalability
· SSL certificate management – for an operator-based service, SSL certificate management is a much more important function that is currently not addressed in the current API or blueprint
· Metrics collection – a very limited set of metrics is currently provided by the current API
· Separate admin API for NOC and support operations
· Minimal downtime when migrating to newer versions
· Ability to migrate load balancers (SW to HW, etc.)
· Resiliency functions like HA and failover
· Operator-based load balancer health checks
· Support for multiple, simultaneous drivers
Yes, we're working on addressing these as a team. None of the above necessitates a new project being formed. 
We have had great discussions on the LBaaS mailing list and on IRC about all the things we want to do, the new APIs, the user use cases, requirements and priorities, the operator requirements for LBaaS, etc., and I am at this point wondering if Neutron LBaaS as a sub-project of Neutron can fulfill our requirements. Why? Frankly, any issues you've had on IRC or on the ML would only carry over into a new project. You can't solve logistical problems by adding more layers of bureaucracy. I would like this group to discuss the pros and cons of separating the advanced services, including LB, VPN, and FW, out of Neutron and allow each of the three currently existing advanced services to become stand-alone projects or one standalone project. This should be done under the following assumptions:
· Keep backwards compatibility with the current Neutron LBaaS plugin/driver API (to some point) so that existing drivers/plug-ins continue to work for people who have already invested in Neutron LBaaS
· Migration
[openstack-dev] [qa] Debugging tox tests with pdb?
Hi, I've read much of the documentation around OpenStack tests, tox, and testr. All I've found indicates debugging can be done, but only by running the entire test suite. I'd like the ability to run a single test module with pdb.set_trace() breakpoints inserted, then step through the test. I've tried this but it causes test failures on a test that would otherwise succeed. The command I use to run the test is similar to this: tox -e py27 test_module_name Is there some way to debug single tests that I haven't found? If not, how is everyone doing test development without the ability to debug? Thanks, Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 05/08 14-00 UTC
Let's go over the Rackspace portion of the API comparison tomorrow then, and we can cover Stephen's on the ML when it's complete. On Wed, May 7, 2014 at 4:55 AM, Stephen Balukoff sbaluk...@bluebox.net wrote: Howdy, y'all! I just wanted to give you a quick update: It looks like the Rackspace team is mostly done with their half of the API comparison; however, it is extremely unlikely I'll be able to finish my half of this in time for the team meeting this Thursday. I apologize for this. Stephen On Tue, May 6, 2014 at 1:27 PM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi folks, This will be the last meeting before the summit, so I suggest we focus on the agenda for the two design track slots we have. In my experience design tracks are not very good for in-depth discussion, so it only makes sense to present a road map and some other items that might need core team attention, like interaction with Barbican and such. Another item for the meeting will be the comparison of API proposals, which was an action item from the last meeting. Thanks, Eugene. -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] Debugging tox tests with pdb?
Hey Eric, The keystone team has a workaround for this; not sure if it's applicable to the QA team, but I'll share anyway! It's documented here: http://docs.openstack.org/developer/keystone/developing.html#running-with-pdb In short, there is a new debug tox environment - https://github.com/openstack/keystone/blob/master/tox.ini#L53 Which just kicks off this script - https://github.com/openstack/keystone/blob/master/tools/debug_helper.sh Regards, Steve Martinelli Software Developer - OpenStack Keystone Core Member Phone: 1-905-413-2851 E-mail: steve...@ca.ibm.com 8200 Warden Ave Markham, ON L6G 1C7 Canada From: Pendergrass, Eric eric.pendergr...@hp.com To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org, Date: 05/07/2014 02:17 PM Subject: [openstack-dev] [qa] Debugging tox tests with pdb? Hi, I've read much of the documentation around OpenStack tests, tox, and testr. All I've found indicates debugging can be done, but only by running the entire test suite. I'd like the ability to run a single test module with pdb.set_trace() breakpoints inserted, then step through the test. I've tried this but it causes test failures on a test that would otherwise succeed. The command I use to run the test is similar to this: tox -e py27 test_module_name Is there some way to debug single tests that I haven't found? If not, how is everyone doing test development without the ability to debug? Thanks, Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC
Tomorrow's meeting will be at 1500 UTC in #openstack-meeting-3. The current agenda can be found on the subteam meeting page [1]. We will not hold a meeting next week during the summit. Carl Baldwin Neutron L3 Subteam [1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] [Keystone] [TripleO] Making use of domains by name - policy and API issues?
On Tue, Apr 29, 2014 at 1:25 AM, Robert Collins robe...@robertcollins.net wrote: On 29 April 2014 12:27, Dolph Mathews dolph.math...@gmail.com wrote: Sure: domain names are unambiguous but user-mutable, whereas Heat's approach of using the admin tenant name is at risk of both mutability and ambiguity (in a multi-domain deployment). Isn't domainname/user unambiguous and unique? Yes. Mutability is really not keystone's choice. If keystone won't accept domainname/user then that will force us to either do two stack-updates for a single deploy (ugly) or write patches to heat (and neutron, where the callback-to-nova support has the same issue) to manually try a lookup and work around this. Since it's trivial to write such a thunk, what benefit is there to your users - e.g. TripleO/heat/nova - in not having it in keystone itself? -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] [Keystone] [TripleO] Making use of domains by name - policy and API issues?
Excerpts from Robert Collins's message of 2014-04-28 23:25:02 -0700: On 29 April 2014 12:27, Dolph Mathews dolph.math...@gmail.com wrote: Sure: domain names are unambiguous but user-mutable, whereas Heat's approach of using the admin tenant name is at risk of both mutability and ambiguity (in a multi-domain deployment). Isn't domainname/user unambiguous and unique? Mutability is really not keystone's choice. If keystone won't accept domainname/user then that will force us to either do two stack-updates for a single deploy (ugly) or write patches to heat (and neutron, where the callback-to-nova support has the same issue) to manually try a lookup and work around this. Since it's trivial to write such a thunk, what benefit is there to your users - e.g. TripleO/heat/nova - in not having it in keystone itself? So it sounds like we can drive a change into Keystone. The short version is something like this: Anywhere that accepts a domain ID should also be able to accept a domain name. Anywhere that accepts a user ID should also be able to accept a domain name and user name. This sounds like it has several facets and so is spec-worthy. Anyone disagree? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] Debugging tox tests with pdb?
Hello, Eric. On Wed, May 7, 2014 at 10:15 PM, Pendergrass, Eric eric.pendergr...@hp.com wrote: Hi, I've read much of the documentation around OpenStack tests, tox, and testr. All I've found indicates debugging can be done, but only by running the entire test suite. I'd like the ability to run a single test module with pdb.set_trace() breakpoints inserted, then step through the test. I've tried this but it causes test failures on a test that would otherwise succeed. The command I use to run the test is similar to this: tox -e py27 test_module_name Is there some way to debug single tests that I haven't found? If not, how is everyone doing test development without the ability to debug? You can do it as easily as: .tox/py27/bin/python -m testtools.run test_module_name -- Kind regards, Yuriy. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
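To make Yuriy's suggestion concrete, here is a minimal sketch of a test module (the module and test names are illustrative, not from any real project) set up for interactive debugging. The set_trace() call is left commented out because, as Eric observed, it breaks the test when testr captures stdin/stdout; it works when the module is run directly through testtools.run:

```python
# test_example.py -- illustrative module and test names, not from any
# real project. To debug interactively, run the module outside testr's
# parallel runner, e.g.:
#     .tox/py27/bin/python -m testtools.run test_example
import unittest


class TestExample(unittest.TestCase):
    def test_addition(self):
        total = 1 + 1
        # Uncomment to stop here and inspect `total` in pdb. Under
        # plain `tox -e py27`, testr captures stdin/stdout, which is
        # why set_trace() appears to make a passing test fail.
        # import pdb; pdb.set_trace()
        self.assertEqual(total, 2)
```

Running it directly with the interpreter from the tox virtualenv gives pdb a real terminal to attach to, which is what the parallel testr runner denies it.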
Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 05/08 14-00 UTC
All of our relevant material is in this Google Drive folder: https://drive.google.com/#folders/0B_x8_4x6DRLad1NZMjgyVFhqakU Cheers, --Jorge On 5/7/14 1:19 PM, Kyle Mestery mest...@noironetworks.com wrote: Let's go over the Rackspace portion of the API comparison tomorrow then, and we can cover Stephen's on the ML when it's complete. On Wed, May 7, 2014 at 4:55 AM, Stephen Balukoff sbaluk...@bluebox.net wrote: Howdy, y'all! I just wanted to give you a quick update: It looks like the Rackspace team is mostly done with their half of the API comparison; however, it is extremely unlikely I'll be able to finish my half of this in time for the team meeting this Thursday. I apologize for this. Stephen On Tue, May 6, 2014 at 1:27 PM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi folks, This will be the last meeting before the summit, so I suggest we focus on the agenda for the two design track slots we have. In my experience design tracks are not very good for in-depth discussion, so it only makes sense to present a road map and some other items that might need core team attention, like interaction with Barbican and such. Another item for the meeting will be the comparison of API proposals, which was an action item from the last meeting. Thanks, Eugene. -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [DriverLog][nova][neutron][cinder] Call for vendor participation please
Hi Akihiro, Please see my comments inline. 2014-05-07 10:35 GMT+04:00 Akihiro Motoki mot...@da.jp.nec.com: Hi, Thanks for the effort. While I am looking at the website and the driver database, I have a couple of questions and suggestions. - Is it better to include trunk (= Juno) in releases in each driver if it is a part of the trunk, or to wait until Juno is released? We need some guidelines on this. For drivers that have external CI, the application explicitly converts results for the master branch into the latest release (= Juno), but this is probably a bit misleading since the release doesn't exist yet. I'll give Evgenia and Boris a chance to comment on this. - Which is better as the maintainer email, an individual mail address or the CI account contact address? IMO an individual mail address looks better because the CI account contact address receives all review comments, and in my experience mails to that address can be missed or not noticed soon. It is better to have some guideline on the maintainer email. I'd prefer an individual email too, or (as an option) an alias of the team that supports the driver. - How is the status CI tested determined? I am not sure how it is handled in the Wiki information. Status CI tested means that the driver is tested by the vendor and test results are attached to the gerrit review. Currently DriverLog takes into account votes only, so if a CI doesn't vote (even if it leaves a comment) then it is treated as CI not present. The code is implemented this way because the format of the test result comment is not unified and differs from driver to driver. To specify that a driver has CI, one needs to provide the CI's gerrit id in the ci_id attribute, for example like this: https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L343 - (Related to the above) How does DriverLog handle a case where multiple drivers are tested under one CI account? AFAIK some CI accounts run third-party testing for multiple drivers. It is not handled correctly and is subject to discussion and re-implementation. 
For example, in Neutron the Big Switch CI runs tests against 2 drivers, but sets only 1 vote. It seems the solution may be to parse the comment from the CI. - releases in the drivers section is a list of release names now. It means we need to update releases in every release. I wonder if we can support a [from_release, to_release] style. If to_release is omitted, it means trunk. This is intentional, so that maintainers verify the list of drivers before every release and add the new release only if everything works. Thanks, Akihiro Thanks, Ilya (2014/04/29 2:05), Jay Pipes wrote: Hi Stackers, Mirantis has been collaborating with a number of OpenStack contributors and PTLs for the last couple of months on something called DriverLog. It is an effort to consolidate and display information about the verification of vendor drivers in OpenStack. The current implementation is here: http://staging.stackalytics.com/driverlog/ Public wiki here: https://wiki.openstack.org/wiki/DriverLog Code is here: https://github.com/stackforge/driverlog There is currently a plan by the foundation to publicly announce this in the coming weeks. At this point Evgeniya Shumakher, in cc, is manually maintaining the records, but we aspire for this to become a community-driven process over time, with vendors submitting updates as described in the wiki and PTLs and cores of the respective projects participating in update reviews. A REQUEST: If you are a vendor that has built an OpenStack driver, please check that it is listed on the dashboard and update the record (following the process in the wiki) to make sure the information is accurately reflected. We want to make sure that the data is accurate prior to announcing it to the general public. Also, if anybody has a suggestion on what should be improved / changed, etc., please don't hesitate to share your ideas! Thanks! 
Jay and Evgeniya ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
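Akihiro's [from_release, to_release] idea from the thread above could look something like this sketch. The release list and function are purely illustrative, not the actual DriverLog schema or code:

```python
# Illustrative release names; in this sketch the last entry stands in
# for trunk. This is NOT the real DriverLog schema -- just a sketch of
# the proposed [from_release, to_release] range style.
RELEASES = ["grizzly", "havana", "icehouse", "juno"]

def expand(from_release, to_release=None):
    """Expand a release range into explicit release names.

    Per the suggestion in the thread, an omitted to_release means
    "everything up to and including trunk".
    """
    start = RELEASES.index(from_release)
    end = RELEASES.index(to_release) if to_release else len(RELEASES) - 1
    return RELEASES[start:end + 1]

print(expand("havana"))               # ['havana', 'icehouse', 'juno']
print(expand("grizzly", "icehouse"))  # ['grizzly', 'havana', 'icehouse']
```

The trade-off Ilya raises still applies: with ranges, new releases would be claimed implicitly, whereas the current explicit list forces maintainers to re-verify their drivers at each release.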
Re: [openstack-dev] [Horizon] [UX] Summary of Horizon Usability Testing and plan for Summit session
Hi All, Another topic on the agenda for this session is to talk a bit about creating some guidelines around Error messaging in Horizon, and hopefully this will trickle through other components as needed. I’ve put together some initial thoughts around this topic and want to get it out to the list so folks might have some time to take a look before discussing at the session. If you have any feedback, feel free to respond before summit and I can update this doc. http://people.redhat.com/~lsurette/OpenStack/ImprovingtheUXofMessageswithinHorizon.pdf Thanks, Liz On Apr 30, 2014, at 2:50 PM, Jacki Bauer jacki.ba...@rackspace.com wrote: Hi everyone, As Liz mentioned, we did some testing and one of the big findings was that the Launch Instance form had some usability issues. I took a stab at mocking up a launch instance process that addresses some of these issues. You can read the usability findings here. So, I know some of you will ask about work that is already being done around improving the launch instance form - here and here. That work is represented here too! I took what I felt to be what was best about the current form and best about the new work, addressed the usability issues, and tried to come up with something that wasn’t too different from either of these. If you are interested in any of the design thinking/reasoning behind the mockups, go ahead and keep reading below, otherwise, just take a look at the attachment. Feedback is welcome! Cheers, Jacki Why I did things the way I did: I used a multi-step form for a few reasons. 1-The Horizon people are interested in wizard patterns that could be used for launching instances and other step by step workflows. 2-The current launch instance divides the config options into tabs, but users often didn’t notice the tabs until they tried to launch the instance and got an error. The “*” indicating required fields on each tab confused users as well. 
Since all but one tab contained required fields, the tabs didn't do anything to reduce the number of clicks a user had to make in order to complete the form. A best practice for wizards is to never reveal specific steps to the user if the number or names of those steps can change. So, I settled on four steps. Some users might not want to visit all these steps, and this may be a flaw. Maybe we can think about a way to allow users to skip steps. I decided to stack all fields vertically with labels to the left. I did this because I wanted the layout of form fields to be consistent throughout. This layout is very readable and pretty standard. It saves vertical space too. I changed the network selection to checkboxes. Users thought the drag-and-drop style control was inconsistent with the rest of Horizon. Users really liked the graphs that show their quota when they are selecting a flavor. But this didn't really work with the new layout, so I settled on a different way of presenting the same information. It's not as visually appealing as the graphs were, but it's more flexible: it could be used throughout Horizon to show quota information on demand and in context. I added the ability to create key pairs. This was a big finding from the usability study. I did not add the ability to create a network on the fly. Another big finding from the usability study was that users were frustrated by the fact that they had to have at least one network created to launch an instance. However, we know that creating networks is largely an admin task, not an end-user one. So instead, I made it impossible for a user to open up the launch instance form when there are no networks. There are more items selected for the user by default. Defaults have some advantages: they allow users to progress more quickly through forms. They also reduce the need for complex logic and error messages, because it becomes harder for users to leave something blank. 
The disadvantage to defaults is that the user may not notice a field. Since there will be no validation message forcing their attention onto the field, they might wonder how their instance came to have the configuration that it does. Defaults could be problematic for admins who do not want to encourage users to select a pre-defined default. It might mean that we need to allow admins to specify these default values for their environment. The flavor selection has been expanded from a drop down to a list with details about the resources associated with each flavor, because we have some indication that users want more details about flavors when they are selecting them. In the current launch instance form, flavor information is available but it’s somewhat disconnected from the flavor selector itself. On 04/24/2014 09:10 AM, Liz Blanchard wrote: Hi All, One of the sessions that I proposed for the Horizon track is to review the results that we got from the Usability Test
Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey
6 people have completed the survey so far. From: Samuel Bercovici Sent: Tuesday, May 06, 2014 10:56 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi Everyone, The survey is now live via: http://eSurv.org?u=lbaas_project_user The password is: lbaas The survey includes all the tenant-facing use cases from https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing Please try and fill in the survey this week so we can have enough information to base decisions on next week. Regards, -Sam. From: Samuel Bercovici Sent: Monday, May 05, 2014 4:52 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi, I will not freeze the document, to allow people to work on requirements which are not tenant facing (ex: operator, etc.). I think that we have enough use cases for tenant-facing capabilities to reflect the most common use cases. I am in the process of creating a survey in SurveyMonkey for the tenant-facing use cases and hope to send it to the ML ASAP. Regards, -Sam. From: Samuel Bercovici Sent: Thursday, May 01, 2014 8:40 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Samuel Bercovici Subject: [openstack-dev] [Neutron][LBaaS] User Stories and survey Hi Everyone! To assist in evaluating the use cases that matter, and since we now have ~45 use cases, I would like to propose conducting a survey using something like SurveyMonkey. The idea is to have a non-anonymous survey listing the use cases and asking you to identify and vote. Then we will publish the results and can prioritize based on them. To do so in a timely manner, I would like to freeze the document for editing (allowing only comments) by Monday May 5th 08:00 AM UTC and publish the survey link to the ML ASAP after that. Please let me know if this is acceptable. Regards, -Sam. 
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] bug triage day after summit
OK, there are no objections so far, so let's approve the date at tomorrow's meeting. On Tue, May 6, 2014 at 2:36 PM, Michael McCune mimcc...@redhat.com wrote: +1 - Original Message - Hey sahara folks, let's make a Bug Triage Day after the summit. I'm proposing May 26 for it. Any thoughts/objections? Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [sahara] team meeting May 8 1800 UTC
Hi folks, We'll be having the Sahara team meeting as usual in the #openstack-meeting-alt channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_May.2C_8 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140508T18 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] bug triage day after summit
Jay, yeah, it's always good to know the actual status of the project at the beginning of the dev cycle :) On Tue, May 6, 2014 at 1:24 PM, Jay Pipes jaypi...@gmail.com wrote: ++ You've got my axe. On 05/06/2014 03:13 PM, Sergey Lukjanov wrote: Hey sahara folks, let's make a Bug Triage Day after the summit. I'm proposing May 26 for it. Any thoughts/objections? Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project
Mohammad, There are presently no published papers that I can point to. The primary problem with my thesis at this point is that I am almost done, but still have about one or two more sections to go before completing it. The major issue is more in line with the citations (I haven't fully and properly completed them yet), and thus, for legal reasons, I can't publish it out in the open just yet. However, I can certainly provide more background info. What I am primarily researching is how to make Infrastructure as a Service (IaaS) cloud platforms more secure by creating an adaptive firewall that resides on the KVM host. (For my thesis, I examine KVM specifically, but the same principles in my research can be applied equally to Xen virtual machines. This is shown in video #5 in the first message.) This relies upon the premise that the guest domain is administered by people outside of the hosting organization. For example, say I lease/rent/own (or whatever terminology is applicable) a VM from a cloud provider. I am the administrator of the VM, and I can install and configure the VM as I see fit. The cloud operator is responsible for maintaining the infrastructure. Because I am the administrator, I could make poor choices regarding the security of my VM. For example, I could disable the firewall, not install patches, or have an incredibly weak password, to name a few. How can the cloud provider adequately protect the network resources of the adjacent virtual machines (VMs operating on the same host), and the greater network infrastructure? It's hard to administer a system that is controlled by people you do not know, and have no control over. Thus, my research aims to improve the network security of these architectures by examining the firewall that exists on the KVM host, and making it adaptable. 
To make the host firewall adaptive, the netfilter/iptables firewall on the KVM host performs high-level firewalling for the guest domains. The advantage of this approach is that the KVM administrator can create and enforce policies for what is, or is not, acceptable on the network. Furthermore, this happens outside of the actual guest domain itself. This means that the KVM administrator need not worry about which OS a guest domain is running, or how a guest OS is even configured. Because these rules are processed on the host, __it does not matter__ what the guest domain does. Additionally, the guest domain has no knowledge that this happens, since it happens outside of the guest's scope; it happens on the host. This is also helpful in the event that a rootkit is installed on a guest VM. Because no change is made to the guest OS itself, rootkits won't detect changes in system configuration (the network behavior would obviously change, as traffic is firewalled, but the OS itself remains the same). Therefore, the rootkit will act normally, which can make it easier to detect. This is important, since some malicious applications can detect configuration changes, and will alter their behavior to evade detection. By relying upon the host, the guest is completely untouched. Effectively, you can jail a guest VM's network capabilities with the host's firewall. However, netfilter by itself runs local to the system. Thus, to make it powerful enough that future appliances can dynamically and automatically create and enforce firewall rules, I created a web service to expose the iptables functionality to outside systems via a RESTful API. This is effectively a Firewall as a Service. This allows, for example, vulnerability scanners to probe the network for vulnerabilities in the infrastructure, and dynamically create firewall rules on the VM's host to close them. 
This happens without making a single modification to the guest OS itself; thus, you are preserving the integrity of the guest OS. Therefore, by exposing the firewall's capability, the network infrastructure can be better secured, with organizational network policies actually enforceable on the guest domains. I began the research in November/December of 2012, and had a rough working prototype by the end of January, 2013. I noticed that the OpenStack community started examining the FWaaS concept in Feb 2013, and it is refreshing to see that a lot of the decisions I made with regard to the organization of my API (which is in NO WAY a production-quality, defined API) were actually correct and organized similarly to the OpenStack FWaaS API. So far, during my research, I have seen no other FWaaS-type components out there. The OpenStack project is the only one that I have seen that is actually attempting to expose the firewall capabilities of the host system with an extensible RESTful API. However, it appears from the documentation that OpenStack is looking at this from the perspective of administering large sets of VMs, whereas I am looking at it from the perspective of policy enforcement and closing network vulnerabilities.
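To make the host-side approach described above concrete, here is a small illustrative sketch in Python. This is NOT code from the thesis or from Neutron FWaaS; the function name and rule shape are hypothetical. It shows the general pattern: a host-resident service translates a high-level policy description into an iptables rule for the guest's bridged interface, applied entirely on the KVM host's FORWARD chain so the guest OS is never touched.

```python
# Illustrative sketch only -- not thesis code or Neutron FWaaS code.
# A host-side service translates high-level policy into netfilter/iptables
# rules for a guest's traffic, enforced on the host, invisibly to the guest.

def build_iptables_rule(guest_iface, action, protocol=None, dport=None):
    """Build an iptables command argument list for a guest's bridged interface.

    The rule is appended to the host's FORWARD chain using the physdev match,
    so enforcement happens outside the guest domain's scope entirely.
    """
    if action not in ("ACCEPT", "DROP", "REJECT"):
        raise ValueError("unsupported action: %s" % action)
    cmd = ["iptables", "-A", "FORWARD", "-m", "physdev",
           "--physdev-in", guest_iface]
    if protocol:
        cmd += ["-p", protocol]      # -p must precede --dport
    if dport is not None:
        cmd += ["--dport", str(dport)]
    cmd += ["-j", action]
    return cmd

# A REST layer (the "Firewall as a Service" part) would map a request like
#   POST /firewall/rules  {"iface": "vnet0", "action": "DROP", ...}
# onto this function and then execute the command via subprocess.
print(build_iptables_rule("vnet0", "DROP", protocol="tcp", dport=23))
```

A vulnerability scanner could drive such an endpoint to close a discovered hole (e.g. blocking telnet on port 23, as above) without ever logging into the guest.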
Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPNaaS] Advanced Services (particularly LBaaS) and Neutron
On May 7, 2014, at 7:45 AM, Susanne Balle sleipnir...@gmail.com wrote: Based on discussions with Rackspace, Mirantis, and others it is clear that the advanced services (i.e. LBaaS) in Neutron are not getting the attention and the support to move forward and create a first-in-class load-balancer service from a service provider or operator's perspective. We currently have a lot of momentum and energy behind the LBaaS effort but are being told that the focus for Neutron is bug fixing, given the instability in Neutron itself. While the latter is totally understandable as a high priority for Neutron, it leaves the advanced services out in the cold. The only reason the advanced services team would be out in the cold is that members actively choose not to participate in the broader community. The code, bugs, and reviews are open, so contributing more dev resources to the stability fixes would have improved the velocity of stability improvements. Some of the background reasoning for suggesting this is available at: https://etherpad.openstack.org/p/AdvancedServices_and_Neutron I’ve added a comment there to provide additional information, but I do not think splitting these out is in the best interest of the community because of the logistical and technical problems it creates. I’d rather see deeper participation in the broader Neutron community. Hope to see you there to discuss how we best make sure that the advanced services can support the many companies that rely on LBaaS or other advanced services for large-scale deployment. Look forward to discussing this, mark ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Programs] Client Tools program discussion
On Wed, May 7, 2014 at 8:32 AM, Brian Curtin br...@python.org wrote: On Wed, May 7, 2014 at 7:38 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 6, 2014 at 5:45 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, May 6, 2014 at 6:54 AM, Dean Troyer dtro...@gmail.com wrote: On Tue, May 6, 2014 at 7:02 AM, Thierry Carrez thie...@openstack.org wrote: Would you take over the Python client libraries as well? On one hand they need /some/ domain expertise, but on the other I see no reason to special-case Python against other SDKs, and that may give the libraries a bit more attention and convergence (they are currently the ugly stepchild in some programs, and vary a lot). The future of the existing client libs has not been settled; my working assumption is that they would remain with their home programs as they are now. From the start OpenStackClient was meant to be a clean slate for the CLI, and the Python SDK is taking the same basic approach. Very excited for the OpenStackClient, it is already way nicer than the existing clients. Just working this out in my head. So the workflow would be: 1. At first ClientTools consists of just the OpenStackClient 2. When the Python SDK is ready to move off of stackforge, it will live in ClientTools 3. Specific python-*clients will be rewritten (from scratch?) to use the Python SDK. But this time they won't have a built-in CLI. These libraries will live alongside the respective servers (so nova's python-novaclient will live in Compute)? All while moving OpenStackClient to the new libraries. Is that what you are proposing? My understanding is that the SDK aims to be a ground-up replacement for the existing disparate client libraries. Whether that replacement is appropriate for use inside OpenStack may be up for debate (I think I remember someone saying that wasn't necessarily a goal, with the focus being on end users, but I haven't been able to attend many of the meetings so my information may be out of date). 
Ideally the python-openstacksdk becomes the one-stop shop for interacting with OpenStack as an OpenStack contributor, an operator, an end user of an OpenStack cloud, etc. If you're writing Python code to work with OpenStack, that would be the place to go for code, tools, examples, and documentation. Cool, that is even better. So then step 3 would be: * Each project can continue maintaining their existing python-*client or just deprecate it in favor of what ClientTools will have. If so, that sounds great. Would client tools be limited to only a Python SDK, or in the future could it potentially have other languages? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Consuming keystoneclient's Session object in novaclient
On Tue, May 6, 2014 at 3:22 PM, Jamie Lennox jamielen...@redhat.com wrote: All, TL;DR: novaclient should be able to use the common transport/auth layers of keystoneclient. If it does, there are going to be functions like client.authenticate() that won't operate the same way when a session object is passed. For most users who just use the CRUD operations there will be no difference. I'm hoping that at least some of the nova community has heard of the push for using keystoneclient's Session object across all the clients. For those unaware, keystoneclient.session.Session is a common transport and authentication layer that removes the need for each python-*client to have its own authentication configuration and disparate transport options. It offers: - a single place for updates to transport (e.g. fixing TLS or other transport issues in one place) - a way for all clients to immediately support the full range of keystone's authentication, including v3 auth, SAML, kerberos, etc. - a common place to handle version discovery, such that we support multiple version endpoints from the same service catalog endpoint. For information on how to interact with a session, see: http://www.jamielennox.net/blog/2014/02/24/client-session-objects/ This mentions the code is uncommitted; however, it has since been committed, with a few small details around parameter names being changed. The theory remains the same. Integrating this into novaclient means that if a session= object is passed, then the standard HTTPClient code will be ignored in favour of using what was passed. This means that there are changes in the API of the client. In keystoneclient we have taken the position that by passing a session object you opt in to the newer API and therefore accept that some functions are no longer available. For example, client.authenticate() is no longer allowed, because authentication is not the client's responsibility. 
It will have no impact on the standard novaclient CRUD operations, and so will go un-noticed by the vast majority of users. The review showing these changes is here: https://review.openstack.org/#/c/85920 To enable this, there is a series of test changes to mock client requests at the HTTP layer rather than in the client. This means that we can test all client operations against both the new and old client construction methods and ensure the same requests are being sent. The foundation of this work to turn tests into fixtures can be found by following: https://blueprints.launchpad.net/python-novaclient/+spec/httpretty-testing IMO making these tests into fixtures is a good idea anyway; however, I am only pursuing it so that we can transition to using a common Session. Regarding dependencies, novaclient will need a test-requirements.txt entry for keystoneclient so that it can construct Session objects to test with, but it should not need a requirements.txt entry, as the session object is constructed by the user of the client (e.g. openstackclient, horizon, etc.). Can we make novaclient use keystoneclient's session by default? And just add this to requirements. If there are concerns with this process please respond here and/or on the review. Thanks, Jamie ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
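The opt-in behavior Jamie describes can be sketched with a minimal, self-contained toy. This is NOT actual novaclient or keystoneclient code (the class names here are stand-ins); it only illustrates the shape of the API change: passing session= makes the session responsible for transport/auth, the client's built-in HTTPClient is bypassed, and legacy methods like authenticate() stop being the client's job.

```python
# Toy sketch of the session= opt-in pattern -- class names are stand-ins,
# not real novaclient/keystoneclient classes.

class LegacyHTTPClient:
    """Stands in for the client's built-in transport/auth layer."""
    def request(self, url):
        return "legacy transport: %s" % url

class Session:
    """Stands in for keystoneclient.session.Session."""
    def request(self, url):
        return "session transport: %s" % url

class Client:
    def __init__(self, session=None):
        # Passing a session opts in to the new API: transport and auth
        # become the session's responsibility, not the client's.
        self._session = session
        self._http = None if session else LegacyHTTPClient()

    def authenticate(self):
        if self._session is not None:
            raise NotImplementedError(
                "authentication is handled by the session, not the client")
        return "authenticated via legacy HTTPClient"

    def get(self, url):
        # CRUD operations work identically either way, so most users
        # never notice which transport sits underneath.
        transport = self._session or self._http
        return transport.request(url)

old = Client()                      # legacy construction, unchanged behavior
new = Client(session=Session())     # opts in to the shared-session API
print(old.get("/servers"))          # legacy transport: /servers
print(new.get("/servers"))          # session transport: /servers
```

The design choice mirrored here is that CRUD calls are transport-agnostic, so the breaking change is confined to the small set of auth-management methods that never belonged on the client once a shared session exists.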
Re: [openstack-dev] Plugin Mechanism for Openstack Components to provide HA
On Wed, May 7, 2014 at 1:18 AM, Maish Saidel-Keesing (msaidelk) msaid...@cisco.com wrote: I was wondering if any work has been done on developing a standard plugin mechanism to provide HA to the OpenStack components. Let me try and explain what I mean. Today there is a certain degree to which the components can be made to work in either an active/active or active/passive fashion, but they differ by component: Galera for MySQL, for example, while RabbitMQ is already multi-node. The rest of the components are usually put behind HAProxy. But with every new component added (for example Heat, Ceilometer, Trove and many more to come), ideally (for me at least) the best way would be that, the same way you install the package through yum/apt, part of this package could have a plugin to a central HA component with all the information needed to make this component highly available. Am I barking up the wrong tree? With a plethora of quality HA solutions, OpenStack doesn't want to re-implement HAProxy or Galera; instead we want a standard way to deploy them. This is part of what https://wiki.openstack.org/wiki/TripleO is trying to do. With best regards, Maish Saidel-Keesing Platform Architect SPVSS msaid...@cisco.com Phone: +972-2-5886103 Mobile: +972542206103 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Consuming keystoneclient's Session object in novaclient
On 05/07/2014 03:10 PM, Joe Gordon wrote: On Tue, May 6, 2014 at 3:22 PM, Jamie Lennox jamielen...@redhat.com wrote: All, TL;DR: novaclient should be able to use the common transport/auth layers of keystoneclient. If it does, there are going to be functions like client.authenticate() that won't operate the same way when a session object is passed. For most users who just use the CRUD operations there will be no difference. I'm hoping that at least some of the nova community has heard of the push for using keystoneclient's Session object across all the clients. For those unaware, keystoneclient.session.Session is a common transport and authentication layer that removes the need for each python-*client to have its own authentication configuration and disparate transport options. It offers: - a single place for updates to transport (e.g. fixing TLS or other transport issues in one place) - a way for all clients to immediately support the full range of keystone's authentication, including v3 auth, SAML, kerberos, etc. - a common place to handle version discovery, such that we support multiple version endpoints from the same service catalog endpoint. For information on how to interact with a session, see: http://www.jamielennox.net/blog/2014/02/24/client-session-objects/ This mentions the code is uncommitted; however, it has since been committed, with a few small details around parameter names being changed. The theory remains the same. Integrating this into novaclient means that if a session= object is passed, then the standard HTTPClient code will be ignored in favour of using what was passed. This means that there are changes in the API of the client. In keystoneclient we have taken the position that by passing a session object you opt in to the newer API and therefore accept that some functions are no longer available. For example, client.authenticate() is no longer allowed, because authentication is not the client's responsibility. 
It will have no impact on the standard novaclient CRUD operations, and so will go un-noticed by the vast majority of users. The review showing these changes is here: https://review.openstack.org/#/c/85920 To enable this, there is a series of test changes to mock client requests at the HTTP layer rather than in the client. This means that we can test all client operations against both the new and old client construction methods and ensure the same requests are being sent. The foundation of this work to turn tests into fixtures can be found by following: https://blueprints.launchpad.net/python-novaclient/+spec/httpretty-testing IMO making these tests into fixtures is a good idea anyway; however, I am only pursuing it so that we can transition to using a common Session. Regarding dependencies, novaclient will need a test-requirements.txt entry for keystoneclient so that it can construct Session objects to test with, but it should not need a requirements.txt entry, as the session object is constructed by the user of the client (e.g. openstackclient, horizon, etc.). Can we make novaclient use keystoneclient's session by default? And just add this to requirements. ++ Once it's supported, I would think that someone wanting to use novaclient _without_ keystoneclient should be seen as the exception case and not the normal case. If there are concerns with this process please respond here and/or on the review. Thanks, Jamie ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Programs] Client Tools program discussion
On Wed, May 7, 2014 at 5:05 PM, Joe Gordon joe.gord...@gmail.com wrote: Would client tools be limited to only a Python SDK, or in the future could it potentially have other languages? There are a handful of other language projects active on Stackforge that have expressed interest in joining the program when they reach some state of maturity. That is a big part of the reason for structuring things the way we have. If any were ready today they would be included in the initial proposal. https://git.openstack.org/cgit/stackforge/python-openstacksdk https://git.openstack.org/cgit/stackforge/openstack-sdk-php/ https://git.openstack.org/cgit/stackforge/openstack-sdk-dotnet/ https://git.openstack.org/cgit/stackforge/golang-client/ https://git.openstack.org/cgit/stackforge/openstack-cli-powershell/ dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [OSSG][OSSN] Some versions of Glance do not apply property protections as expected
Some versions of Glance do not apply property protections as expected
---

### Summary ###
Tom Leaman reported an issue to the OpenStack mailing list that affects Glance property protections. A permissive property setting in the Glance property protections configuration file will override any previously set stricter ones.

### Affected Services / Software ###
Glance, Folsom, Grizzly

### Discussion ###
Glance property protections limit the users who can perform CRUD operations on a Glance property to those in specific roles. If there is a specific rule that would reject an action and a less specific rule that comes after it that accepts the action, then the action is accepted, even though one may expect it to be rejected. This bug only affects the use of user roles in Glance. It does not occur when policies are used to determine property protections. In the following property-protections.conf example, the desired result is to restrict 'update' and 'delete' permissions for 'foo_property' to only users with the 'admin' role.

--- Begin Example ---
/etc/glance/property-protections.conf

[^foo_property$]
create = @
read = @
update = admin
delete = admin

[.*]
create = @
read = @
update = @
delete = @
--- End Example ---

Due to the order in which the rules are applied in the Folsom and Grizzly OpenStack releases, the admin restriction for 'foo_property' is nullified by the '.*' permissions. This results in all roles being allowed the 'update' and 'delete' permissions on 'foo_property', which is not what was intended.

### Recommended Actions ###
This issue has been fixed in Havana (Glance 2013.2.2) and subsequent releases. Users of affected releases should review and reorder the entries in property-protections.conf to place the most open permissions at the start of the configuration and more restrictive ones at the end, as demonstrated below. 
--- Begin Example ---
/etc/glance/property-protections.conf

[.*]
create = @
read = @
update = @
delete = @

[^foo_property$]
create = @
read = @
update = admin
delete = admin
--- End Example ---

In the above example, the '.*' and 'foo_property' entries in the protections file have been reversed, ensuring that the more restrictive permissions required for 'foo_property' are applied after the wider '.*' permissions, and assuring that 'update' and 'delete' operations are restricted to only users in the 'admin' role. Configuration files with multiple property protection entries set should be tested to ensure that CRUD actions are constrained in the way the administrator intended.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0013
Original Launchpad Bug : https://bugs.launchpad.net/glance/+bug/1271426
Original Report : http://lists.openstack.org/pipermail/openstack-dev/2014-January/024861.html
Glance Property Protections : https://wiki.openstack.org/wiki/Glance-property-protections
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
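The ordering-sensitive behavior described in this advisory can be illustrated with a toy model. This is NOT Glance code; it is a hypothetical evaluator that assumes last-matching-section-wins semantics, which is consistent with the workaround above (placing open permissions first and restrictive ones last makes the restrictive section the final word).

```python
# Toy model (not Glance code) of why rule ordering matters when a later,
# broader match overrides an earlier, stricter one.
import re

def allowed(rules, prop, operation, role):
    """rules: ordered list of (pattern, {operation: set(roles) or '@'}).

    Under the modeled (affected) behavior, the LAST matching section wins,
    so a trailing catch-all section nullifies earlier restrictions.
    '@' means "any role", as in the property-protections.conf syntax.
    """
    decision = False
    for pattern, perms in rules:
        if re.search(pattern, prop):
            roles = perms.get(operation, set())
            decision = (roles == "@") or (role in roles)
    return decision

# Restrictive section first, catch-all last: the '.*' section overrides
# the admin-only restriction -- this is the misconfiguration in the OSSN.
bad_order = [
    (r"^foo_property$", {"update": {"admin"}}),
    (r".*",             {"update": "@"}),
]
# Workaround from the OSSN: open permissions first, restrictive last.
good_order = list(reversed(bad_order))

print(allowed(bad_order,  "foo_property", "update", "member"))  # True (bug)
print(allowed(good_order, "foo_property", "update", "member"))  # False
print(allowed(good_order, "foo_property", "update", "admin"))   # True
```

Running such a check against a proposed ordering is one cheap way to test, as the advisory recommends, that CRUD actions are actually constrained the way the administrator intended.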