Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?
I wonder whether your neutron server code includes the VPNaaS integration with the service type framework change at https://review.openstack.org/#/c/41827/21 . If not, the service_provider option is useless; you need to include that change before developing your own driver.

Q&A (in my opinion, and something may be missing):

- What is the difference between service drivers and device drivers? Service drivers are driven by the VPN service plugin and are responsible for casting RPC requests (CRUD of vpnservices) to, and handling callbacks from, the VPN agent. Device drivers are driven by the VPN agent and are responsible for implementing the specific VPN operations and reporting VPN running status.

- Could I implement only one of them? The device driver must be implemented for your own device. Unless the default ipsec service driver is definitely appropriate, I suggest you implement both of them. After including the VPNaaS integration with the service type framework, the service driver work is simple.

- Where do I need to put my Python implementation in my OpenStack instance? Do you mean making your instance run your new code? The default source code dir is /opt/stack/neutron; you need to put your new changes into that dir and restart the neutron server.

- How could I configure my OpenStack instance to use this implementation?
1. Add your new code into the source dir.
2. Add an appropriate vpnaas service_provider entry to neutron.conf and an appropriate vpn_device_driver option to vpn_agent.ini.
3. Restart n-svc and q-vpn.

Hope this helps.

----- Original Message -----
From: Julio Carlos Barrera Juez juliocarlos.barr...@i2cat.net
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Monday, February 17, 2014 7:18:44 PM
Subject: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

Hi. I have asked on the QA website without success ( https://ask.openstack.org/en/question/12072/how-to-implement-and-configure-a-new-vpnaas-driver-from-scratch/ ).
I want to develop a vpnaas implementation. It seems that since Havana there are plugins, services and device implementations. I like the plugin and its current API, so I don't need to reimplement it. Now I want to implement a vpnaas driver, and I see I have two main parts to take into account: the service_drivers and the device_drivers. The IPsec/OpenSwan implementation is the only sample I've found. I'm using devstack to test my experiments. I tried to implement a VpnDriver Python class extending the main API methods like IPsecVPNDriver does. I placed basic implementation files at the same level as the IPsec/OpenSwan ones and configured Neutron by adding this line to the /etc/neutron/neutron.conf file:

service_provider = VPN:VPNaaS:neutron.services.vpn.service_drivers.our_python_filename.OurClassName:default

I restarted the Neutron-related services in my devstack instance, but it seemed it didn't work.

- What is the difference between service drivers and device drivers?
- Could I implement only one of them?
- Where do I need to put my Python implementation in my OpenStack instance?
- How could I configure my OpenStack instance to use this implementation?

I found almost no documentation about these topics. Thank you very much.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
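A hedged sketch of the configuration steps from the reply above. The service driver class path below is the question's own placeholder, and the device driver line uses the stock in-tree OpenSwan driver; the section names follow the Havana-era file layout, so verify them against your tree:

```ini
# /etc/neutron/neutron.conf
[service_providers]
# format: <service_type>:<name>:<driver_class>[:default]
service_provider = VPN:VPNaaS:neutron.services.vpn.service_drivers.our_python_filename.OurClassName:default

# /etc/neutron/vpn_agent.ini
[vpnagent]
# device driver loaded by the VPN agent; replace with your own
# device driver class if the default does not fit your device
vpn_device_driver = neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver
```

After editing both files, restart the neutron server and the VPN agent so the new driver classes are loaded.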
Re: [openstack-dev] [Neutron][LBaaS] L7 - Update L7Policy
Hi, It matters, as someone might need to debug the backend setup, and the name, if it exists, can add details. This is obviously a vendor's choice if they wish to push this back to the backend, but the API should not remove this as a choice. -Sam.

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Monday, February 17, 2014 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 - Update L7Policy

Folks, I wonder why using names on the backend could be needed? For the code there is not much difference between the name and the id. Thanks, Eugene.

On Mon, Feb 17, 2014 at 1:57 PM, Samuel Bercovici samu...@radware.com wrote: Hi, My concern is that if for some reason the driver implementer would like to reflect the name also in the backend device, then an update should also be calling the driver. Using readable names also makes sense on the back-end device. -Sam.

From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: Monday, February 17, 2014 11:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 - Update L7Policy

Hi, On Mon, Feb 17, 2014 at 1:23 PM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi Avishay, 1) I think name might be useful. Consider a user who forms a list of rules which route requests to a pool with static images or static pages; it may make sense to give those policies names like 'static-images' and 'static-pages', rather than operate on ids. +1 2) I think updating the name is useful as well, but just at the DB level, so there is no point in calling the driver and communicating with the backend. +1 Thanks, Eugene.

On Mon, Feb 17, 2014 at 12:58 PM, Avishay Balderman avish...@radware.com wrote: Hi, L7Policy holds a list of L7 rules plus one attribute: name. Questions: 1) Do we need to have this name attribute?
2) Do we want to allow an update operation on the name attribute? Do we need to invoke the driver when such an update occurs? Thanks, Avishay
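A minimal sketch of the behaviour Eugene and Oleg are +1'ing: persist every change to the DB, but only call the backend driver when something other than the name changed. All names here are hypothetical, not the actual Neutron LBaaS code:

```python
# Hypothetical plugin-level update: the DB write always happens, but a
# name-only edit never reaches the vendor driver / backend device.

DRIVER_RELEVANT = {"action", "rules", "pool_id"}  # illustrative field set


class FakeDB:
    """Stand-in for the plugin's DB layer."""
    def update_policy(self, policy_id, changes):
        return {"id": policy_id, **changes}


class FakeDriver:
    """Stand-in for a vendor driver."""
    def __init__(self):
        self.called = False

    def update_l7policy(self, policy):
        self.called = True


def update_l7policy(db, driver, policy_id, changes):
    policy = db.update_policy(policy_id, changes)  # always persist
    if DRIVER_RELEVANT & set(changes):             # name-only edits skip backend
        driver.update_l7policy(policy)
    return policy


db, drv = FakeDB(), FakeDriver()
update_l7policy(db, drv, "p1", {"name": "static-images"})
print(drv.called)  # -> False: driver skipped for a name-only update
```

A vendor that does want the name reflected on the device (Sam's point) would simply include "name" in its own driver-relevant field set.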
Re: [openstack-dev] [keystone] Centralized policy rules and quotas
On Fri, Feb 7, 2014 at 4:46 AM, Raildo Mascena rail...@gmail.com wrote: Hello, Currently there is a blueprint for a new domain quota driver which is waiting for approval, but which is already implemented. I believe it is worth checking out. https://blueprints.launchpad.net/nova/+spec/domain-quota-driver If you have any questions, I am available. Regards, Raildo Mascena

Hi Raildo, Is this domain quota driver code now available to review? I'm asking because the work items in the blueprint[1] have already been marked as done, and there is also some relevant work (quotas for domains) mentioned by Vinod in another thread, but I found the domain quota driver code nowhere. I'd appreciate it if you could share some pointers.

[1] https://blueprints.launchpad.net/nova/+spec/domain-quota-driver

Thanks, -- Qiu Yu
Re: [openstack-dev] [keystone] Centralized policy rules and quotas
Dear Qiu Yu, The domain quota driver, as well as the APIs to access it, is available. Please check the following:

BluePrint - https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
Wiki Page - https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
GitHub Code - https://github.com/vinodkumarboppanna/DomainQuotaAPIs

The APIs are implemented in the above code. Currently, I am working to add options for accessing domain quotas on the command line (e.g. nova domain-quota-show --domain domain_id --tenant tenant_id --user user_id).

Thanks & Regards, Vinod Kumar Boppanna

From: Qiu Yu [unic...@gmail.com]
Sent: 18 February 2014 09:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Vinod Kumar Boppanna; tiago.mart...@hp.com
Subject: Re: [openstack-dev] [keystone] Centralized policy rules and quotas

On Fri, Feb 7, 2014 at 4:46 AM, Raildo Mascena rail...@gmail.com wrote: Hello, Currently there is a blueprint for a new domain quota driver which is waiting for approval, but which is already implemented. I believe it is worth checking out. https://blueprints.launchpad.net/nova/+spec/domain-quota-driver If you have any questions, I am available. Regards, Raildo Mascena

Hi Raildo, Is this domain quota driver code now available to review? I'm asking because the work items in the blueprint[1] have already been marked as done, and there is also some relevant work (quotas for domains) mentioned by Vinod in another thread, but I found the domain quota driver code nowhere. I'd appreciate it if you could share some pointers.

[1] https://blueprints.launchpad.net/nova/+spec/domain-quota-driver

Thanks, -- Qiu Yu
[openstack-dev] [Neutron] port forwarding from gateway to internal hosts
Hi, What's the status of this BP?

https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding

Will it be ready for I-3? The BP is approved but the patch is abandoned.

Regards, Yair
[openstack-dev] [GSoC] Participate as participant
Hello everyone, I am RobberPhex, a junior at Donghua University (Shanghai, China). I want to participate in GSoC this year as a student. I know that OpenStack is a potential org in GSoC, so I decided to participate. I am a student majoring in software engineering. In August 2012 I first touched OpenStack; after that, I learned OpenStack (including KVM and Python). In December 2013 I deployed OpenStack on a server. In preparation, over this winter vacation I read a book about Python and tried to write Python programs (on GitHub). But I cannot decide which (sub)project to participate in. If a mentor could guide me through an easy- or medium-difficulty project in OpenStack, I would be very grateful. BTW, I have added my name to the OpenStack GSoC 2014 page. Thanks. -- Regards, RobberPhex About me: http://about.me/RobberPhex
Re: [openstack-dev] [GSoC] Participate as participant
And, should I CC this mail to the OpenStack mailing list?

On Tue, Feb 18, 2014 at 5:14 PM, Robber Phex robberp...@gmail.com wrote: Hello everyone, I am RobberPhex, a junior at Donghua University (Shanghai, China). I want to participate in GSoC this year as a student. I know that OpenStack is a potential org in GSoC, so I decided to participate. I am a student majoring in software engineering. In August 2012 I first touched OpenStack; after that, I learned OpenStack (including KVM and Python). In December 2013 I deployed OpenStack on a server. In preparation, over this winter vacation I read a book about Python and tried to write Python programs (on GitHub). But I cannot decide which (sub)project to participate in. If a mentor could guide me through an easy- or medium-difficulty project in OpenStack, I would be very grateful. BTW, I have added my name to the OpenStack GSoC 2014 page. Thanks. -- Regards, RobberPhex About me: http://about.me/RobberPhex

-- Regards, RobberPhex About me: http://about.me/RobberPhex
Re: [openstack-dev] [Tripleo]help needed - nodepool and zuul support for ci-overcloud
On 17/02/14 23:43, Robert Collins wrote: So we experimented with the ci-overcloud (a TripleO-deployed cloud for running CI for TripleO) and it uncovered some bugs/limitations in nodepool and zuul. We need to fix these to reduce our negative impact on the openstack-infra team before they'll be comfortable having the ci-overcloud back in rotation... and certainly before we can start checking / gating (because the reality is, until we're truly stable, things will go wrong).

https://bugs.launchpad.net/openstack-ci/+bug/1281319

I'll take a look at this one.

https://bugs.launchpad.net/openstack-ci/+bug/1281320

One bug is to teach nodepool to start up even if a provider can't be contacted. The other is to have zuul time out jobs that don't start at all. Both should be small - 1 day of dev for a cold start on the project. Pure Python. -Rob
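The nodepool fix described above (start up even if a provider can't be contacted) amounts to making provider initialization fault-tolerant. A generic, hypothetical sketch of the idea, not the project's actual code:

```python
import logging

log = logging.getLogger("startup")


def connect_providers(providers, connect):
    """Attempt to connect every configured provider; log and skip any
    that are unreachable instead of aborting startup, so the service
    can come up and retry the failed providers later."""
    active = {}
    for name, cfg in providers.items():
        try:
            active[name] = connect(cfg)
        except Exception as exc:
            log.warning("provider %s unreachable, skipping: %s", name, exc)
    return active


def fake_connect(cfg):
    """Illustrative connector that fails for one provider."""
    if cfg == "down":
        raise RuntimeError("connection refused")
    return cfg


print(connect_providers({"cloud-a": "ok", "cloud-b": "down"}, fake_connect))
# -> {'cloud-a': 'ok'}
```

The zuul half of the fix is the mirror image on the consumer side: a timeout around "job never started" rather than blocking forever.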
Re: [openstack-dev] [Nova] libvirt doesn't migrate cdrom devices?
On Mon, Feb 17, 2014 at 11:34:46PM -0700, Michael Still wrote: Hi. For the last day or so I've been chasing https://bugs.launchpad.net/nova/+bug/1246201, and I think I've found the problem... libvirt doesn't migrate devices of type cdrom, even if they're virtual. If I change the type of the config drive to disk, then block migration works just fine. Does anyone know if this is a bug in libvirt? How would people feel about me changing the type of config drives from cdrom to disk, which is something that other hypervisors already do?

Be good to know what the reason was for it being set as a cdrom in the first place, before we change it to disk.

Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [keystone] Centralized policy rules and quotas
On Tue, Feb 18, 2014 at 4:59 PM, Vinod Kumar Boppanna vinod.kumar.boppa...@cern.ch wrote: Dear Qiu Yu, The domain quota driver, as well as the APIs to access it, is available. Please check the following:
BluePrint - https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
Wiki Page - https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
GitHub Code - https://github.com/vinodkumarboppanna/DomainQuotaAPIs

Vinod, Thank you for sharing. I did try to dig up in your repo before sending the last email. It looks like the domain quota driver code has already been included in your base commit, which is not quite easy for me to read. Do you happen to have a link to a clean commit / patch of just the domain quota driver code itself? Thanks!

Thanks, Qiu Yu
Re: [openstack-dev] [Nova] libvirt doesn't migrate cdrom devices?
On Tue, Feb 18, 2014 at 09:38:49AM +0000, Daniel P. Berrange wrote: On Mon, Feb 17, 2014 at 11:34:46PM -0700, Michael Still wrote: Hi. For the last day or so I've been chasing https://bugs.launchpad.net/nova/+bug/1246201, and I think I've found the problem... libvirt doesn't migrate devices of type cdrom, even if they're virtual. If I change the type of the config drive to disk, then block migration works just fine. Does anyone know if this is a bug in libvirt? How would people feel about me changing the type of config drives from cdrom to disk, which is something that other hypervisors already do? Be good to know what the reason was for it being set as a cdrom in the first place, before we change it to disk.

And here's that reason:

commit 7f4b5771633f519a54aae985ae526418114b1a29
Author: Tony Yang bjya...@cn.ibm.com
Date: Thu Jul 18 18:45:24 2013 +0800

    Config drive attached as cdrom

    Some guest OSes (e.g. Windows) require the config drive to be a cdrom
    device to access the iso9660 filesystem on it. Option
    config_drive_format is therefore set to also control the device type
    of the config drive. If it's set to 'iso9660', the config drive will
    be a cdrom; if it's 'vfat', the config drive will be a disk.

    DocImpact
    Fixes: bug #1155842
    Change-Id: I6c08b1b8040a1fd0db8e2b3b1fc798060733001f

So I'd say changing it to 'disk' is out of the question unless we unconditionally use vfat as the filesystem instead of iso9660.

Regards, Daniel
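For context, the conflated option the commit message refers to is nova's config_drive_format. A hedged sketch of the two settings and their side effects as described in this thread (an illustration, not authoritative documentation; check your release's config reference):

```ini
# nova.conf on the compute node
[DEFAULT]
# iso9660 -> config drive built as an ISO and attached as a cdrom
#            device (what Windows guests expect), but cdrom devices
#            are skipped by libvirt block migration per this thread
# vfat    -> config drive built as a FAT filesystem and attached as a
#            disk device, which block migration handles fine
config_drive_format = iso9660
```

The thread's dilemma is exactly this single knob controlling both the filesystem format and the device type at once.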
Re: [openstack-dev] [keystone] Centralized policy rules and quotas
Dear Qiu Yu, Yes, the domain quota driver is included in the base commit. But if you just want the domain quota driver and nothing else, then you can just download the following files (I hope I haven't missed any file required just for the domain quota driver):

https://github.com/vinodkumarboppanna/DomainQuotaAPIs/blob/master/nova/quota.py
https://github.com/vinodkumarboppanna/DomainQuotaAPIs/blob/master/nova/db/sqlalchemy/models.py
https://github.com/vinodkumarboppanna/DomainQuotaAPIs/blob/master/nova/db/sqlalchemy/migrate_repo/versions/230_create_domain_quotas_tables.py

The file quota.py contains the domain quota driver implemented by Tiago and his team (I have just added a few more functions to complete it). Hope this is OK. If you are finding problems with this as well, then let me know; I will try to create a patch (I am new to all this code committing and patching, so please bear with me for the inconvenience). Tiago once gave me this link, https://github.com/tellesnobrega/nova/tree/master/nova , where he put up the domain quota driver. But again, as I said, it was a little incomplete and some of the functions are missing.

Thanks & Regards, Vinod Kumar Boppanna

From: Qiu Yu [unic...@gmail.com]
Sent: 18 February 2014 10:26
To: Vinod Kumar Boppanna
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Centralized policy rules and quotas

On Tue, Feb 18, 2014 at 4:59 PM, Vinod Kumar Boppanna vinod.kumar.boppa...@cern.ch wrote: Dear Qiu Yu, The domain quota driver, as well as the APIs to access it, is available. Please check the following:
BluePrint - https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
Wiki Page - https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
GitHub Code - https://github.com/vinodkumarboppanna/DomainQuotaAPIs

Vinod, Thank you for sharing. I did try to dig up in your repo before sending the last email.
It looks like the domain quota driver code has already been included in your base commit, which is not quite easy for me to read. Do you happen to have a link to a clean commit / patch of just the domain quota driver code itself? Thanks!

Thanks, Qiu Yu
Re: [openstack-dev] Glance v1 and v2
I have only played with Glance v2 locally on a devstack, so take what I write with a grain of salt.

What's new in API v2?
---------------------
+ registry: You don't need to run glance-registry anymore, unless you still support v1.
+ tags: Every image has a tag list in its metadata. A tag can be created/updated/deleted by an image owner. A tag can be read by any member of the image.
+ New membership mechanism: You can read about it here[1].
+ You can query the API for JSON schemas describing resources.

How well is it supported?
-------------------------
+ python-glanceclient: Supports API v2 through the CLI flag --os-image-api-version.
+ Cinder: Supports API v2 (for volume creation from images). You specify the API version in cinder.conf (the glance_api_version option).
+ Nova: Doesn't support v2. This mailing thread[2] gives a good overview of the situation.
+ Horizon: To the best of my knowledge, Horizon doesn't support v2 yet.

How well is it tested?
----------------------
I tried a few manual tests on a local devstack and it seemed to work. And v2 seems to be pretty well covered in Tempest[3].

Can v1 and v2 coexist?
----------------------
From whatever little I've seen, both APIs can safely be activated alongside each other. Some things to note:
+ v2 membership is not really taken into account if v1 is still activated.
+ You still need to run glance-registry if v1 is activated.

Conclusion
----------
Again, I have just spent a couple of days playing with it on a devstack. I'm by no means a reference on the subject of the API v2. I hope this will help you get a better idea of where it stands today. And if anyone notices something I may have missed or anything wrong in my summary, please do point it out so I can correct it.

[1]: http://docs.openstack.org/api/openstack-image-service/2.0/content/image-sharing.html
[2]: http://lists.openstack.org/pipermail/openstack-dev/2014-February/026849.html
[3]: http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/image/v2

---
Joe H. Rahme
IRC: rahmu

On 14 Feb 2014, at 19:37, Pete Zaitcev zait...@redhat.com wrote: Hello: does anyone happen to know, or have a detailed write-up on, the differences between so-called Glance v1 and Glance v2? In particular, do we still need the Glance Registry in Havana, or do we not? The best answer so far was to run the registry anyway, just in case, which does not feel entirely satisfactory. Surely someone should know exactly what is going on in the API and have a good idea what the implications are for the users of Glance (API, CLI, and Nova (I include Horizon in API)). Thanks, -- Pete
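The two knobs mentioned under "How well is it supported?" above, sketched as a hedged example (the option lived under cinder.conf's [DEFAULT] section in that era; verify the section and default against your release):

```ini
# cinder.conf -- make Cinder use the Image API v2 when creating
# volumes from images, per the summary above
[DEFAULT]
glance_api_version = 2
```

On the client side, the equivalent is passing the flag to python-glanceclient, e.g. `glance --os-image-api-version 2 image-list`.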
Re: [openstack-dev] [Nova] libvirt doesn't migrate cdrom devices?
On Tue, Feb 18, 2014 at 2:43 AM, Daniel P. Berrange berra...@redhat.com wrote: So I'd say changing it to 'disk' is out of the question unless we unconditionally use vfat as the filesystem instead of iso9660.

So, at the moment we conflate a flag about format (iso9660 or vfat) with a flag about device type. We could have both as separate flags with reasonable defaults. However, at the moment we're pretty much presented with a choice of either having a working Windows instance or working block migration. I suspect that's a trade-off we don't want to have to make.

Do you have any visibility into why cdrom devices don't get migrated by libvirt correctly? Is there perhaps a flag to the migrate call we could twiddle to make it magically work right?

(I did look at simply re-creating the config drive from scratch on the destination node, but it's not possible. Much of the information needed to create the drive, such as the admin password and injected files, is gone by the time the migration occurs.)

Michael -- Rackspace Australia
Re: [openstack-dev] [Nova] libvirt doesn't migrate cdrom devices?
On Tue, Feb 18, 2014 at 3:14 AM, Daniel P. Berrange berra...@redhat.com wrote: On Tue, Feb 18, 2014 at 03:08:59AM -0700, Michael Still wrote: On Tue, Feb 18, 2014 at 2:43 AM, Daniel P. Berrange berra...@redhat.com wrote: So I'd say changing it to 'disk' is out of the question unless we unconditionally use vfat as the filesystem instead of iso9660.

So, at the moment we conflate a flag about format (iso9660 or vfat) with a flag about device type. We could have both as separate flags with reasonable defaults. However, at the moment we're pretty much presented with a choice of either having a working Windows instance or working block migration. I suspect that's a trade-off we don't want to have to make.

Why do we need to support both iso9660 and vfat? If we only cared to support vfat, then we could have working Windows and working block migration.

iso9660 was made the default at the request of the cloud-init author. IIRC the decision was based on it being a cleaner interface than vfat (read-only, etc). I certainly think we could make vfat the default, and put a big ugly warning about iso9660 in the flag which lets operators select which to use.

Incidentally, since you say other virt drivers don't use cdrom, does this mean they only use vfat, or are they using iso9660 + disk and thus broken for Windows too?

I'm specifically thinking of Hyper-V here, and I am sure they tested with Windows. To be honest I don't remember all the details; I just recall that at least one hypervisor had issues with attaching a second cdrom (we support boot from cdrom).

Do you have any visibility into why cdrom devices don't get migrated by libvirt correctly? Is there perhaps a flag to the migrate call we could twiddle to make it magically work right?

It seems to be done on the assumption that we only care about migrating data for writable disks. I think that was just copied from QEMU's original built-in block migration so that we had consistent behaviour. So there's no quick fix in libvirt we can do - we'd probably have to figure out a way to pass in a list of disks to be mirrored.

    /* skip shared, RO and source-less disks */
    if (disk->shared || disk->readonly || !disk->src)
        continue;

Hmm. Shame that. OK, I'll take a pass at changing the default tomorrow.

Michael -- Rackspace Australia
Re: [openstack-dev] [keystone] Centralized policy rules and quotas
On Feb 18, 2014 5:48 PM, Vinod Kumar Boppanna vinod.kumar.boppa...@cern.ch wrote: snip The file quota.py contains the domain quota driver implemented by Tiago and his team (I have just added a few more functions to complete it). Hope this is OK. If you are finding problems with this as well, then let me know; I will try to create a patch (I am new to all this code committing and patching, so please bear with me for the inconvenience). Tiago once gave me this link, https://github.com/tellesnobrega/nova/tree/master/nova , where he put up the domain quota driver. But again, as I said, it was a little incomplete and some of the functions are missing.

Thanks Vinod, that answers all my questions. Thank you so much for the detailed information! :)

Thanks, -- Qiu Yu
[openstack-dev] [Neutron] DVR blueprint document locked / private
Regarding the DVR blueprint [1], I noticed that its corresponding Google Doc [2] has been private / blocked for nearly a week now. It's very difficult to participate in upstream design discussions when the document is literally locked. I would appreciate the re-opening of the document, especially considering the recent face-to-face meeting and the contested discussion points that were raised.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
[2] https://docs.google.com/document/d/1iXMAyVMf42FTahExmGdYNGOBFyeA4e74sAO3pvr_RjA/edit

Thank you, Assaf Muller, Cloud Networking Engineer, Red Hat
Re: [openstack-dev] [GSoC] Participate as participant
Hi Robber, There are a number of discussions on the OpenStack mailing list; you can find them in the mail archive. Hope it helps. Damon

2014-02-18 17:15 GMT+08:00 Robber Phex robberp...@gmail.com: And, should I CC this mail to the OpenStack mailing list?

On Tue, Feb 18, 2014 at 5:14 PM, Robber Phex robberp...@gmail.com wrote: Hello everyone, I am RobberPhex, a junior at Donghua University (Shanghai, China). I want to participate in GSoC this year as a student. I know that OpenStack is a potential org in GSoC, so I decided to participate. I am a student majoring in software engineering. In August 2012 I first touched OpenStack; after that, I learned OpenStack (including KVM and Python). In December 2013 I deployed OpenStack on a server. In preparation, over this winter vacation I read a book about Python and tried to write Python programs (on GitHub). But I cannot decide which (sub)project to participate in. If a mentor could guide me through an easy- or medium-difficulty project in OpenStack, I would be very grateful. BTW, I have added my name to the OpenStack GSoC 2014 page. Thanks. -- Regards, RobberPhex About me: http://about.me/RobberPhex

-- Regards, RobberPhex About me: http://about.me/RobberPhex
Re: [openstack-dev] [Nova] libvirt doesn't migrate cdrom devices?
On 02/18/2014 05:20 AM, Michael Still wrote: On Tue, Feb 18, 2014 at 3:14 AM, Daniel P. Berrange berra...@redhat.com wrote: On Tue, Feb 18, 2014 at 03:08:59AM -0700, Michael Still wrote: On Tue, Feb 18, 2014 at 2:43 AM, Daniel P. Berrange berra...@redhat.com wrote: So I'd say changing it to 'disk' is out of the question unless we unconditionally use vfat as the filesystem instead of iso9660. So, at the moment we conflate a flag about format (iso9660 or vfat) with a flag about device type. We could have both as separate flags with reasonable defaults. However, at the moment we're pretty much presented with a choice of either having a working Windows instance or working block migration. I suspect that's a trade-off we don't want to have to make. Why do we need to support both iso9660 and vfat? If we only cared to support vfat, then we could have working Windows and working block migration. iso9660 was made the default at the request of the cloud-init author. IIRC the decision was based on it being a cleaner interface than vfat (read-only, etc). I certainly think we could make vfat the default, and put a big ugly warning about iso9660 in the flag which lets operators select which to use. Incidentally, since you say other virt drivers don't use cdrom, does this mean they only use vfat, or are they using iso9660 + disk and thus broken for Windows too? I'm specifically thinking of Hyper-V here, and I am sure they tested with Windows. To be honest I don't remember all the details; I just recall that at least one hypervisor had issues with attaching a second cdrom (we support boot from cdrom).

I think that's a Xen limitation. I vaguely remember John Garbutt bringing that up at a previous summit.

-Sean

-- Sean Dague http://dague.net
Re: [openstack-dev] [Ceilometer] time consuming of listing resource
Hi Liusheng, We are having the same performance issues and are interested in the following bug ticket:

https://bugs.launchpad.net/ceilometer/+bug/1264434

You said: "As you said, both at the schema level and the code level, the SQL driver in Ceilometer should be optimized. Thanks for your advice. I will search around about this."

Could you please share the current status of this work? Do you have any specific timeline for a patch release? Regards,

On Mon, 06 Jan 2014 11:12:26 -0500 Jay Pipes jaypi...@gmail.com wrote: On Mon, 2014-01-06 at 21:06 +0800, Liu Sheng (刘胜) wrote: Hi Jay, thank you for the comments. I have simply tested the performance of Ceilometer with the MySQL driver; the DB table may become huge in a few days. Unfortunately, the result is not satisfying. As you said, both at the schema level and the code level, the SQL driver in Ceilometer should be optimized. Thanks for your advice. I will search around about this.

Hi there :) Please do let me know what performance improvements you see by following the steps I listed below.
All the best, -jay On 2013-12-29 00:16:47, Jay Pipes jaypi...@gmail.com wrote: On 12/28/2013 05:51 AM, 刘胜 wrote: Hi all: I have reported a bug about the time consumed by "resource-list" in the ceilometer CLI: https://bugs.launchpad.net/ceilometer/+bug/1264434 In order to identify the cause of this phenomenon, I stepped through the code with pdb in my environment (configured with mysql as the DB driver). The most important part of the process of listing resources is implemented in the following code, in get_resources() in /ceilometer/storage/impl_sqlalchemy.py:

    for meter, first_ts, last_ts in query.all():
        yield api_models.Resource(
            resource_id=meter.resource_id,
            project_id=meter.project_id,
            first_sample_timestamp=first_ts,
            last_sample_timestamp=last_ts,
            source=meter.sources[0].id,
            user_id=meter.user_id,
            metadata=meter.resource_metadata,
            meter=[
                api_models.ResourceMeter(
                    counter_name=m.counter_name,
                    counter_type=m.counter_type,
                    counter_unit=m.counter_unit,
                )
                for m in meter.resource.meters
            ],
        )

The method generates an iterator of api_models.Resource objects for the ceilometer API to show. 1. The operation query.all() queries the DB table "meter" with the expression generated above; in my environment the DB table "meter" has more than 30 items, so this operation may consume about 30 seconds; 2. The operation "for m in meter.resource.meters" iterates over the meters of this resource; a server resource may have more than 10 meter items in my environment. So the time of the whole process is too long. I think the meter field of the Resource object can be reduced; I have tested this modification, and it is OK for listing resources and removes most of the time consumption. I have noticed that there are many DB operation methods that may be time-consuming. PS: I changed the ceilometer polling interval from 600s to 60s in /etc/ceilometer/pipeline.yaml, and the environment has only been running for 10 days! I'm a beginner with ceilometer and want to fix this bug, but I haven't found a suitable way; maybe someone can help me with this? Yep.
The performance of the SQL driver in Ceilometer out-of-the-box with that particular line is unusable in our experience. We have our Chef cookbook literally patch Ceilometer's source code and comment out that particular line because it makes performance of Ceilometer unusable. I hate to say it, but the SQL driver in Ceilometer really needs an overhaul, both at the schema level and the code level.

On the schema level:

* The indexes, especially on sourceassoc, are wrong:

** The order of the columns in the multi-column indexes like idx_sr, idx_sm, idx_su, idx_sp is incorrect. Columns used in predicates should *precede* columns (like source_id) that are used in joins. The way the indexes are structured now makes them unusable by the query optimizer for 99% of queries on the sourceassoc table, which means any queries on sourceassoc trigger a full table scan of the hundreds of millions of records in the table. Things are made worse by the fact that INSERT operations are slowed for each index on a table, and the fact that none of these indexes are used just means we're wasting cycles on each INSERT for no reason.

** The indexes are across the entire VARCHAR(255) field width. This isn't necessary (and I would argue that the base field type should be smaller). Index width can be reduced (and performance increased) by limiting the indexable width to 32 (or smaller). The
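Jay's point about multi-column index ordering can be checked with any SQL engine. Below is a minimal sketch using SQLite (table and index names borrowed from the thread; the real sourceassoc schema is larger):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sourceassoc (source_id TEXT, resource_id TEXT)")

# Put the predicate column (resource_id) before the join column (source_id),
# as recommended above.
cur.execute("CREATE INDEX idx_sr ON sourceassoc (resource_id, source_id)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT source_id FROM sourceassoc WHERE resource_id = 'r1'"
).fetchall()

# With this column order the planner can seek directly on the predicate.
print(plan)
```

Running the same EXPLAIN with the index declared as (source_id, resource_id) instead shows a SCAN rather than a SEARCH: the planner can no longer seek on the predicate, which is the full-scan behavior described above.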
Re: [openstack-dev] [infra] reverify/recheck
On Tue, Feb 18, 2014 at 12:27 AM, Gary Kotton gkot...@vmware.com wrote: Thanks! That makes sense. Just odd how a -1 was received. So I think that now a check run is automatically done if the latest Jenkins run is too old (7 days?) This is done because gate failures are so costly. The check run failing would return -1 instead of a gate failure of -2. On 2/17/14 3:15 PM, Akihiro Motoki mot...@da.jp.nec.com wrote: Hi Gary, According to zuul layout.yaml [1], reverify bug # should still work, but it seems to work only when the verified score from jenkins is -2. [1] https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L25 Note that the core team can trigger gate jobs by re-approving the patch. Thanks, Akihiro (2014/02/17 22:03), Gary Kotton wrote: Hi, The reverify no bug was removed. But reverify bug # used to work. That no longer does. With the constant gate failures how can we ensure that an approved patch does a retry?
Thanks Gary From: Sylvain Bauza sylvain.ba...@gmail.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Monday, February 17, 2014 2:53 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [infra] reverify/recheck Hi Gary, That's normal, this command has been removed since Dec '13, see http://lists.openstack.org/pipermail/openstack-dev/2013-December/021649.html -Sylvain 2014-02-17 13:00 GMT+01:00 Gary Kotton gkot...@vmware.com: Hi, It seems that the command 'reverify bug number' is not working. Anyone else experienced this lately?
Thanks Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [GSoC] Participate as participant
I mean, should I CC this mail to openst...@lists.openstack.org? Because I see this in the wiki: "Get in touch with mentors and students through the openstack-dev mailing list." On Tue, Feb 18, 2014 at 6:39 PM, Damon Wang damon.dev...@gmail.com wrote: Hi Robber, There are a number of discussions on the OpenStack mailing list; you can find them in the mail archive. Hope it helps, Damon 2014-02-18 17:15 GMT+08:00 Robber Phex robberp...@gmail.com: And, should I CC this mail to the OpenStack mailing list? On Tue, Feb 18, 2014 at 5:14 PM, Robber Phex robberp...@gmail.com wrote: Hello everyone, I am RobberPhex, a junior at Donghua University (Shanghai, China). I want to participate in GSoC this year as a student. I know that OpenStack is a potential org in GSoC, so I decided to participate. I am a student majoring in software engineering. In August 2012 I first touched OpenStack; after that, I learned OpenStack (including KVM and Python). In December 2013 I deployed OpenStack on a server. In preparation, this winter vacation I read a book about Python, and I have tried to write Python programs (on GitHub). But I cannot decide which (sub)project to participate in. If a mentor could guide me through an easy or medium difficulty project in OpenStack, I would be very grateful. BTW, I have added my name to the OpenStack GSoC 2014 page. Thanks. -- Regards, RobberPhex About me: http://about.me/RobberPhex ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] reverify/recheck
You already can reverify on any approved change. Jenkins automatically runs check pipeline jobs after 3 days. On Tue, Feb 18, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com wrote: [snip]
Re: [openstack-dev] [savanna] plugin version or hadoop version?
Matt, thanks, I agree with the hadoop_version to version transition in the v2 api. On Tue, Feb 18, 2014 at 5:02 AM, Matthew Farrellee m...@redhat.com wrote: ok, i spent a little time looking at what the change impacts, and it looks like all the template validations we have currently require hadoop_version. additionally, the client uses the name and documentation references it. due to the large number of changes and the difficulty in providing backward compatibility, i propose that we leave it as is for the v1 api client and we change it for the v2 api client. to that end, i've added 'verifying hadoop_version - version' as a work item for both the v2-api-impl and v2-client: https://blueprints.launchpad.net/savanna/+spec/v2-api-impl and https://blueprints.launchpad.net/python-savannaclient/+spec/v2-client best, matt On 02/17/2014 04:23 PM, Alexander Ignatov wrote: Agree to rename this legacy field to 'version'. Adding to John's words about HDP, the Vanilla plugin is able to run different hadoop versions by doing some manipulations with DIB scripts :-) So the right name of this field should be 'version', as the version of the engine of the concrete plugin. Regards, Alexander Ignatov On 18 Feb 2014, at 01:01, John Speidel jspei...@hortonworks.com wrote: Andrew +1 The HDP plugin also returns the HDP distro version. The version needs to make sense in the context of the plugin. Also, many plugins, including the HDP plugin, will support deployment of several hadoop versions. -John On Mon, Feb 17, 2014 at 2:36 PM, Andrew Lazarev alaza...@mirantis.com wrote: IDH uses the version of the IDH distro, and there is no direct mapping between the distro version and the hadoop version. E.g. IDH 2.5.1 works with apache hadoop 1.0.3. I suggest calling the field just 'version' everywhere and treating it as a plugin-specific property. Andrew.
On Mon, Feb 17, 2014 at 5:06 AM, Matthew Farrellee m...@redhat.com wrote:

$ savanna plugins-list
+---------+----------+---------------------------+
| name    | versions | title                     |
+---------+----------+---------------------------+
| vanilla | 1.2.1    | Vanilla Apache Hadoop     |
| hdp     | 1.3.2    | Hortonworks Data Platform |
+---------+----------+---------------------------+

above is output from the /plugins endpoint - http://docs.openstack.org/developer/savanna/userdoc/rest_api_v1.0.html#plugins the question is, should the version be the version of the plugin or the version of hadoop the plugin installs? i ask because it seems like we have version == plugin version for hdp and version == hadoop version for vanilla. the documentation is somewhat vague on the subject, mostly stating version without qualification. however, the json passed to the service references hadoop_version and the arguments in the client are called hadoop_version fyi, this could be complicated by the idh and spark plugins. best, matt ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thank You. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
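To illustrate what the v1 to v2 rename implies for compatibility, here is a hypothetical normalization helper (not actual savanna code) that would let a v2-style endpoint accept the new 'version' key while still tolerating the legacy 'hadoop_version' key from older clients:

```python
def normalize_version_field(payload):
    """Map the legacy v1 'hadoop_version' key onto the v2 'version' key.

    Hypothetical helper sketching the rename discussed above; not actual
    savanna code.
    """
    payload = dict(payload)  # work on a copy; don't mutate the caller's dict
    if "hadoop_version" in payload and "version" not in payload:
        payload["version"] = payload.pop("hadoop_version")
    return payload

# A v1-era request body still works against a v2-style handler:
body = normalize_version_field(
    {"name": "demo", "plugin_name": "hdp", "hadoop_version": "1.3.2"}
)
print(body)
```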
Re: [openstack-dev] [GSoC] Participate as participant
Robber, Adding your name to the wiki is good. We have prepared the organization application for the OpenStack Foundation and submitted it. We will hear from them on the 24th (http://www.google-melange.com/gsoc/events/google/gsoc2013) whether the OpenStack Foundation has been accepted into the GSoC program or not. -- dims On Tue, Feb 18, 2014 at 7:32 AM, Robber Phex robberp...@gmail.com wrote: [snip]
-- Regards, RobberPhex About me: http://about.me/RobberPhex -- Davanum Srinivas :: http://davanum.wordpress.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] reverify/recheck
We're still working through kinks in the new system, which is why it's not fully documented yet. During the 2 weeks of gate wedging in January, we discovered a number of interesting things.

Some review teams are really slow at dealing with patches with a +2 already on them, and may wait 2 - 6 *weeks* before a second core reviewer shows up to approve. In that period of time the dependencies used in the tests probably changed 3 or 4 times, which means the test results were invalid.

We also found that once a patch gets a +A, people (core and non-core) blindly reverify on any failure, over and over and over again. I saw one core developer reverify 123456789 on a patch. Or that when trying to land a patch people will recheck half a dozen times to get one clean run, then get their patch pushed to the gate. All this blind meatgrinder behavior was putting tons of code into the gate that could not pass. That, coupled with other race conditions we were dealing with, put us into a state where we had a 60hr gate queue.

So we're trying out a new system. A change will not move into the gate unless there is both a +A and a Jenkins +1 on the patch. We also rerun check results on comment if the test results are more than 72hrs old, so there is always a reasonably fresh version of the results. This also helps detect merge conflicts early. Also, when you +A a patch, if there aren't 24hr-fresh test results, we rerun check first; then, if that passes, it gets sent to the gate.

There is still a bit of a sticky part of the flow if it fails in the gate, because that means you don't satisfy the +1 jenkins requirement. Still trying to get the right flow worked out there. -Sean

On 02/18/2014 07:51 AM, Sergey Lukjanov wrote: You already can reverify on any approved change. Jenkins automatically runs check pipeline jobs after 3 days.
On Tue, Feb 18, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com wrote: [snip]
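The pipeline rules Sean describes (a fresh Jenkins +1 required alongside +A, with check re-runs when results go stale) can be summarized as a small decision function. This is a model of the policy as described in the thread, not actual Zuul code; the 24-hour freshness window is the one quoted above:

```python
from datetime import timedelta

FRESH_AT_APPROVAL = timedelta(hours=24)  # freshness required at +A time, per the thread

def action_on_approval(has_plus_a, jenkins_plus_one, check_age):
    """Return what the pipeline does when a change is (re)approved."""
    if not has_plus_a:
        return "wait"
    if not jenkins_plus_one or check_age > FRESH_AT_APPROVAL:
        return "recheck"       # check must pass again before gating
    return "enqueue-gate"      # fresh +1 and +A: straight into the gate

print(action_on_approval(True, True, timedelta(hours=2)))    # enqueue-gate
print(action_on_approval(True, True, timedelta(hours=30)))   # recheck
print(action_on_approval(True, False, timedelta(hours=1)))   # recheck
```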
Re: [openstack-dev] [qa][tempest][rally] Rally Tempest integration: tempest.conf autogeneration
+1 for decoupling Tempest from devstack. The OpenStack deployment tool is TripleO / Heat, so it would be good to have a Heat template to deploy and configure Tempest. andrea From: David Kranz [mailto:dkr...@redhat.com] Sent: 12 February 2014 23:23 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [qa][tempest][rally] Rally Tempest integration: tempest.conf autogeneration On 02/12/2014 05:55 PM, Sean Dague wrote: On 02/12/2014 05:08 PM, Boris Pavlovic wrote: Hi all, It goes without saying that tempest [1] is currently the only publicly known tool that can fully verify deployments and ensure that they work. In Rally [2] we would like to use it as a cloud verifier (before benchmarking it is actually very useful to ensure that the cloud works). We are going to build on top of tempest a pretty interface and alias support for working with different clouds. E.g.:

rally use deployment uuid    # use some deployment that is registered in Rally
rally verify nova            # Run only nova tests against the `in-use` deployment
rally verify small/big/full  # set of tests
rally verify list            # List all verification results for this deployment
rally verify show id         # Show detailed information
# Okay, we found that something failed, fixed it in the cloud, restarted the
# service, and we would like to run only the failed tests:
rally verify latest_failed   # do it in one simple command

These commands should be very smart: generate a proper tempest.conf for a specific cloud, prepare the cloud for tempest testing, store results somewhere, and so on. So at the end we will have a very simple way to work with tempest. We added a first patch that adds base functionality to Rally [3]: https://review.openstack.org/#/c/70131/ At the QA meeting I discussed it with David Kranz; as a result we agreed that part of this functionality (the tempest.conf generator and cloud preparation) should be implemented inside tempest.
The current situation is not super cool, because there are at least 4 projects generating tempest.conf in different ways: 1) DevStack 2) Fuel CI 3) Rally 4) Tempest (currently broken) To put it in a nutshell, it's clear that we should make only one tempest.conf generator [4] that will cover all cases and will be simple enough to be used in all other projects. So in the past the issue was we could never get anyone to agree on one. For instance, devstack makes some really fine grained decisions, and the RDO team showed up with a very different answer-file approach, which wouldn't work for devstack (it had the wrong level of knob changing). And if at the end of the day you build a tempest config generator which takes 200 options, I'm not entirely sure how that is better than just setting those 200 options directly. So while happy to consider options here, realize that there is a reason this has been punted before. -Sean I have thought about this and think we can do better. I will present a spec when I get a chance if no one else does. I would leave devstack out of it, at least for now. In general it would be good to decouple tempest from devstack a little more, especially as it gains wider use in rally, refstack, etc. For example, there are default paths and stuff in tempest.conf.sample that refer to files that are only put there by devstack. -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
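A single shared generator of the kind proposed could start as a thin wrapper over ConfigParser. The sketch below is illustrative only: the section and option names are a tiny subset of the real tempest.conf, and the genuinely hard part (discovering correct values for a given cloud) is elided:

```python
import configparser
import io

def generate_tempest_conf(cloud):
    """Render a minimal tempest.conf from a dict describing one cloud.

    Illustrative sketch only; real tempest has far more options.
    """
    conf = configparser.ConfigParser()
    conf["identity"] = {
        "uri": cloud["auth_url"],
        "username": cloud["username"],
        "password": cloud["password"],
        "tenant_name": cloud["tenant_name"],
    }
    buf = io.StringIO()
    conf.write(buf)
    return buf.getvalue()

text = generate_tempest_conf({
    "auth_url": "http://192.0.2.10:5000/v2.0",
    "username": "demo",
    "password": "secret",
    "tenant_name": "demo",
})
print(text)
```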
Re: [openstack-dev] [keystone][all] Keystone V2 and V3 support in icehouse
On Mon, Feb 10, 2014 at 5:23 PM, Frittoli, Andrea (Cloud Services) fritt...@hp.com wrote: Hi, I'm working on a tempest blueprint to make tempest able to run 100% on keystone v3 (or later versions) - the auth version to be used will be available via a configuration switch. The rationale is that Keystone V2 is deprecated in icehouse, V3 being the primary version. Thus it would be good to have (at least) one of the gate jobs running entirely with keystone v3. Much appreciated! Have a link to that bp? There are other components beyond tempest that would need some changes to make this happen. Nova and cinder python bindings work only with keystone v2 [0], and as far as I know all core services work with keystone v2 (at least by default). Is there a plan to support identity v3 there before the end of icehouse? Yes (but maybe not by the end of icehouse) - we'd like to make all other client libraries depend on keystoneclient's library for authentication in the long run. Jamie Lennox has done a ton of great work to prepare keystoneclient for that responsibility during Icehouse. If not, I think we may have to consider still supporting v2 in icehouse. v2 should certainly be supported in icehouse; which version other projects default to is up to them, but I'd like to see *all* projects at least defaulting to v3 by the end of Juno. Andrea [0] https://bugs.launchpad.net/python-novaclient/+bug/1262843 -- Andrea Frittoli IaaS Systems Engineering Team HP Cloud ☁ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
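In tempest.conf terms, the configuration switch Andrea describes would look roughly like the fragment below. The option names are an assumption sketched from the thread, not a confirmed schema; check the tempest.conf.sample in your tree:

```ini
[identity]
# Assumed options: v2 and v3 endpoints plus a selector for which one to use
uri = http://192.0.2.10:5000/v2.0
uri_v3 = http://192.0.2.10:5000/v3
auth_version = v3
```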
Re: [openstack-dev] [infra] reverify/recheck
On Tue, Feb 18, 2014 at 11:52 PM, Sean Dague s...@dague.net wrote: All this blind meatgrinder behavior was putting tons of code into the gate that could not pass. That, coupled with other race conditions we were dealing with, put us into a state where we had a 60hr gate queue. So we're trying out a new system. A change will not move into the gate unless there is both a +A and Jenkins +1 on the patch. We also rerun check results on comment if the test results are 72hrs old, so there is always a reasonably fresh version of the results. This also helps detect merge conflicts early. Thanks for the update Sean. I think the check re-run currently gets done even if the new comment is a -2. Perhaps we could skip it in that case? Also when you +A a patch, if there aren't 24hr fresh test results, we rerun check first, then if that passes, it gets sent to the gate. There is still a bit of a sticky part of the flow if it fails in the gate. Because that means you don't satisfy the +1 jenkins requirement. Still trying to get the right flow worked out there. -Sean On 02/18/2014 07:51 AM, Sergey Lukjanov wrote: You already can reverify on any approved change. Jenkins automatically runs check pipeline jobs after 3 days. On Tue, Feb 18, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com mailto:cbky...@gmail.com wrote: On Tue, Feb 18, 2014 at 12:27 AM, Gary Kotton gkot...@vmware.com mailto:gkot...@vmware.com wrote: Thanks! That makes sense. Just odd how a -1 was received. So I think that now a check run is automatically done if the latest Jenkins run is too old (7 days?) This is done because gate failures are so costly. The check run failing would return -1 instead of a gate failure of -2. On 2/17/14 3:15 PM, Akihiro Motoki mot...@da.jp.nec.com mailto:mot...@da.jp.nec.com wrote: Hi Gary, According to zuul layout.yaml [1], reverify bug # should still work but it seems to work only when verified score from jenkins is -2. 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L25 Note that core team can trigger gate jobs by re-approving the patch. Thanks, Akihiro (2014/02/17 22:03), Gary Kotton wrote: Hi, The reverify no bug was removed. But reverify bug # used to work. That no longer does. With the constant gate failures how can we ensure that an approved patch does a retry? Thanks Gary From: Sylvain Bauza sylvain.ba...@gmail.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Monday, February 17, 2014 2:53 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [infra] reverify/recheck Hi Gary, That's normal, this command has been removed since Dec'13, see http://lists.openstack.org/pipermail/openstack-dev/2013-December/021649.html
-Sylvain 2014-02-17 13:00 GMT+01:00 Gary Kotton gkot...@vmware.com: Hi, It seems that the
Re: [openstack-dev] [Murano] Repositories re-organization
I'd suggest to reduce the number of Murano repositories for several reasons: * All other OpenStack projects have a single repo per project. While this point might look like something not worth mentioning, it's really important: - unified project structure simplifies life for new developers. once they get familiar with one project, they can expect something similar from another project - unified project structure simplifies life for deployers. similar project structure simplifies packaging and deployment automation * Another important reason is to simplify gated testing. Just take a look at Solum layout [1], they have everything needed (contrib, functionaltests) to run a dsvm job in a single repo. One simple job definition [2] allows to install Solum in DevStack and run Tempest tests for Solum. * As a side-effect, this approach will improve integrity of project components. Having murano-common in the same repo with other components will help to catch integration issues earlier. In an ideal world there will be only the following repos: - murano (api, common, conductor, docs, repository, tests) - python-muranoclient - murano-dashboard - murano-agent - puppet-murano (optional, but nice to have) [1] https://github.com/stackforge/solum [2] https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml Thanks, Ruslan On Thu, Feb 6, 2014 at 3:14 PM, Serg Melikyan smelik...@mirantis.com wrote: Hi, Alexander, In general I completely agree with Clint and Robert, and as one of the contributors of Murano I don't see any practical reasons for repository reorganization. And regarding your proposal I have a few thoughts that I would like to share below: This enormous amount of repositories adds too much infrastructural complexity Creating a new repository is a quick, easy and completely automated procedure that requires only a simple commit to the Zuul configuration.
All infrastructure related to repositories is handled by OpenStack CI and supported by the OpenStack Infra Team, and actually doesn't require anything from the project development team. What infrastructure complexity are you talking about? I actually think keeping them separate is a great way to make sure you have ongoing API stability. (c) Clint I would like to share a little statistic gathered by Stan Lagun a while ago regarding repository counts in different PaaS solutions. If you are concerned about the large number of repositories used by Murano, you will be quite amused: - https://github.com/heroku - 275 - https://github.com/cloudfoundry - 132 - https://github.com/openshift - 49 - https://github.com/CloudifySource - 46 First of all, I would suggest having a single repository for all three main components of Murano: the main murano API (the contents of the present repo), the workflow execution engine (currently murano-conductor; it was also suggested to rename the component itself to murano-engine for more consistent naming) and the metadata repository (currently murano-repository). *murano-api* and *murano-repository* have many things in common, they both present an HTTP API to the user, and I hope both will be rewritten to a common framework (Pecan?). But *murano-conductor* has only one thing in common with the other two components: code shared under *murano-common*. That repository may eventually be eliminated by moving to Oslo (as it should be done). Also, it has been suggested to move our agents (both windows and unified python) into the main repository as well - just to put them into a separate subfolder. I don't see any reasons why they should be separated from core Murano: I don't believe we are going to have any third-party implementations of our Unified agent proposals, while this possibility was the main reason for separating them.
The main reason for murano-agent to have a separate repository was not the possibility of another implementation, but that all sources that should be buildable as a package, have tests and be uploadable to PyPI (or run any other gate job) should be placed in a separate repository. OpenStack CI has several rules regarding how repositories should be organized to support running different gate jobs. For example, to run tests *tox.ini* needs to be present in the root directory, and to build a package *setup.py* should be present in the root directory. So we could not simply move them to separate directories in the main repository and have the same capabilities as in a separate repository. Next, deployment scripts and auto-generated docs: are there reasons why they should be in their own repositories, instead of docs and tools/deployment folders of the primary repo? I would prefer the latter: docs and deployment scripts have no meaning without the sources which they document/deploy - so it is better to keep them consistent. We have *developer documentation* alongside all sources:
Re: [openstack-dev] [solum] Question about solum-minimal-cli BP
Thanks Angus and Devdatta. I think I understand. Angus -- what you said seems to mirror the Heroku CLI usage: a) User runs app/plan create (to create the remote repo), then b) user runs git push ... (which pushes the code to the remote repo and creates 1 assembly, resulting in a running application). If this is the intended flow for the user, it makes sense to me. One follow-up question: under what circumstances will the user need to explicitly run assembly create? Would it be used exclusively for adding more assemblies to an already running app? Thanks, Shaunak From: Angus Salkeld [angus.salk...@rackspace.com] Sent: Monday, February 17, 2014 5:54 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP On 17/02/14 21:47 +, Shaunak Kashyap wrote: Hey folks, I was reading through https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation and have a question. If I'm understanding "app create" and "assembly create" correctly, the user will have to run "app create" first, followed by "assembly create" to have a running application. Is this correct? If so, what is the reason for "app create" not automatically creating one assembly as well? On that page it seems that app create is the same as plan create. The only reason I can see for separating the plan from the assembly is when you have git-push. Then you need to have something create the git repo for you. 1. plan create (with a reference to a git-push requirement) would create the remote git repo for you. 2. you clone and populate the repo with your app code 3. you push, and that causes the assembly create/update.
Adrian might want to correct me here, though. -Angus Thanks, Shaunak
[openstack-dev] [QA] Service dependency decorators in tests
Hi all, Scenario tests feature service dependency decorators in tests – so that a test will run only if all required components are available. I think we should extend them to all tests, including the API ones. For instance Nova image tests depend on Glance, cinder attach/detach tests depend on Nova. If there is agreement on that I'd be happy to start a bp to track the test tagging effort. andrea -- Andrea Frittoli IaaS Systems Engineering Team HP Cloud ☁ PC Phone: +445601090317
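The decorator idea above can be sketched in a few lines. This is a self-contained illustration, not tempest's actual implementation: `requires_services` and the hard-coded `AVAILABLE_SERVICES` set are made-up names standing in for tempest's real per-service availability configuration.

```python
import functools
import unittest

# Hypothetical registry of services enabled in this deployment; in tempest
# this information would come from configuration, not a module-level set.
AVAILABLE_SERVICES = {'nova', 'glance', 'keystone'}

def requires_services(*services):
    """Skip the decorated test unless every listed service is available."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            missing = set(services) - AVAILABLE_SERVICES
            if missing:
                raise unittest.SkipTest(
                    'required services not available: %s'
                    % ', '.join(sorted(missing)))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator

class ImageAPITest(unittest.TestCase):
    @requires_services('nova', 'glance')
    def test_create_image(self):
        self.assertTrue(True)  # would exercise the Nova image API here

    @requires_services('cinder')
    def test_attach_volume(self):
        self.assertTrue(True)  # skipped: cinder is not available above
```

Tagging every API test this way would make the dependency explicit and let the runner report a skip instead of a failure when a component is absent.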
[openstack-dev] [Network] Allocate MAC and IP address for a VM instance
Greetings, Not sure if it is suitable to ask this question on the openstack-dev list. Here comes a question related to networking, and I want to get some input or comments from you experts. My case is as follows: for security reasons, I want to put both MAC and internal IP addresses into a pool, and when creating a VM, get a MAC and its mapped IP address from the pool and assign them to the VM. For example, suppose I have the following MAC and IP pool: 1) 78:2b:cb:af:78:b0, 192.168.0.10 2) 78:2b:cb:af:78:b1, 192.168.0.11 3) 78:2b:cb:af:78:b2, 192.168.0.12 4) 78:2b:cb:af:78:b3, 192.168.0.13 Then I can create four VMs using the above MACs and IP addresses; each row above maps to one VM. Does any of you have any idea for a solution to this? -- Thanks, Jay
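One way to approach this with Neutron is to pre-create a port that pins both the MAC and the IP, then boot each VM against its port (e.g. `nova boot --nic port-id=...`). The sketch below only manages the pool and builds the body for Neutron's port-create call; `NETWORK_ID`/`SUBNET_ID` are placeholders and the actual client call is left out.

```python
# Pool of (MAC, IP) pairs from the question; entries are handed out in order.
POOL = [
    ('78:2b:cb:af:78:b0', '192.168.0.10'),
    ('78:2b:cb:af:78:b1', '192.168.0.11'),
    ('78:2b:cb:af:78:b2', '192.168.0.12'),
    ('78:2b:cb:af:78:b3', '192.168.0.13'),
]

def allocate(pool):
    """Pop the next free (mac, ip) pair from the pool."""
    if not pool:
        raise RuntimeError('MAC/IP pool exhausted')
    return pool.pop(0)

def port_request(network_id, subnet_id, mac, ip):
    """Body for a Neutron create-port request pinning both MAC and IP."""
    return {'port': {'network_id': network_id,
                     'mac_address': mac,
                     'fixed_ips': [{'subnet_id': subnet_id,
                                    'ip_address': ip}]}}

mac, ip = allocate(POOL)
req = port_request('NETWORK_ID', 'SUBNET_ID', mac, ip)
# The id of the created port would then be passed to nova boot
# via --nic port-id=<port-id>, giving the VM exactly this MAC/IP pair.
```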
Re: [openstack-dev] [QA] Service dependency decorators in tests
I'm +1 on that. Mostly it's just a lot of time, so hasn't been dealt with yet. Unless there is a completely pressing need, I'd rather see that happen right after the icehouse release, because I'm concerned it will be a lot of changes coming in when people are trying to get other more critical things landed. -Sean On 02/18/2014 09:33 AM, Frittoli, Andrea (Cloud Services) wrote: Hi all, Scenario tests feature service dependency decorators in tests – so that a test will run only if all required components are available. I think we should extend them to all tests, including the API ones. For instance Nova image tests depend on Glance, cinder attach/detach tests depend on Nova. If there is agreement on that I'd be happy to start a bp to track the test tagging effort. andrea -- Andrea Frittoli IaaS Systems Engineering Team HP Cloud ☁ PC Phone: +445601090317 -- Sean Dague Samsung Research America s...@dague.net / sean.da...@samsung.com http://dague.net
Re: [openstack-dev] [QA] Service dependency decorators in tests
On Tue, 2014-02-18 at 14:33 +, Frittoli, Andrea (Cloud Services) wrote: Hi all, Scenario tests feature service dependency decorators in tests – so that a test will run only if all required components are available. I think we should extend them to all tests, including the API ones. For instance Nova image tests depend on Glance, cinder attach/detach tests depend on Nova. If there is agreement on that I'd be happy to start a bp to track the test tagging effort. ++ Best, -jay
[openstack-dev] [Heat] lazy translation is breaking Heat
This change was recently merged: https://review.openstack.org/#/c/69133/ Unfortunately it didn't enable lazy translations for the unit tests, so it didn't catch the many places in Heat that won't work when lazy translations are enabled. Notably there are a lot of cases where the code adds the result of a call to _() with another string, and Message objects (which are returned from _ when lazy translations are enabled) can't be added, resulting in an exception being raised. I think the best course of action would be to revert this change, and then reintroduce it along with patches to fix all the problems, while enabling it for the unit tests so bugs won't be reintroduced in the future. Interestingly it also didn't fail any of the tempest tests, I'm not sure why. -- IRC: radix Christopher Armstrong
Re: [openstack-dev] [Murano] Repositories re-organization
Hi Ruslan, Thanks for your feedback. I completely agree with these arguments: actually, these were the reasons why I initiated this discussion. Team, let's discuss this at the IRC meeting today. -- Regards, Alexander Tivelkov On Tue, Feb 18, 2014 at 5:55 PM, Ruslan Kamaldinov rkamaldi...@mirantis.com wrote: I'd suggest to reduce the number of Murano repositories for several reasons: * All other OpenStack projects have a single repo per project. While this point might look like something not worth mentioning, it's really important: - unified project structure simplifies life for new developers. once they get familiar with one project, they can expect something similar from another project - unified project structure simplifies life for deployers. similar project structure simplifies packaging and deployment automation * Another important reason is to simplify gated testing. Just take a look at Solum layout [1], they have everything needed (contrib, functionaltests) to run a dsvm job in a single repo. One simple job definition [2] allows to install Solum in DevStack and run Tempest tests for Solum. * As a side-effect, this approach will improve integrity of project components. Having murano-common in the same repo with other components will help to catch integration issues earlier. In an ideal world there will be only the following repos: - murano (api, common, conductor, docs, repository, tests) - python-muranoclient - murano-dashboard - murano-agent - puppet-murano (optional, but nice to have) [1] https://github.com/stackforge/solum [2] https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml Thanks, Ruslan On Thu, Feb 6, 2014 at 3:14 PM, Serg Melikyan smelik...@mirantis.com wrote: Hi, Alexander, In general I completely agree with Clint and Robert, and as one of the contributors of Murano I don't see any practical reasons for repository reorganization.
And regarding your proposal I have a few thoughts that I would like to share below: This enormous amount of repositories adds too much infrastructural complexity Creating a new repository is a quick, easy and completely automated procedure that requires only a simple commit to the Zuul configuration. All infrastructure related to repositories is handled by OpenStack CI and supported by the OpenStack Infra Team, and actually doesn't require anything from the project development team. What infrastructure complexity are you talking about? I actually think keeping them separate is a great way to make sure you have ongoing API stability. (c) Clint I would like to share a little statistic gathered by Stan Lagun a while ago regarding repository counts in different PaaS solutions. If you are concerned about the large number of repositories used by Murano, you will be quite amused: - https://github.com/heroku - 275 - https://github.com/cloudfoundry - 132 - https://github.com/openshift - 49 - https://github.com/CloudifySource - 46 First of all, I would suggest having a single repository for all three main components of Murano: the main murano API (the contents of the present repo), the workflow execution engine (currently murano-conductor; it was also suggested to rename the component itself to murano-engine for more consistent naming) and the metadata repository (currently murano-repository). *murano-api* and *murano-repository* have many things in common, they both present an HTTP API to the user, and I hope both will be rewritten to a common framework (Pecan?). But *murano-conductor* has only one thing in common with the other two components: code shared under *murano-common*. That repository may eventually be eliminated by moving to Oslo (as it should be done). Also, it has been suggested to move our agents (both windows and unified python) into the main repository as well - just to put them into a separate subfolder.
I don't see any reasons why they should be separated from core Murano: I don't believe we are going to have any third-party implementations of our Unified agent proposals, while this possibility was the main reason for separating them. The main reason for murano-agent to have a separate repository was not the possibility of another implementation, but that all sources that should be buildable as a package, have tests and be uploadable to PyPI (or run any other gate job) should be placed in a separate repository. OpenStack CI has several rules regarding how repositories should be organized to support running different gate jobs. For example, to run tests *tox.ini* needs to be present in the root directory, and to build a package *setup.py* should be present in the root directory. So we could not simply move them to separate directories in the main repository and have the same capabilities as in a separate repository. Next, deployment scripts and auto-generated docs: are there reasons why they should be in
Re: [openstack-dev] [Heat] lazy translation is breaking Heat
This change was recently merged: https://review.openstack.org/#/c/69133/ Unfortunately it didn't enable lazy translations for the unit tests, so it didn't catch the many places in Heat that won't work when lazy translations are enabled. Notably there are a lot of cases where the code adds the result of a call to _() with another string, and Message objects (which are returned from _ when lazy translations are enabled) can't be added, resulting in an exception being raised. I think the best course of action would be to revert this change, and then reintroduce it along with patches to fix all the problems, while enabling it for the unit tests so bugs won't be reintroduced in the future. +1 for me to revert it. It broke Heat before, and it did it again because we didn't have any tests. Let's have a proper test environment so we don't make that mistake again and again. -- Thomas
Re: [openstack-dev] [Openstack-dev] [neutron] [ml2] Neutron and ML2 - adding new network type
[Moving to -dev list] On Feb 18, 2014, at 9:12 AM, Sławek Kapłoński sla...@kaplonski.pl wrote: Hello, I'm trying to do something with neutron and the ML2 plugin. I need to add my own external network type (as there are Flat, VLAN, GRE and so on). I searched for documentation on that but couldn't find anything. Could someone explain how I should do that? Is it enough to add my own type_driver and mechanism_driver to ML2, or should I do something else as well? Hi Sławek: Can you explain more about what you're looking to achieve here? I'm just curious how the existing TypeDrivers won't cover your use case. ML2 was designed to remove segmentation management from the MechanismDrivers so they could all share segment types. Perhaps understanding what you're trying to achieve would help better understand the approach to take here. Thanks, Kyle Thanks in advance -- Sławek Kapłoński sla...@kaplonski.pl
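For reference, adding a genuinely new segment type does generally mean writing a type driver. Below is a rough, self-contained sketch of its shape; the real abstract base lives in `neutron.plugins.ml2.driver_api`, and both the stub base class and the `'mytype'` segment type here are made up so the example runs standalone — treat it as an outline, not the exact interface.

```python
import abc

class TypeDriverBase(abc.ABC):
    """Stand-in for ML2's TypeDriver ABC (illustrative, not the real one)."""

    @abc.abstractmethod
    def get_type(self): ...

    @abc.abstractmethod
    def initialize(self): ...

    @abc.abstractmethod
    def reserve_provider_segment(self, segment): ...

    @abc.abstractmethod
    def release_segment(self, segment): ...

class MyNetworkTypeDriver(TypeDriverBase):
    """Example driver for a hypothetical 'mytype' segment type."""

    def get_type(self):
        # The string users would pass as provider:network_type.
        return 'mytype'

    def initialize(self):
        # Read driver config, set up segment allocation state, etc.
        self._segments = {}

    def reserve_provider_segment(self, segment):
        # Validate and record an admin-requested segment.
        self._segments[segment['segmentation_id']] = segment
        return segment

    def release_segment(self, segment):
        self._segments.pop(segment['segmentation_id'], None)

driver = MyNetworkTypeDriver()
driver.initialize()
seg = driver.reserve_provider_segment({'segmentation_id': 100})
```

If memory serves, type drivers are registered through a `neutron.ml2.type_drivers` setuptools entry point and enabled via the `type_drivers` option in ml2_conf.ini, with a mechanism driver added separately if the backend needs one — but check the ML2 driver API in your Neutron tree for the exact abstract methods.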
[openstack-dev] Hyper-V meeting minutes
Hi everyone, Here is the log from today's Hyper-V meeting. Meeting ended Tue Feb 18 16:17:57 2014 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) Minutes: http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-02-18-16.00.html Minutes (text): http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-02-18-16.00.txt Log: http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-02-18-16.00.log.html Peter J. Pouliot CISSP Sr. SDET OpenStack Microsoft New England Research Development Center 1 Memorial Drive Cambridge, MA 02142 P: 1.(857).4536436 E: ppoul...@microsoft.com
Re: [openstack-dev] [keystone][all] Keystone V2 and V3 support in icehouse
Hi, thanks for the update. Link to the tempest bp I'm working on: https://blueprints.launchpad.net/tempest/+spec/multi-keystone-api-version-tests Is the update of the python bindings to use the keystone bindings targeted for icehouse or juno? andrea From: Dolph Mathews [mailto:dolph.math...@gmail.com] Sent: 18 February 2014 13:54 To: OpenStack Development Mailing List (not for usage questions) Cc: Jamie Lennox Subject: Re: [openstack-dev] [keystone][all] Keystone V2 and V3 support in icehouse On Mon, Feb 10, 2014 at 5:23 PM, Frittoli, Andrea (Cloud Services) fritt...@hp.com wrote: Hi, I'm working on a tempest blueprint to make tempest able to run 100% on keystone v3 (or later versions) – the auth version to be used will be available via a configuration switch. The rationale is that Keystone V2 is deprecated in icehouse, V3 being the primary version. Thus it would be good to have (at least) one of the gate jobs running entirely with keystone v3. Much appreciated! Have a link to that bp? There are other components beyond tempest that would need some changes to make this happen. Nova and cinder python bindings work only with keystone v2 [0], and as far as I know all core services work with keystone v2 (at least by default). Is there a plan to support identity v3 there by the end of icehouse? Yes (but maybe not by the end of icehouse) - we'd like to make all other client libraries depend on keystoneclient's library for authentication in the long run. Jamie Lennox has done a ton of great work to prepare keystoneclient for that responsibility during Icehouse. If not I think we may have to consider still supporting v2 in icehouse. v2 should certainly be supported in icehouse; which version other projects default to is up to them, but I'd like to see *all* projects at least defaulting to v3 by the end of Juno.
Andrea [0] https://bugs.launchpad.net/python-novaclient/+bug/1262843 -- Andrea Frittoli IaaS Systems Engineering Team HP Cloud ☁
[openstack-dev] [Tripleo] How much local storage device preparation should tripleo do?
This question is focused on Tripleo overcloud nodes meant to handle block storage or object storage rather than the regular control and compute nodes. Basically I want to get people's thoughts on how much manipulation of the underlying storage devices we should be expecting to do if we want standalone overcloud nodes to provide block and object storage via their local disks, i.e. not via NFS/gluster/ceph etc. Consider an overcloud node which will be used for providing object storage (swift) from its local disks: IIUC swift really just cares that for each disk you want to use for storage: a) it has a partition b) that the partition has a filesystem on it c) that the partition is mounted under /srv/node Given tripleo is taking ownership of installing the operating system on these nodes, how much responsibility should tripleo take for getting the above steps done? In the case that this machine just came in off the truck, all of those steps would need to be done prior to the system being a usable part of the overcloud. If we don't want tripleo dealing with this stuff right now, e.g. it's eventually going to be done by ironic, e.g. https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk, then what is the best process today? Is it that someone does a bunch of work on these machines before we start the tripleo deployment process? Presumably we would at least need to be able to feed Heat a list of partitions which we then mount under /srv/node and update fstab accordingly so the changes stick? [Right now we skip all of a)-c) above and just have swift using a folder, /srv/node/d1 (doesn't this want to be under /mnt/state?), to store all its content.] Now consider an overcloud node which will be used for providing block storage (cinder) from its local disks: IIUC the cinder LVM driver is the preferred option when accessing local storage. In this case cinder really just cares that for each disk you want to use for storage it is added to a specific volume group.
[Assuming we're not going to allow people to create disk partitions and then select particular ones]. We would then presumably need to include the appropriate filter options in lvm.conf so the selected devices get correctly scanned by lvm at startup? [Right now we do all this for a dummy loopback device which gets created under /mnt/state/var/lib/cinder/ and whose size you can set via the Heat template: https://github.com/openstack/tripleo-image-elements/blob/master/elements/cinder-volume/os-refresh-config/post-configure.d/72-cinder-resize-volume-groups ] Thanks Charles
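For the swift case, steps a)-c) above can be made concrete. The sketch below only *plans* the shell commands each disk would need (running them requires root and real devices); `plan_swift_disk` is a made-up helper, and the parted/XFS choices are illustrative defaults, not something tripleo currently does.

```python
# Build the per-disk preparation commands for a swift storage node:
# (a) partition the disk, (b) put a filesystem on it, (c) mount it
# under /srv/node and persist the mount in fstab.

def plan_swift_disk(device, mountpoint_base='/srv/node'):
    name = device.rsplit('/', 1)[-1]      # /dev/sdb -> sdb
    partition = device + '1'              # naively assumes sdX-style naming
    mountpoint = '%s/%s' % (mountpoint_base, name)
    return [
        # (a) a single partition spanning the whole device
        'parted -s %s mklabel gpt mkpart primary xfs 0%% 100%%' % device,
        # (b) a filesystem on the partition (XFS is the usual swift choice)
        'mkfs.xfs %s' % partition,
        # (c) mount under /srv/node and make it stick across reboots
        'mkdir -p %s' % mountpoint,
        'mount %s %s' % (partition, mountpoint),
        'echo "%s %s xfs noatime 0 0" >> /etc/fstab' % (partition, mountpoint),
    ]

commands = plan_swift_disk('/dev/sdb')
```

The cinder analogue would be roughly `pvcreate` on each selected device followed by `vgcreate cinder-volumes <devices>`, plus the lvm.conf filter tweak mentioned above so the devices are scanned at startup.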
Re: [openstack-dev] [Heat] lazy translation is breaking Heat
On 2014-02-18 09:15, Christopher Armstrong wrote: This change was recently merged: https://review.openstack.org/#/c/69133/ Unfortunately it didn't enable lazy translations for the unit tests, so it didn't catch the many places in Heat that won't work when lazy translations are enabled. Notably there are a lot of cases where the code adds the result of a call to _() with another string, and Message objects (which are returned from _ when lazy translations are enabled) can't be added, resulting in an exception being raised. I think the best course of action would be to revert this change, and then reintroduce it along with patches to fix all the problems, while enabling it for the unit tests so bugs won't be reintroduced in the future. Agreed. That behavior was intentional because we shouldn't be adding translated strings that way, but obviously it needs to be fixed before lazy translation is enabled. :-) Interestingly it also didn't fail any of the tempest tests, I'm not sure why. That is kind of concerning. I see that the error does show up in the logs a couple of times though: http://logs.openstack.org/33/69133/5/gate/gate-tempest-dsvm-full/d6aa1bd/logs/screen-h-api.txt.gz#_2014-02-17_17_38_25_521 Maybe this is something that would need the new error message test enabled to be caught? -- IRC: radix Christopher Armstrong
Re: [openstack-dev] [Solum] Regarding language pack database schema
I'm also a +1 for #2. However, as discussed on IRC, we should clearly spell out that the JSON blob should never be treated in a SQL-like manner. The moment somebody says 'I want to make that item in the json searchable' is the time to discuss adding it as part of the SQL schema. On 2/13/14 4:39 PM, Clayton Coleman ccole...@redhat.com wrote: I like option #2, simply because we should force ourselves to justify every attribute that is extracted as a queryable parameter, rather than making them queryable at the start. - Original Message - Hi Arati, I would vote for Option #2 as a short term solution. Probably later we can consider using NoSQL DB or MariaDB which has Column_JSON type to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review which is still a WIP - https://review.openstack.org/#/c/71132/3 . There are a couple of different opinions on how we should be designing the schema. Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: Option 1: Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries and each new attribute will require a code change. Option 2: We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in case of complex attributes) only for this subset of attributes and all other attributes will be a part of a json blob.
With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach. Suggestions regarding any other approaches are welcome too! Thanks, Arati -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
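Option 2 can be illustrated with a few lines of SQL. In this sketch (illustrative table and column names, not the proposed Solum schema) only `type` and `version` are promoted to real columns, everything else lives in an opaque JSON blob, and the example search query touches only the promoted columns.

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE language_pack (
                    id INTEGER PRIMARY KEY,
                    name TEXT NOT NULL,
                    type TEXT NOT NULL,        -- queryable
                    version TEXT NOT NULL,     -- queryable
                    attributes TEXT NOT NULL   -- opaque JSON blob, never queried
                )''')

def add_pack(name, type_, version, **extra):
    """Store the queryable attributes as columns, the rest as JSON."""
    conn.execute('INSERT INTO language_pack (name, type, version, attributes) '
                 'VALUES (?, ?, ?, ?)',
                 (name, type_, version, json.dumps(extra)))

add_pack('java-mini', 'java', '1.4', os='ubuntu')
add_pack('java-full', 'java', '1.7', os='fedora')
add_pack('ruby-basic', 'ruby', '2.0')

# "find a language pack where type='java' and version > 1.4"
# (TEXT comparison is only illustrative; a real schema would want a
# properly sortable version representation)
rows = conn.execute("SELECT name, attributes FROM language_pack "
                    "WHERE type = 'java' AND version > '1.4'").fetchall()
```

Promoting another attribute later means one schema migration (add the column, backfill it from the blob), which is the trade-off the thread describes.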
Re: [openstack-dev] [Heat] lazy translation is breaking Heat
All, Jim Carey and I have been working on getting the right solution for making lazy_translation work through Nova and Cinder. The patch should have also had changes to remove the use of str() in any LOG or exception messages, as well as the removal of any places where strings were being '+'-ed together. In the case of Cinder we are doing it as two separate patches that are dependent. I am surprised that this change got past Jenkins. In the case of Cinder and Nova, the unit tests caught a number of problems. We will make sure to work with Liang Chen to avoid this happening again. Jay S. Bryant IBM Cinder Subject Matter Expert / Cinder Core Member Department 7YLA, Building 015-2, Office E125, Rochester, MN Telephone: (507) 253-4270, FAX (507) 253-6410 TIE Line: 553-4270 E-Mail: jsbry...@us.ibm.com All the world's a stage and most of us are desperately unrehearsed. -- Sean O'Casey From: Ben Nemec openst...@nemebean.com To: openstack-dev@lists.openstack.org, Date: 02/18/2014 10:37 AM Subject: Re: [openstack-dev] [Heat] lazy translation is breaking Heat On 2014-02-18 09:15, Christopher Armstrong wrote: This change was recently merged: https://review.openstack.org/#/c/69133/ Unfortunately it didn't enable lazy translations for the unit tests, so it didn't catch the many places in Heat that won't work when lazy translations are enabled. Notably there are a lot of cases where the code adds the result of a call to _() with another string, and Message objects (which are returned from _ when lazy translations are enabled) can't be added, resulting in an exception being raised. I think the best course of action would be to revert this change, and then reintroduce it along with patches to fix all the problems, while enabling it for the unit tests so bugs won't be reintroduced in the future. Agreed. That behavior was intentional because we shouldn't be adding translated strings that way, but obviously it needs to be fixed before lazy translation is enabled. 
:-) Interestingly it also didn't fail any of the tempest tests; I'm not sure why. That is kind of concerning. I see that the error does show up in the logs a couple of times though: http://logs.openstack.org/33/69133/5/gate/gate-tempest-dsvm-full/d6aa1bd/logs/screen-h-api.txt.gz#_2014-02-17_17_38_25_521 Maybe this is something that would need the new error-message check enabled in order to be caught? -- IRC: radix Christopher Armstrong ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance
Hi Jay, In the Neutron API, you can create a port with a specified mac_address and fixed_ip, and then create the VM with this port. But you need to manage the mapping between them yourself. On Feb 18, 2014, at 22:41, Jay Lau jay.lau@gmail.com wrote: Greetings, Not sure if it is suitable to ask this question on the openstack-dev list. Here comes a question related to networking, and I want to get some input or comments from you experts. My case is as follows: For security reasons, I want to put both MAC and internal IP addresses into a pool, and when creating a VM, get a MAC and its mapped IP address and assign them to the VM. For example, suppose I have the following MAC and IP pool: 1) 78:2b:cb:af:78:b0, 192.168.0.10 2) 78:2b:cb:af:78:b1, 192.168.0.11 3) 78:2b:cb:af:78:b2, 192.168.0.12 4) 78:2b:cb:af:78:b3, 192.168.0.13 Then I can create four VMs using the above MAC and IP addresses; each row above can be mapped to a VM. Does any of you have any idea for a solution to this? -- Thanks, Jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance
Jay, We've got a similar requirement at CERN where we would like to have pools of ip/mac combinations for each subnet and have it so that the user is just allocated one (and for the same subnet that the hypervisor is on). We've not found a good solution so far. Tim -Original Message- From: Dong Liu [mailto:willowd...@gmail.com] Sent: 18 February 2014 18:12 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance Hi Jay, In the Neutron API, you can create a port with a specified mac_address and fixed_ip, and then create the VM with this port. But you need to manage the mapping between them yourself. On Feb 18, 2014, at 22:41, Jay Lau jay.lau@gmail.com wrote: Greetings, Not sure if it is suitable to ask this question on the openstack-dev list. Here comes a question related to networking, and I want to get some input or comments from you experts. My case is as follows: For security reasons, I want to put both MAC and internal IP addresses into a pool, and when creating a VM, get a MAC and its mapped IP address and assign them to the VM. For example, suppose I have the following MAC and IP pool: 1) 78:2b:cb:af:78:b0, 192.168.0.10 2) 78:2b:cb:af:78:b1, 192.168.0.11 3) 78:2b:cb:af:78:b2, 192.168.0.12 4) 78:2b:cb:af:78:b3, 192.168.0.13 Then I can create four VMs using the above MAC and IP addresses; each row above can be mapped to a VM. Does any of you have any idea for a solution to this? -- Thanks, Jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
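[Editor's note] The pool bookkeeping that Dong Liu notes must be managed by the user can be as small as the following sketch; the class and method names are hypothetical, and the Neutron/Nova calls appear only as illustrative comments.

```python
class MacIpPool(object):
    """Minimal sketch of the 'manage the mapping yourself' bookkeeping:
    hand out fixed MAC/IP pairs and remember which VM holds each one."""

    def __init__(self, pairs):
        self._free = list(pairs)   # [(mac, ip), ...] still available
        self._in_use = {}          # vm_name -> (mac, ip)

    def allocate(self, vm_name):
        if not self._free:
            raise RuntimeError('MAC/IP pool exhausted')
        pair = self._free.pop(0)
        self._in_use[vm_name] = pair
        return pair

    def release(self, vm_name):
        # Returned pairs go back to the tail of the free list.
        self._free.append(self._in_use.pop(vm_name))


pool = MacIpPool([
    ('78:2b:cb:af:78:b0', '192.168.0.10'),
    ('78:2b:cb:af:78:b1', '192.168.0.11'),
    ('78:2b:cb:af:78:b2', '192.168.0.12'),
    ('78:2b:cb:af:78:b3', '192.168.0.13'),
])

mac, ip = pool.allocate('vm1')
# With a pair in hand, the Neutron/Nova calls would look roughly like:
#   neutron port-create <net> --mac-address <mac> \
#       --fixed-ip subnet_id=<subnet>,ip_address=<ip>
#   nova boot vm1 --nic port-id=<port-id> ...
print(mac, ip)
```

In a real deployment this state would of course live somewhere durable (a database table), not in process memory.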
Re: [openstack-dev] [keystone][all] Keystone V2 and V3 support in icehouse
On Tue, Feb 18, 2014 at 10:21 AM, Frittoli, Andrea (HP Cloud) fritt...@hp.com wrote: Hi, thanks for the update. Link to the tempest bp I'm working on: https://blueprints.launchpad.net/tempest/+spec/multi-keystone-api-version-tests Is the update of the python bindings to use the keystone bindings targeted for icehouse or juno? Clients aren't tracked against the same release milestones as the services, so the integration can happen whenever someone wants to tackle it, and we can release them when they're ready. andrea *From:* Dolph Mathews [mailto:dolph.math...@gmail.com] *Sent:* 18 February 2014 13:54 *To:* OpenStack Development Mailing List (not for usage questions) *Cc:* Jamie Lennox *Subject:* Re: [openstack-dev] [keystone][all] Keystone V2 and V3 support in icehouse On Mon, Feb 10, 2014 at 5:23 PM, Frittoli, Andrea (Cloud Services) fritt...@hp.com wrote: Hi, I'm working on a tempest blueprint to make tempest able to run 100% on keystone v3 (or later versions); the auth version to be used will be available via a configuration switch. The rationale is that Keystone V2 is deprecated in icehouse, V3 being the primary version. Thus it would be good to have (at least) one of the gate jobs running entirely with keystone v3. Much appreciated! Have a link to that bp? There are other components beyond tempest that would need some changes to make this happen. Nova and cinder python bindings work only with keystone v2 [0], and as far as I know all core services work with keystone v2 (at least by default). Is there a plan to support identity v3 there by the end of icehouse? Yes (though maybe not by the end of icehouse): we'd like to make all other client libraries depend on keystoneclient's library for authentication in the long run. Jamie Lennox has done a ton of great work to prepare keystoneclient for that responsibility during Icehouse. If not I think we may have to consider still supporting v2 in icehouse. 
v2 should certainly be supported in icehouse; which version other projects default to is up to them, but I'd like to see *all* projects at least defaulting to v3 by the end of Juno. Andrea [0] https://bugs.launchpad.net/python-novaclient/+bug/1262843 -- Andrea Frittoli IaaS Systems Engineering Team HP Cloud ☁ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
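[Editor's note] The "configuration switch" Andrea mentions would look roughly like the following tempest.conf fragment. This is illustrative only; the option names should be checked against the tempest version in use.

```ini
[identity]
# Run identity operations against the v3 API instead of the v2 default.
auth_version = v3
uri_v3 = http://localhost:5000/v3
```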
Re: [openstack-dev] [Nova] Meetup Summary
On 2/17/2014 4:41 PM, Russell Bryant wrote: Greetings, Last week we had an in-person Nova meetup. Bluehost was a wonderful and generous host. Many thanks to them. :-) Here are some observations and a summary of some of the things that we discussed: 1) Mark McClain (Neutron PTL) and Mark Washenberger (Glance PTL) both attended. Having some cross-project discussion time was *incredibly* useful, so I'm thankful they attended. This makes me very optimistic about our plans to have a cross-project day at the Atlanta design summit. We need to try to get as many opportunities as possible for this sort of collaboration. 2) Gantt - We discussed the progress of the Gantt effort. After discussing the problems encountered so far and the other scheduler work going on, the consensus was that we really need to focus on decoupling the scheduler from the rest of Nova while it's still in the Nova tree. Don was still interested in working on the existing gantt tree to learn what he can about the coupling of the scheduler to the rest of Nova. Nobody had a problem with that, but it doesn't sound like we'll be ready to regenerate the gantt tree to be the real gantt tree soon. We probably need another cycle of development before it will be ready. As a follow-up to this, I wonder if we should rename the current gantt repository from openstack/gantt to stackforge/gantt to avoid any possible confusion. We should make it clear that we don't expect the current repo to be used yet. 3) v3 API - We discussed the current status of this effort, including the tasks API, and all other v3 work. There are some notes here: https://etherpad.openstack.org/p/NovaV3APIDoneCriteria I actually think we need to talk about this some more before we mark v3 as stable. I'll get notes together and start another thread soon. 4) We talked about Nova's integration with Neutron and made some good progress. We came up with a blueprint (ideally for Icehouse) to improve Nova-Neutron interaction. 
There are two cases we need to improve that have been particularly painful. The first is the network info cache. Neutron can issue an API callback to Nova to let us know that we need to refresh the cache. The second is knowing that VIF setup is complete. Right now we have cases where we issue a request to Neutron and it is processed asynchronously. We have no way to know when it has finished. For example, we really need to know that VIF plumbing is set up before we boot an instance and it tries its DHCP request. We can do this with nova-network, but with Neutron it's just a giant race. I'm actually surprised we've made it this long without fixing this. One or both of these issues (thinking VIF readiness) is also causing a gate failure in master and stable/havana: https://bugs.launchpad.net/nova/+bug/1210483 I'd like to propose skipping that test if Tempest is configured with Neutron until we get the bug fixed/blueprint resolved. By the way, can I get a link to the blueprint to reference in the bug (or vice-versa)? 5) Driver CI - We talked about the ongoing effort to set up CI for all of the compute drivers. The discussion was mostly a status review. At this point, the Xenserver and Docker drivers are both at risk of being removed from Nova for the Icehouse release if CI is not up and running in time. 6) Upgrades - we discussed the state of upgrading Nova. It was mostly a review of the excellent progress being made this cycle. Dan Smith has been doing a lot of work to get us closer to where we can upgrade the control services at once with downtime, but roll through upgrading the computes later after service is back up. Joe Gordon has been working on automating the testing of this to make sure we don't break it, so that should be running soon. Lastly, everyone in attendance seemed to really enjoy it, and the overwhelming vote in the room was for doing the same thing again during the Juno cycle. Dates and location TBD. 
+1 Thanks, -- Thanks, Matt Riedemann ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Unit Testing Nova
I am trying to figure out how I should be doing unit testing, and documenting it in https://wiki.openstack.org/wiki/Gerrit_Workflow Oddly, the situation for Nova seems reversed: run_tests.sh works and tox does not. See http://paste.openstack.org/show/66969/ for my experiences with each. Am I doing something wrong, or is tox really not working for Nova? In the latter case, is this a bug or just to be expected? Thanks, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Unit Testing Nova
Mike, What version of tox do you use? Best regards, Boris Pavlovic On Tue, Feb 18, 2014 at 9:46 PM, Mike Spreitzer mspre...@us.ibm.com wrote: I am trying to figure out how I should be doing unit testing, and documenting it in https://wiki.openstack.org/wiki/Gerrit_Workflow Oddly, the situation for Nova seems reversed: run_tests.sh works and tox does not. See http://paste.openstack.org/show/66969/ for my experiences with each. Am I doing something wrong, or is tox really not working for Nova? In the latter case, is this a bug or just to be expected? Thanks, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron][IPv6] Testing functionality of IPv6 modes using Horizon
Hi shshang, all, I have some preliminary Horizon diffs available and if anyone would be kind enough to patch them and try to test the functionality, I'd really appreciate it. I know I'm able to create subnets successfully with the two modes but if there's anything else you'd like to test or have any other user experience comments, please feel free to let me know. The diffs are at - https://review.openstack.org/74453 Thanks!! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] VPC Proposal
There were a lot of emails on that thread, but I am not seeing the discussion converging. I would like to reiterate my concerns: - We are trying to implement an API for a feature that is not supported by openstack - As a result, the implementation is overloading existing constructs without implementing full AWS capabilities and semantics (e.g. shared network isolation from VPC, or floating IP scoping to VPC). - Dependent blueprints are not implemented and have been deferred, resulting in a broken implementation (e.g. https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron) - This feature is only available through the EC2 API, which is likely going to result in another implementation for general use. - Users adopting the VPC model proposed through the EC2 API will be stuck in an upgrade mess when the proper implementation comes along. - There are new constructs in the works that are better suited for implementing this concept properly (multi-tenant hierarchy and domains). As you can guess, I'm not really a fan, but it seems that only a few individuals are concerned. I would think that this topic would create more interest, especially on the network side. Maybe because of the subject tags. I will therefore copy this email with the Neutron tag. JC On Feb 17, 2014, at 10:10 PM, Rudra Rugge rru...@juniper.net wrote: I am not sure on how to dig out the archives. There were a couple of emails exchanged with Salvatore on the thread pertaining to the extensions we were referring to as part of this blueprint. There are a few notes on the whiteboard of the blueprint as well. Regards, Rudra On 2/17/14, 1:28 PM, jc Martin jch.mar...@gmail.com wrote: Thanks, Do you have the links for the discussions? Thanks, JC Sent from my iPhone On Feb 17, 2014, at 11:29 AM, Rudra Rugge rru...@juniper.net wrote: JC, The BP has been updated with the correct links. I have removed the abandoned review #3. Please review #1 and #2. 1. 
https://review.openstack.org/#/c/40071/ This is the active review. There is one comment by Sean regarding adding a knob for when Neutron is not used. That will be addressed with the next patch. 2. https://review.openstack.org/#/c/53171 This is the active review for tempest test cases as requested by Joe Gordon. Currently abandoned until #1 goes through. 3. https://review.openstack.org/#/c/53171 This review is not active. It was accidentally submitted with a new change-id. Regards, Rudra On 2/16/14, 9:25 AM, Martin, JC jch.mar...@gmail.com wrote: Harshad, I tried to find some discussion around this blueprint. Could you provide us with some notes or threads? Also, about the code review you mention, which one are you talking about: https://review.openstack.org/#/c/40071/ https://review.openstack.org/#/c/49470/ https://review.openstack.org/#/c/53171 because they are all abandoned. Could you point me to the code, and update the BP, because it seems that the links are not correct. Thanks, JC On Feb 16, 2014, at 9:04 AM, Allamaraju, Subbu su...@subbu.org wrote: Harshad, Thanks for clarifying. We started looking at this because some of our customers/partners were interested in getting AWS API compatibility. We have had this blueprint and code review pending for a long time now. We will know based on this thread whether the community is interested. But I assumed that the community was interested, as the blueprint was approved and the code review has had no -1(s) for a long time now. Makes sense. I would leave it to others on this list to chime in if there is sufficient interest or not. To clarify, an incremental path from an AWS compatible API to an OpenStack model is not clear. In my mind an AWS compatible API does not need a new openstack model. As more discussion happens on JC's proposal and the implementation becomes clear, we will know how incremental the path is. But at a high level there are two major differences: 1. A new first-class object will be introduced which affects all components 2. 
more than one project can be supported within a VPC. But it does not change the AWS API(s). So even in JC's model, if you want the AWS API then we will have to keep the VPC-to-project mapping 1:1, since the API will not take both a VPC ID and a project ID. More users who want to migrate from AWS, or IaaS providers who want to compete with AWS, should be interested in this compatibility. IMHO that's a tough sell. Though an AWS compatible API does not need an OpenStack abstraction, we would end up with two independent ways of doing similar things. That would be OpenStack repeating itself! Subbu ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum] Regarding language pack database schema
I agree. Let's proceed with option #2, and submit a wishlist bug to track this as tech debt. We would like to come back to this later and add an option to use a blob store for the JSON blob content, as Georgy mentioned. These could be stored in swift, or a K/V store. It might be nice to have a thin get/set abstraction there to allow alternates to be implemented as needed. I'm not sure exactly where we can track Paul Czarkowski's suggested restriction. We may need to just rely on reviewers to prevent this, because if we ever start introspecting the JSON blob, we will be using an SQL anti-pattern. I'm generally opposed to putting arbitrarily sized text and blob entries into a SQL database, because eventually you may run into the maximum allowable size (i.e. max_allowed_packet) and cause unexpected error conditions. Thanks, Adrian On Feb 18, 2014, at 8:48 AM, Paul Czarkowski paul.czarkow...@rackspace.com wrote: I'm also a +1 for #2. However, as discussed on IRC, we should clearly spell out that the JSON blob should never be treated in a SQL-like manner. The moment somebody says 'I want to make that item in the json searchable' is the time to discuss adding it as part of the SQL schema. On 2/13/14 4:39 PM, Clayton Coleman ccole...@redhat.com wrote: I like option #2, simply because we should force ourselves to justify every attribute that is extracted as a queryable parameter, rather than making them queryable at the start. - Original Message - Hi Arati, I would vote for Option #2 as a short-term solution. Probably later we can consider using a NoSQL DB or MariaDB, which has a Column_JSON type, to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review which is still a WIP - https://review.openstack.org/#/c/71132/3 . There are a couple of different opinions on how we should be designing the schema. 
Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: Option 1: Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries and each new attribute will require a code change. Option 2: We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in case of complex attributes) only for this subset of attributes and all other attributes will be a part of a json blob. With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach. Suggestions regarding any other approaches are welcome too! Thanks, Arati ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. 
+1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
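[Editor's note] Adrian's "thin get/set abstraction" over wherever the blob actually lives could look like the following sketch; all names are hypothetical. A Swift-backed implementation would subclass the same interface and PUT/GET objects instead of touching a dict.

```python
import json


class BlobStore(object):
    """Thin get/set abstraction over the JSON blob's storage backend.
    Backends -- SQL column, Swift object, K/V store -- can be swapped
    without touching callers."""

    def get(self, key):
        raise NotImplementedError

    def set(self, key, data):
        raise NotImplementedError


class InMemoryBlobStore(BlobStore):
    """Stand-in backend for illustration; a SwiftBlobStore would issue
    object PUT/GET requests here instead."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return json.loads(self._data[key])

    def set(self, key, data):
        # Serializing here keeps callers honest: the blob is opaque text,
        # never something the database is asked to introspect.
        self._data[key] = json.dumps(data)


store = InMemoryBlobStore()
store.set('lp-123', {'type': 'java', 'runtime': {'jdk': '1.4'}})
print(store.get('lp-123')['type'])  # -> java
```

Keeping serialization inside the store also sidesteps the max_allowed_packet concern: a non-SQL backend imposes no row-size limit on the blob.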
Re: [openstack-dev] [Murano] Repositories re-organization
Ruslan, I absolutely agree with you; only one correction: I think murano-guestagent would better fit the repo content. Thanks. On Tue, Feb 18, 2014 at 7:23 PM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi Ruslan, Thanks for your feedback. I completely agree with these arguments: actually, these were the reasons why I've initiated this discussion. Team, let's discuss this on the IRC meeting today. -- Regards, Alexander Tivelkov On Tue, Feb 18, 2014 at 5:55 PM, Ruslan Kamaldinov rkamaldi...@mirantis.com wrote: I'd suggest reducing the number of Murano repositories for several reasons: * All other OpenStack projects have a single repo per project. While this point might look like something not worth mentioning, it's really important: - a unified project structure simplifies life for new developers. Once they get familiar with one project, they can expect something similar from another project - a unified project structure simplifies life for deployers. A similar project structure simplifies packaging and deployment automation * Another important reason is to simplify gated testing. Just take a look at the Solum layout [1]; they have everything needed (contrib, functionaltests) to run a dsvm job in a single repo. One simple job definition [2] allows installing Solum in DevStack and running Tempest tests for Solum. * As a side-effect, this approach will improve the integrity of project components. Having murano-common in the same repo with other components will help to catch integration issues earlier. 
In an ideal world there will be only the following repos: - murano (api, common, conductor, docs, repository, tests) - python-muranoclient - murano-dashboard - murano-agent - puppet-murano (optional, but nice to have) [1] https://github.com/stackforge/solum [2] https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml Thanks, Ruslan On Thu, Feb 6, 2014 at 3:14 PM, Serg Melikyan smelik...@mirantis.com wrote: Hi, Alexander, In general I completely agree with Clint and Robert, and as one of the contributors to Murano I don't see any practical reasons for repository reorganization. And regarding your proposal, I have a few thoughts that I would like to share below: This enormous amount of repositories adds too much infrastructural complexity Creating a new repository is a quick, easy and completely automated procedure that requires only a simple commit to the Zuul configuration. All infrastructure related to repositories is handled by OpenStack CI and supported by the OpenStack Infra Team, and actually doesn't require anything from the project development team. What infrastructure complexity are you talking about? I actually think keeping them separate is a great way to make sure you have ongoing API stability. (c) Clint I would like to share a little statistic gathered by Stan Lagun a while ago regarding repository counts in different PaaS solutions. 
If you are concerned about the large number of repositories used by Murano, you will be quite amused: - https://github.com/heroku - 275 - https://github.com/cloudfoundry - 132 - https://github.com/openshift - 49 - https://github.com/CloudifySource - 46 First of all, I would suggest having a single repository for all three main components of Murano: the main murano API (the contents of the present), the workflow execution engine (currently murano-conductor; also it was suggested to rename the component itself to murano-engine for more consistent naming) and the metadata repository (currently murano-repository). *murano-api* and *murano-repository* have many things in common: they both present an HTTP API to the user, and I hope they will be rewritten to a common framework (Pecan?). But *murano-conductor* has only one thing in common with the other two components: code shared under *murano-common*. That repository may eventually be eliminated by moving to Oslo (as it should be done). Also, it has been suggested to move our agents (both Windows and unified Python) into the main repository as well - just to put them into a separate subfolder. I don't see any reasons why they should be separated from core Murano: I don't believe we are going to have any third-party implementations of our Unified Agent proposals, while this possibility was the main reason for separating them. The main reason for murano-agent to have a separate repository was not the possibility of another implementation, but that all sources that should be built as a package, have tests, and be uploaded to PyPI (or go through any other gate job) should be placed in a different repository. OpenStack CI has several rules regarding how repositories should be organized to support running different gate jobs. For example, to run tests, *tox.ini* needs to be present in the root directory; to build a package, *setup.py* should be present in the root directory. So
Re: [openstack-dev] DVR blueprint document locked / private
Hi Assaf, Thanks for letting me know. This document was completely open and accessible to everyone until last week. Someone might have changed the settings on this document; I don't remember changing any settings on it myself. The PowerPoint slides link that I forwarded yesterday also captures the same details, and that is still accessible. I have opened the document to the public again. If it caused you any trouble participating in the upstream discussions over the last two days, I apologize. Feel free to send an email to me if you have any other issues or concerns. Thanks Swami -Original Message- From: Assaf Muller [mailto:amul...@redhat.com] Sent: Tuesday, February 18, 2014 2:33 AM To: openstack-dev@lists.openstack.org Cc: Vasudevan, Swaminathan (PNB Roseville); Baldwin, Carl (HPCS Neutron); mmccl...@yahoo-inc.com; Livnat Peer; Sylvain Afchain Subject: [Neutron] DVR blueprint document locked / private Regarding the DVR blueprint [1], I noticed that its corresponding Google Doc [2] has been private / blocked for nearly a week now. It's very difficult to participate in upstream design discussions when the document is literally locked. I would appreciate the re-opening of the document, especially considering the recent face-to-face meeting and the contested discussion points that were raised. [1] https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr [2] https://docs.google.com/document/d/1iXMAyVMf42FTahExmGdYNGOBFyeA4e74sAO3pvr_RjA/edit Thank you, Assaf Muller, Cloud Networking Engineer Red Hat ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Unit Testing Nova
Boris Pavlovic bpavlo...@mirantis.com wrote on 02/18/2014 12:55:26 PM: What version of tox do you use? That was using version 1.6.1 (as a workaround to https://bugs.launchpad.net/openstack-ci/+bug/1274135). Also, this was a fresh DevStack install (done about an hour or two before I posted to the list), with a pretty plain local.conf; it added only Heat to the default stuff. Thanks, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Db interaction: conductor vs objects vs apis
Hi all, I'm writing some new code which uses objects sent around by rpc. I got to implementing some new conductor methods and started wondering... what's the current recommended way of interacting with the database? Several approaches seem to be in use: - the fat model approach: put the db interaction in objects - put the db interactions in the conductor itself - completely separate it (quotas style) Since the new code isn't that big or generalized in any way, I'm dropping the third option completely. But the conductor-vs-objects question remains. What should make the db calls? If it's objects, should I also do that even for things that would take only one line in conductor/manager? Regards, Stanisław Pitucha Cloud Services Hewlett Packard ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
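[Editor's note] As a rough illustration of the "db interaction in objects" option, here is a sketch (not Nova's actual object framework; all names hypothetical) in which the object owns its own persistence via save(), so callers never touch the db layer directly:

```python
class FakeDbApi(object):
    """Stand-in for the db.api layer, for illustration only."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def instance_create(self, values):
        row = dict(values, id=self._next_id)
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def instance_update(self, instance_id, values):
        self._rows[instance_id].update(values)
        return self._rows[instance_id]


db = FakeDbApi()


class Instance(object):
    """The object owns its persistence: callers just say obj.save(),
    and only this class knows whether that means a direct DB call or,
    in a real deployment, an RPC to the conductor."""

    def __init__(self, **fields):
        self.id = None
        self.fields = fields

    def save(self):
        # Create on first save, update afterwards.
        if self.id is None:
            row = db.instance_create(self.fields)
            self.id = row['id']
        else:
            db.instance_update(self.id, self.fields)


inst = Instance(host='node1')
inst.save()
inst.fields['host'] = 'node2'
inst.save()
print(db._rows[inst.id]['host'])  # -> node2
```

The appeal of this layout for rpc-heavy code is that the same save() call can be backed by a direct DB hit or a conductor round-trip without the caller changing at all.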
Re: [openstack-dev] [neutron][policy] Using network services with network policies
Thanks Sumit and Stephen for the information provided. It appears to me that we can (and should) use the notion of services/service chains within the group policy extension (and that has always been one of our options). If this is a reasonable approach, then we need to see how we can bring these services into our group policy and whether there are changes we may require. The first thing that comes to mind is to have a new service insertion context, namely policy (or should it be policy_rule?). If that is in place, then a service chain (we can start with a chain of one single service) gets created with its context set to a particular policy. While the service plugin is responsible for standing up the service, the connectivity is established through the implementation of the group policy extension, in particular the redirect action. Is this a reasonable approach? This approach requires some kind of coordination wrt how these operations are done by the service plugin and the group policy extension. Maybe a policy simply provides the insertion context for creation of the service chain (in isolation and by the appropriate service plugin) and policy rules are then used to make the service operational. This is different from how services are expected to be instantiated right now. Right? Thinking aloud here. Please comment. A lot of interesting things to work on. Maybe Juno is where all these efforts come to fruition together :) Mohammad From: Sumit Naiksatam sumitnaiksa...@gmail.com To: Mohammad Banikazemi/Watson/IBM@IBMUS, Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 02/17/2014 02:12 AM Subject: Re: [openstack-dev] [neutron][policy] Using network services with network policies Thanks Mohammad for bringing this up. I responded in another thread: http://lists.openstack.org/pipermail/openstack-dev/2014-February/027306.html ~Sumit.
On Sun, Feb 16, 2014 at 7:27 AM, Mohammad Banikazemi m...@us.ibm.com wrote: During the last IRC call we started talking about network services and how they can be integrated into the group Policy framework. In particular, with the redirect action we need to think how we can specify the network services we want to redirect the traffic to/from. There has been substantial work in the area of service chaining and service insertion, and at the last summit advanced services in VMs were discussed. I think the first step for us is to find out the status of those efforts and then see how we can use them. Here are a few questions that come to mind. 1- What is the status of service chaining, service insertion and advanced services work? 2- How could we use a service chain? Would simply referring to it in the action be enough? Are there considerations wrt creating a service chain and/or a service VM for use with the Group Policy framework that need to be taken into account? Let's start the discussion on the ML before taking it to the next call. Thanks, Mohammad
Re: [openstack-dev] [Heat] lazy translation is breaking Heat
On Tue, Feb 18, 2014 at 11:14 AM, Jay S Bryant jsbry...@us.ibm.com wrote: All, Jim Carey and I have been working on getting the right solution for making lazy_translation work through Nova and Cinder. The patch should have also had changes to remove the use of str() in any LOG or exception messages, as well as the removal of any places where strings were being '+'ed together. In the case of Cinder we are doing it as two separate patches that are dependent. I am surprised that this change got past Jenkins. In the case of Cinder and Nova, unit tests caught a number of problems. We will make sure to work with Liang Chen to avoid this happening again. https://review.openstack.org/#/dashboard/7135 fwiw Thomas Hervé has posted a patch to revert the introduction of lazy translation: https://review.openstack.org/#/c/74454/ -- IRC: radix Christopher Armstrong
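The breakage pattern Jay describes (str() casts and '+' concatenation defeating deferred translation) can be sketched with a stand-in class. This is illustrative only — not the actual oslo gettext implementation; all names here are hypothetical:

```python
# Minimal stand-in for a lazily translated message. Illustrative only --
# NOT the real oslo code; the class and catalog format are made up.

class LazyMessage(object):
    """Keeps the message id and parameters so translation can happen
    later, in each user's locale, instead of at logging time."""

    def __init__(self, msgid, params=None):
        self.msgid = msgid
        self.params = params

    def __mod__(self, params):
        # '%' stays lazy: remember the params instead of rendering now.
        return LazyMessage(self.msgid, params)

    def __str__(self):
        # Rendering without a catalog falls back to the default locale.
        return self.translate({})

    def translate(self, catalog):
        template = catalog.get(self.msgid, self.msgid)
        return template % self.params if self.params is not None else template

_ = LazyMessage  # stand-in for the usual gettext alias

# Good: the object stays lazy and can still be translated per request.
msg = _("Volume %s not found") % "vol-1"

# Bad: str() renders immediately and '+' yields a plain str, so the
# message id (and any chance of later translation) is lost.
eager = str(_("Volume ")) + "vol-1" + str(_(" not found"))

spanish = {"Volume %s not found": "No se encontro el volumen %s"}
print(msg.translate(spanish))  # still translatable per-locale
print(eager)                   # already flattened to the default locale
```

This is why the patches mentioned above also had to remove str() casts and '+'-concatenated strings anywhere lazy messages flow.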
Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
2014-02-11 16:14 GMT+01:00 Anita Kuno: On 02/11/2014 04:57 AM, Alan Pevec wrote: Hi Mark and Anita, could we declare stable/havana neutron gate jobs good enough at this point? There are still random failures as this no-op change shows https://review.openstack.org/72576 but I don't think they're stable/havana specific. ... I will reaffirm here what I had stated in IRC. If Mark McClain gives his assent for stable/havana patches to be approved, I will not remove Neutron stable/havana patches from the gate queue before they start running tests. If after they start running tests, they demonstrate that they are failing, I will remove them from the gate as a means to keep the gate flowing. If the stable/havana gate jobs are indeed stable, I will not be removing any patches that should be merged. As discussed on #openstack-infra last week, the stable-maint team should start looking more closely at the Tempest stable/havana branch, and Matthew Treinish from Tempest core joined the stable-maint team to help us there. In the meantime, we need to do something more urgent: there are remaining failures showing up frequently in stable/havana jobs which seem to have been fixed or at least improved on master: * bug 1254890 - Timed out waiting for thing ... to become ACTIVE causes tempest-dsvm-* failures - resolution unclear? * bug 1253896 - Attempts to verify guests are running via SSH fails. SSH connection to guest does not work. Based on Salvatore's comment 56, I've marked it as Won't Fix in neutron/havana and opened tempest/havana to propose what Tempest tests or jobs should skip for Havana. Please chime in on the bug if you have suggestions. Cheers, Alan
[openstack-dev] [nova][neutron] SRIOV Meeting on Wednesday Feb.19th
Hi Folks, Irena suggested having another sync-up meeting on Wednesday. So let's meet at 8:00am in #openstack-meeting-alt. Thanks, Robert
Re: [openstack-dev] [Neutron] Urgent questions on Service Type Framework for VPNaaS
Hi Paul Sorry, I missed this mail. The reason for putting -1 was the gating issue, so it is OK now. PS Thank you for rebasing this one 2014-02-16 16:43 GMT-08:00 Sumit Naiksatam sumitnaiksa...@gmail.com: Hi Paul, Our plan with FWaaS was to get it to parity with LBaaS as far as STF is concerned. That way any changes to STF can be explored in the context of all services, and the migration can also be performed for all services. Accordingly, Gary Duan has been actively working on the patch: https://review.openstack.org/#/c/60699/ and we hope to get it approved and merged soon. Thanks, ~Sumit. On Sat, Feb 15, 2014 at 5:09 PM, Paul Michali p...@cisco.com wrote: Hi Nachi and other cores! I'm very close to publishing my vendor-based VPNaaS driver (the service driver is ready, the device driver is a day or two out), but have a bit of an issue. This code uses the Service Type Framework, which, as you know, is still out for review (and has been idle for a long time). I updated the STF client code and it is updated in Gerrit. I saw you put a -1 on your STF server code. Is the feature being abandoned or was that for some other reason? If going forward with it, can you update the server STF code, or should I do it (I have a branch with the STF based on master of about 2 weeks ago, so it should update OK)? Also, I'm wondering (worried) about the logistics of my reviews. I wanted to do my service driver and device driver separately (I guess making the latter dependent on the former in Gerrit). However, because of the STF, I'd need to make my service driver dependent on the STF server code too (my current branch has both code pieces). I'm really worried about the complexity there and about it getting hung up if there is more delay on the STF review. I've been working on another branch without the STF dependency; however, that has to hack in part of the STF to be able to select the service driver based on config vs. hardwired to the reference driver.
Should I proceed with the STF review chaining or push out my code w/o the STF? Thanks! PCM (Paul Michali) MAIL p...@cisco.com IRC pcm_ (irc.freenode.net) TW @pmichali GPG key 4525ECC253E31A83 Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
Re: [openstack-dev] Unit Testing Nova
Mike Spreitzer/Watson/IBM@IBMUS wrote on 02/18/2014 01:22:33 PM: That was using version 1.6.1 (as a workaround to https://bugs.launchpad.net/openstack-ci/+bug/1274135). Also, this was a fresh DevStack install (done about an hour or two before I posted to the list), with a pretty plain local.conf; it added only Heat to the default stuff. As long as I am providing missing details, following are two more. This DevStack install was done with https://review.openstack.org/#/c/74430/ applied to fix https://bugs.launchpad.net/devstack/+bug/1281415 . And I manually did `sudo apt-get install libmysqlclient-dev` to work around https://bugs.launchpad.net/devstack/+bug/1203723 . Regards, Mike
Re: [openstack-dev] [Nova] Meetup Summary
On 02/18/2014 12:36 PM, Matt Riedemann wrote: 4) We talked about Nova's integration with Neutron and made some good progress. We came up with a blueprint (ideally for Icehouse) to improve Nova-Neutron interaction. There are two cases we need to improve that have been particularly painful. The first is the network info cache. Neutron can issue an API callback to Nova to let us know that we need to refresh the cache. The second is knowing that VIF setup is complete. Right now we have cases where we issue a request to Neutron and it is processed asynchronously. We have no way to know when it has finished. For example, we really need to know that VIF plumbing is set up before we boot an instance and it tries its DHCP request. We can do this with nova-network, but with Neutron it's just a giant race. I'm actually surprised we've made it this long without fixing this. One or both of these issues (thinking VIF readiness) is also causing a gate failure in master and stable/havana: https://bugs.launchpad.net/nova/+bug/1210483 I'd like to propose skipping that test if Tempest is configured with Neutron until we get the bug fixed/blueprint resolved. By the way, can I get a link to the blueprint to reference in the bug (or vice-versa)? I haven't seen a blueprint for this yet. Mark, is that something you were planning on driving? -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
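The race described above — booting an instance before Neutron's asynchronous VIF setup has finished — is essentially a missing synchronization point. A hedged sketch of the shape a fix could take, using an event the compute side blocks on until a Neutron callback fires (all names are hypothetical, not Nova/Neutron APIs):

```python
# Sketch: block instance boot until a (hypothetical) "VIF plugged"
# callback arrives, instead of racing the DHCP request against Neutron's
# asynchronous port wiring. Illustrative only.

import threading

class VifPlugWaiter(object):
    def __init__(self):
        self._events = {}

    def expect(self, port_id):
        # Register interest before asking Neutron to wire the port.
        self._events[port_id] = threading.Event()

    def vif_plugged(self, port_id):
        # Would be driven by a callback/notification from Neutron.
        self._events[port_id].set()

    def wait(self, port_id, timeout=30):
        # Boot proceeds only once plumbing is confirmed (or we time out).
        ok = self._events[port_id].wait(timeout)
        if not ok:
            raise RuntimeError("timed out waiting for VIF %s" % port_id)
        return True

waiter = VifPlugWaiter()
waiter.expect("port-1")

# Simulate Neutron finishing its work asynchronously on another thread.
t = threading.Timer(0.05, waiter.vif_plugged, args=("port-1",))
t.start()

print(waiter.wait("port-1"))  # blocks briefly, then the boot can proceed
```

With nova-network this ordering is implicit because the plumbing is done in-process; with Neutron some explicit handshake like this is needed.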
[openstack-dev] [Keystone] IRC Channel Venue Change for Keystone Development topics
So that the rest of the community is aware, the majority of keystone-specific topics are moving from the #openstack-dev channel to the #openstack-keystone channel on Freenode. This has been done to help free up #openstack-dev for more cross-project discussion. Expect that the Keystone Core team will still be in #openstack-dev so that we can catch Keystone and Identity related questions (just as before). Cheers, Morgan Fainberg — Morgan Fainberg Principal Software Engineer Core Developer, Keystone m...@metacloud.com
Re: [openstack-dev] [All] Fixed recent gate issues
Hi John, thanks for the summary. I've noticed one more fallout from the swiftclient update in Grenade jobs running on stable/havana changes, e.g. http://logs.openstack.org/02/73402/1/check/check-grenade-dsvm/a5650ac/console.html ... 2014-02-18 13:00:02.103 | Test Swift 2014-02-18 13:00:02.103 | + swift --os-tenant-name=demo --os-username=demo --os-password=secret --os-auth-url=http://127.0.0.1:5000/v2.0 stat 2014-02-18 13:00:02.284 | Traceback (most recent call last): 2014-02-18 13:00:02.284 | File /usr/local/bin/swift, line 35, in module 2014-02-18 13:00:02.284 | from swiftclient import Connection, HTTPException 2014-02-18 13:00:02.285 | ImportError: cannot import name HTTPException 2014-02-18 13:00:02.295 | + STATUS_SWIFT=Failed ... The Grenade job installs swiftclient from git master, but then later, due to the python-swiftclient>=1.2,<2 requirement in Grizzly, the older version 1.9.0 is pulled from pypi and then half-installed or something, producing the above conflict between the swift CLI binary and libs. A solution could be to remove the swiftclient cap in Grizzly; any other suggestions? Cheers, Alan
[openstack-dev] [keystone][nova]Hierarchical Multitenancy Meeting
Hi Everyone, I will be on a plane during the Multitenancy Meeting on Friday. You are welcome to have the meeting without me. Otherwise, we can just continue the excellent discussion we have been having on the Mailing list. Thanks, Vish
[openstack-dev] [Neutron][LBaaS] Object Model discussion
Hi folks, Recently we were discussing the LBaaS object model with Mark McClain in order to address several problems that we faced while approaching L7 rules and multiple vips per pool. To cut a long story short: with the existing workflow and model it's impossible to use L7 rules, because each pool being created is an 'instance' object in itself; it defines another logical configuration and can't be attached to an existing configuration. To address this problem, plus create a base for multiple vips per pool, the 'loadbalancer' object was introduced (see https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance ). However this approach raised a concern of whether we want to let the user care about the 'instance' object. My personal opinion is that letting the user work with the 'loadbalancer' entity is no big deal (and might even be useful for terminological clarity; the Libra and AWS APIs have that), especially if the existing simple workflow is preserved, so the 'loadbalancer' entity is only required when working with L7 or multiple vips per pool. The alternative solution proposed by Mark is described here under #3: https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion In (3) the root object of the configuration is the VIP, where all kinds of bindings are made (such as provider, agent, device, router). To address the 'multiple vips' case another entity, 'Listener', is introduced, which receives most attributes of the former 'VIP' (attribute sets are not finalized in those pictures, so don't pay much attention). If you take a closer look at the #2 and #3 proposals, you'll see that they are essentially similar, where in #3 the VIP object takes the instance/loadbalancer role from #2.
Both the #2 and #3 proposals make sense to me because they address both problems, L7 and multiple vips (or listeners). My concern about #3 is that it redefines lots of workflow and API aspects, and even if we manage to make the transition to #3 in a backward-compatible way, it will be more complex in terms of code/testing than #2 (which is on review already and works). The whole thing is an important design decision, so please share your thoughts everyone. Thanks, Eugene.
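For concreteness, the two candidate shapes can be sketched side by side. The attributes below are made up for illustration (the thread itself stresses that attribute sets are not finalized):

```python
# Illustrative sketch of the two proposed LBaaS object models.
# Attribute names are hypothetical, not the actual Neutron API.
from dataclasses import dataclass, field
from typing import List

# Option #2: an explicit 'loadbalancer' root binds vips and pools,
# giving L7 rules and multiple vips a shared configuration to live in.
@dataclass
class LoadBalancer:
    name: str
    vips: List[str] = field(default_factory=list)
    pools: List[str] = field(default_factory=list)  # shareable for L7 routing

# Option #3: the VIP itself is the root; a 'Listener' absorbs most
# former VIP attributes, so one VIP can carry several protocol/port
# front ends.
@dataclass
class Listener:
    protocol: str
    port: int
    default_pool: str

@dataclass
class Vip:
    address: str
    listeners: List[Listener] = field(default_factory=list)

# Both shapes can express "one address with HTTP and HTTPS front ends":
lb = LoadBalancer(name="web",
                  vips=["10.0.0.5:80", "10.0.0.5:443"],
                  pools=["static", "app"])
vip = Vip(address="10.0.0.5", listeners=[
    Listener("HTTP", 80, "static"),
    Listener("HTTPS", 443, "app"),
])
print(len(vip.listeners))
```

The structural similarity Eugene points out is visible here: in #3 the Vip plays exactly the root-object role that LoadBalancer plays in #2.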
[openstack-dev] Hierarchical Multitenancy and resource ownership
I see a lot of good things happening on the hierarchical multi-tenancy proposal that Vish made a while back. However, the focus so far is on roles and quotas, and I could not find any discussion related to resource ownership. Is the plan to allow the creation of resources within any level of the hierarchy, or is the plan to allow the visibility of the resources up to a level in the hierarchy? Or both? For example, if I have: - orga.vpca.projecta - orga.vpca.projectb and I want to share a resource like a network between projecta and projectb, should the network be owned by vpca, or should it be owned by projecta or projectb, or a vpca.admin project and then shared to all children of vpca? I think either would work, and both may be required. Opinions? JC
Re: [openstack-dev] call for help with nova bug management
So I have been rather underwhelmed by the enthusiastic response to help out :-) So far only wendar and johnthetubaguy have signed up. I was hoping for at least 3-5 people to help with the initial triage. Please sign up this week if you can help and I’ll schedule the meetings starting next week On Feb 14, 2014, at 2:16 PM, Tracy Jones tjo...@vmware.com wrote: Hi Folks - I’ve offered to help Russell out with managing nova’s bug queue. The charter of this is as follows Triage the 125 new bugs Ensure that the critical bugs are assigned properly and are making progress Once this part is done we will shift our focus to things like Bugs in the incomplete state with no update by the reporter - they should be set to invalid if the reporter does not update them in a timely manner. Bugs which say they are in progress but where no progress is being made. If a bug is assigned and simply being ignored, we should remove the assignment so others can grab it and work on it The bug triage policy is defined here https://wiki.openstack.org/wiki/BugTriage What can you do??? First I need a group of folks to volunteer to help with 1 and 2. I will start a weekly IRC meeting where we work on the triage and check progress on critical (or even high) prio bugs. If you can help out, please sign up at the end of this etherpad and include your timezone. Once I have a few people to help I will schedule the meeting at a time that I hope is convenient for all. https://etherpad.openstack.org/p/nova-bug-management Thanks in advance for your help. Tracy
Re: [openstack-dev] [All] Fixed recent gate issues
On Tue, Feb 18, 2014 at 08:27:16PM +0100, Alan Pevec wrote: Hi John, thanks for the summary. I've noticed one more fallout from the swiftclient update in Grenade jobs running on stable/havana changes e.g. http://logs.openstack.org/02/73402/1/check/check-grenade-dsvm/a5650ac/console.html ... 2014-02-18 13:00:02.103 | Test Swift 2014-02-18 13:00:02.103 | + swift --os-tenant-name=demo --os-username=demo --os-password=secret --os-auth-url=http://127.0.0.1:5000/v2.0 stat 2014-02-18 13:00:02.284 | Traceback (most recent call last): 2014-02-18 13:00:02.284 | File /usr/local/bin/swift, line 35, in module 2014-02-18 13:00:02.284 | from swiftclient import Connection, HTTPException 2014-02-18 13:00:02.285 | ImportError: cannot import name HTTPException 2014-02-18 13:00:02.295 | + STATUS_SWIFT=Failed ... Grenade job installs swiftclient from git master but then later due to the python-swiftclient>=1.2,<2 requirement in Grizzly, older version 1.9.0 is pulled from pypi and then half-installed or something, producing above conflict between swift CLI binary and libs. Solution could be to remove swiftclient cap in Grizzly, any other suggestions? Yeah, it's pip weirdness where things fall apart because of the version cap. It's basically installing bin/swift from 1.9 when it sees the version requirement, but it leaves everything in the python-swiftclient namespace from master. So I've actually been looking at this since late yesterday; the conclusion we've reached is to just skip the exercises on grizzly. Removing the version cap isn't going to be simple on grizzly because global requirements weren't enforced back in grizzly. We'd have to change the requirement for glance, horizon, and swift, and being ~3 weeks away from eol for grizzly I don't think we should mess with that. This failure is only an issue with the cli swiftclient on grizzly (and one swift functional test), which as it sits now is just the devstack exercises in grenade.
So if we just don't run those exercises on the grizzly side of a grenade run there shouldn't be an issue. I've got 2 patches to do this here: https://review.openstack.org/#/c/74419/ https://review.openstack.org/#/c/74451/ -Matt Treinish ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] Meeting Tuesday February 18th at 19:00 UTC
On Mon, Feb 17, 2014 at 2:39 PM, Elizabeth Krumbach Joseph l...@princessleia.com wrote: The OpenStack Infrastructure (Infra) team is hosting our weekly meeting tomorrow, Tuesday February 18th, at 19:00 UTC in #openstack-meeting Thanks to everyone who joined us, minutes and log available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 http://www.princessleia.com
Re: [openstack-dev] Db interaction: conductor vs objects vs apis
On 02/18/2014 01:22 PM, Pitucha, Stanislaw Izaak wrote: Hi all, I'm writing some new code which uses objects sent around by rpc. I got to implementing some new conductor methods and started wondering... what's the current recommended way of interacting with the database? Many seem to be around: - fat model approach - put the db interaction in objects - put the db interactions in the conductor itself - completely separate it (quotas style) Since the new code isn't that big or generalized in any way, I'm dropping the third option completely. But conductor -vs- object stays. What should do the db calls? If it's objects, should I also do that even for things that would take only one line in conductor/manager? It may be easier to provide a better answer with more details on what you're doing, but the general answer is use objects always. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Db interaction: conductor vs objects vs apis
- fat model approach - put the db interaction in objects If it's just DB interaction, then yes, in the object for sure. - put the db interactions in the conductor itself There is a reasonable separation between using conductor for mechanics (i.e. the API deferring a long-running activity to conductor) and using it for (new) DB proxying. At this point, the former is okay and the latter is not, IMHO. That said, any complex data structure going over RPC should be an object as well, so that we have version tracking on the structure itself. We recently had a case where we broke upgrades because the *format* of an RPC method argument was changed, that argument was not an object, and we ended up with some very ugly failures as a result :) --Dan
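The "put the db interaction in objects" pattern both replies recommend can be sketched minimally. This is a hedged illustration with made-up names — not the actual nova.objects API — showing the key property: callers never touch the database layer directly, the object does:

```python
# Hedged sketch of the "fat object" pattern: the object owns its DB
# access, so callers (API, conductor, compute) only see object methods.
# FAKE_DB stands in for the real database layer; names are illustrative.

FAKE_DB = {}

class Widget(object):
    fields = ("id", "name")

    def __init__(self, **kw):
        for f in self.fields:
            setattr(self, f, kw.get(f))

    @classmethod
    def get_by_id(cls, ctx, wid):
        # The object, not its caller, knows how to hit the database.
        # In Nova this is also the natural hook where a call can be
        # transparently remoted over RPC to the conductor.
        return cls(**FAKE_DB[wid])

    def save(self, ctx):
        FAKE_DB[self.id] = {"id": self.id, "name": self.name}

ctx = object()  # stand-in for a request context

w = Widget(id=1, name="a")
w.save(ctx)

w2 = Widget.get_by_id(ctx, 1)
w2.name = "b"
w2.save(ctx)

print(FAKE_DB[1]["name"])
```

Even a one-line DB call gains from living here rather than in a conductor method: the schema of what goes over RPC is versioned with the object, which is exactly the upgrade-breakage Dan describes.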
Re: [openstack-dev] Hierarchical Multitenancy and resource ownership
On Feb 18, 2014, at 11:31 AM, Martin, JC jch.mar...@gmail.com wrote: I see a lot of good things happening on the hierarchical multi tenancy proposal that Vish made a while back. However, the focus so far is on roles and quota but could not find any discussion related to resource ownership. Is the plan to allow the creation of resources within any level of the hierarchy or is the plan to allow the visibility of the resources up to a level in the hierarchy ? or both ? For example, if I have : - orga.vpca.projecta - orga.vpca.projectb and I want to share a resource like a network between projecta and projectb, should the network be owned by vpca or should it be owned by projecta or projectb, or a vpca.admin project and then shared to all children of vpca ? I think either would work, and both maybe required. Opinions ? We haven’t discussed inheriting ownership of objects, but at first glance it seems confusing: how would one determine if an object in vpca is “shared” and visible to projects below, and if it is, how far down the hierarchy would it be visible? It is probably best to keep this explicit for the moment. I’ve been thinking of sharing as objects that appear at multiple places in the hierarchy. This could be a list of “owners” or “shares”, but I think it would support either of your options. My initial thoughts would be to just put the network resource in orga.vpca and then share it to the projects. This of course gets a little tedious when other projects are added later, but it avoids the complications I mentioned above. Vish JC
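Vish's "keep it explicit" suggestion can be sketched in a few lines: visibility is exactly the owner plus an enumerated share list, with nothing inherited down the hierarchy. The data layout and function below are purely illustrative, not any OpenStack API:

```python
# Sketch of explicit sharing: the network is owned at orga.vpca and
# listed as shared into each child project. Nothing is implicit, so
# there is no ambiguity about "how far down" visibility reaches.
# Names and structure are hypothetical.

def visible_projects(resource):
    # Visibility = owner plus the explicitly listed shares.
    return {resource["owner"]} | set(resource["shared_with"])

network = {
    "name": "net0",
    "owner": "orga.vpca",
    "shared_with": ["orga.vpca.projecta", "orga.vpca.projectb"],
}

print("orga.vpca.projecta" in visible_projects(network))  # shared in
print("orga.vpcb.projectx" in visible_projects(network))  # not shared
```

The tedium Vish mentions is visible here too: a new orga.vpca.projectc sees nothing until someone appends it to shared_with.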
Re: [openstack-dev] [Neutron][LBaaS] L7 - Update L7Policy
Hi! On Tue, Feb 18, 2014 at 12:18 AM, Samuel Bercovici samu...@radware.com wrote: Hi, It matters, as someone might need to “debug” the backend setup, and the name, if it exists, can add details. This is obviously a vendor’s choice if they wish to push this back to the backend, but the API should not remove this as a choice. +1 -Sam. -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
[openstack-dev] [cinder] Unit test cases failing with error 'cannot import rpcapi'
Hi All, All cinder test cases are failing with the error 'cannot import rpcapi', though the same files work fine in a live cinder setup. I wonder what's going wrong when unit testing is triggered. Can anyone help me out here? -- Thanks, IK
Re: [openstack-dev] [Murano] Need a new DSL for Murano
My observation has been that Murano has changed from a Windows focused Deployment Service to a Metadata Application Catalog Workflow thing (I fully admit this may be an invalid observation). It's unclear to me what OpenStack pain/use-cases is to be solved by complex object composition, description of data types, contracts... Murano is intended to provide a high-level definition of Applications, which will include descriptions of specific application requirements (like input parameters and external dependencies) as well as the low-level scripts, heat snippets and workflows required to manage the application. Object composition is required to address application diversity. We expect to be an integration layer for numerous different applications from different OSes and areas like WebServices, BigData, SAP, and enterprise-specific apps. In order to decrease the amount of work for the Application author, we provide a way to reuse existing applications by extending them via object composition. Inside Murano we provide a library of workflows and requirements for general application classes. This library has workflows for instance creation, networking and app deployments. This will help the application author write only application-specific stuff. For example, if I need to publish my web service based on Tomcat, I will create a new application class which has Tomcat as a parent, and then I will add my application-specific parameters like the App URL and a deployment script which will download the App war file and put it in the Tomcat app directory. All other stuff will be done automatically by the existing workflows for the Tomcat application. You can imagine this as nested Heat template inclusion, but with the ability to override and/or extend it. The ability to add a workflow actually looks like dynamic Heat resource type generation, as you can specify actions and associated workflows. These actions can be triggered by external events to enable application life-cycle management.
Thanks Georgy On Mon, Feb 17, 2014 at 4:41 PM, Keith Bray keith.b...@rackspace.com wrote: Can someone elaborate further on the things that Murano is intended to solve within the OpenStack ecosystem? My observation has been that Murano has changed from a Windows focused Deployment Service to a Metadata Application Catalog Workflow thing (I fully admit this may be an invalid observation). It's unclear to me what OpenStack pain/use-cases is to be solved by complex object composition, description of data types, contracts... Your thoughts would be much appreciated. Thanks, -Keith From: Renat Akhmerov rakhme...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Monday, February 17, 2014 1:33 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Murano] Need a new DSL for Murano Clint, We're collaborating with Murano. We may need to do it in a way that others could see it, though. There are several things here: - Murano doesn't really have a workflow engine similar to Mistral's. People get confused by that, but it's just legacy terminology; I think the Murano folks were going to rename this component to be more precise about it. - Mistral DSL doesn't seem to be a good option for solving the tasks that Murano is intended to solve. Specifically I mean things like complex object composition, description of data types, contracts and so on. Like Alex and Stan mentioned, Murano DSL tends to grow into a full programming language. - Most likely Mistral will be used in Murano for implementation, at least we see where it would be valuable. But Mistral is not so mature yet; we need to keep working hard and be patient :) Anyway, we keep thinking on how to make both languages look similar, or at least allow the possibility to use them seamlessly if needed (call Mistral workflows from Murano DSL or vice versa). Renat Akhmerov @ Mirantis Inc.
On 16 Feb 2014, at 05:48, Clint Byrum cl...@fewbar.com wrote: Excerpts from Alexander Tivelkov's message of 2014-02-14 18:17:10 -0800: Hi folks, Murano matures, and we are getting more and more feedback from our early adopters. The overall reception is very positive, but at the same time there are some complaints as well. By now the most significant complaint is that it is hard to write workflows for application deployment and maintenance. The current version of the workflow definition markup really has some design drawbacks which limit its potential adoption. They are caused by the fact that it was never intended for use with Application Catalog use-cases. Just curious, is there any reason you're not collaborating on Mistral for this rather than both having a workflow engine?
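Georgy's Tomcat example maps onto ordinary class composition, with a library-provided parent handling the generic workflow and the application author overriding only an extension point. Murano's DSL is its own language, so the Python below is only an analogy, with made-up class and method names:

```python
# Rough Python analogy for the composition described in the thread: a
# library-provided TomcatApp class supplies the generic workflow, and
# an app author extends it with only the app-specific steps.
# All names here are illustrative, not Murano DSL.

class TomcatApp(object):
    """Library-provided class: instance creation, networking, Tomcat setup."""

    def deploy(self):
        steps = ["create instance", "wire network", "install tomcat"]
        steps += self.app_steps()  # extension point for subclasses
        return steps

    def app_steps(self):
        return []  # base Tomcat app has nothing extra to do

class MyWarApp(TomcatApp):
    """App author supplies only the app-specific parameter and script."""

    def __init__(self, war_url):
        self.war_url = war_url

    def app_steps(self):
        # Comparable to the "download the war, drop it into the Tomcat
        # app directory" script from the example.
        return ["download %s" % self.war_url, "copy war to webapps/"]

app = MyWarApp("http://example.com/app.war")
print(app.deploy())
```

This is the "nested Heat template inclusion with override" idea in miniature: the parent's deploy() is inherited intact, and the child changes behavior only at the declared extension point.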
Re: [openstack-dev] VPC Proposal
On Tue, Feb 18, 2014 at 10:03 AM, Martin, JC jch.mar...@gmail.com wrote: There were a lot of emails on that thread, but I am not seeing the discussion converging. I would like to reiterate my concerns: - We are trying to implement an API on a feature that is not supported by OpenStack I don't see it that way. I see this BP as converting neutron calls into AWS VPC calls with a little glue (which can be seen here https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support). But I am no networking expert, so take that with a large grain of salt. - As a result, the implementation is overloading existing constructs without implementing the full AWS capabilities and semantics (e.g. shared network isolation from VPC, or floating IP scoping to VPC). A partial implementation is better than no implementation; that being said, if we want to overhaul OpenStack's VPC capabilities I think the partial implementation would have to be thrown out. - Dependent blueprints are not implemented and have been deferred, resulting in a broken implementation (e.g. https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron) That is a requirement for phase 3, and shouldn't matter for phases 1 and 2. And only phase 1 is up for review. - This feature is only available through the EC2 API, which is likely going to result in another implementation for general use. - Users adopting the VPC model proposed through the EC2 API will be stuck in an upgrade mess when the proper implementation comes along. This point concerns me the most, can you elaborate? - There are new constructs in work that are better suited for implementing this concept properly (multi-tenant hierarchy and domains). All that being said, it sounds like there are two separate efforts to get VPC into OpenStack, one by supporting AWS specs, and a second native OpenStack version.
It sounds like further discussion is needed between these two efforts, so I am unapproving https://blueprints.launchpad.net/nova/+spec/aws-vpc-support as it needs further discussion. The last thing we want is to merge a controversial blueprint before all the questions can be resolved. As you can guess, I'm not really a fan, but it seems that only a few individuals are concerned. I would think that this topic would create more interest, especially on the network side. Maybe because of the subject tags. I will therefore copy this email with the Neutron tag. JC On Feb 17, 2014, at 10:10 PM, Rudra Rugge rru...@juniper.net wrote: I am not sure how to dig out the archives. There were a couple of emails exchanged with Salvatore on the thread pertaining to the extensions we were referring to as part of this blueprint. There are a few notes on the whiteboard of the blueprint as well. Regards, Rudra On 2/17/14, 1:28 PM, jc Martin jch.mar...@gmail.com wrote: Thanks, Do you have the links for the discussions? Thanks, JC Sent from my iPhone On Feb 17, 2014, at 11:29 AM, Rudra Rugge rru...@juniper.net wrote: JC, The BP has been updated with the correct links. I have removed the abandoned review #3. Please review #1 and #2. 1. https://review.openstack.org/#/c/40071/ This is the active review. There is one comment by Sean regarding adding a knob when Neutron is not used. That will be addressed with the next patch. 2. https://review.openstack.org/#/c/53171 This is the active review for tempest test cases as requested by Joe Gordon. Currently abandoned until #1 goes through. 3. https://review.openstack.org/#/c/53171 This review is not active. It was accidentally submitted with a new change-id. Regards, Rudra On 2/16/14, 9:25 AM, Martin, JC jch.mar...@gmail.com wrote: Harshad, I tried to find some discussion around this blueprint. Could you provide us with some notes or threads? Also, about the code review you mention.
which one are you talking about: https://review.openstack.org/#/c/40071/ https://review.openstack.org/#/c/49470/ https://review.openstack.org/#/c/53171 because they are all abandoned. Could you point me to the code, and update the BP, because it seems that the links are not correct. Thanks, JC On Feb 16, 2014, at 9:04 AM, Allamaraju, Subbu su...@subbu.org wrote: Harshad, Thanks for clarifying. We started looking at this as some of our customers/partners were interested in getting AWS API compatibility. We have had this blueprint and code review pending for a long time now. We will know based on this thread whether the community is interested. But I assumed that the community was interested, as the blueprint was approved and the code review has had no -1(s) for a long time now. Makes sense. I would leave it to others on this list to chime in if there is sufficient interest or not. To clarify, the incremental path from an AWS compatible API to an OpenStack model is not clear. In my mind an AWS compatible API does not need a new openstack model. As more discussion happen
Re: [openstack-dev] [Nova] Meetup Summary
Sylvain- As you can tell from the meeting today, the scheduler sub-group is really not the gantt group meeting; I try to make sure that messages for things like the agenda and whatnot include both `gantt' and `scheduler' in the subject so it's clear we're talking about the same thing. Note that our ultimate goal is to create a scheduler that is usable by other projects, not just nova, but that is a second task. The first task is to create a separate scheduler that will be usable by nova at a minimum. (World domination will follow later :) -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com] Sent: Monday, February 17, 2014 4:26 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova] Meetup Summary Hi Russell and Don, 2014-02-17 23:41 GMT+01:00 Russell Bryant rbry...@redhat.com: Greetings, 2) Gantt - We discussed the progress of the Gantt effort. After discussing the problems encountered so far and the other scheduler work going on, the consensus was that we really need to focus on decoupling the scheduler from the rest of Nova while it's still in the Nova tree. Don was still interested in working on the existing gantt tree to learn what he can about the coupling of the scheduler to the rest of Nova. Nobody had a problem with that, but it doesn't sound like we'll be ready to regenerate the gantt tree to be the real gantt tree soon. We probably need another cycle of development before it will be ready. As a follow-up to this, I wonder if we should rename the current gantt repository from openstack/gantt to stackforge/gantt to avoid any possible confusion. We should make it clear that we don't expect the current repo to be used yet. There is currently no precise meeting timeslot for Gantt other than the one with the Nova scheduler subteam.
Would it be possible to have a status on the current path for Gantt, so that people interested in joining the effort would be able to get in? There is currently a discussion on how Gantt and Nova should interact, in particular regarding HostState and how Nova computes could update their status so that Gantt would be able to filter on them. There are also other discussions about testing, API, etc., so I'm just wondering how to help and where. On a side note, if Gantt is becoming a Stackforge project planning to have Nova scheduling first, could we also assume that we could implement this service for use by other projects (such as Climate) in parallel with Nova? The current utilization-aware-scheduling blueprint [1] is nearly done, so that it can be used for other queries than just Nova scheduling, but unfortunately, as the scheduler is still part of Nova and without a REST API, it can't be leveraged by third-party projects. Thanks, -Sylvain [1] : https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
Re: [openstack-dev] [Solum] Regarding language pack database schema
Maybe a crazy idea, but… What if we simply don't store the JSON blob data for M1, instead of storing it in a way we don't like long term? This way, there is no need to remember to change something later, even though a bug could be created anyway. I believe the fields that would be missing/not stored in the blob are: * Compiler version * Language platform * OS platform Can we live with that for M1? On 2/18/14 12:07 PM, Adrian Otto adrian.o...@rackspace.com wrote: I agree. Let's proceed with option #2, and submit a wishlist bug to track this as tech debt. We would like to come back to this later and add an option to use a blob store for the JSON blob content, as Georgy mentioned. These could be stored in swift, or a K/V store. It might be nice to have a thin get/set abstraction there to allow alternates to be implemented as needed. I'm not sure exactly where we can track Paul Czarkowski's suggested restriction. We may need to just rely on reviewers to prevent this, because if we ever start introspecting the JSON blob, we will be using an SQL anti-pattern. I'm generally opposed to putting arbitrary-sized text and blob entries into a SQL database, because eventually you may run into the maximum allowable size (i.e. max_allowed_packet) and cause unexpected error conditions. Thanks, Adrian On Feb 18, 2014, at 8:48 AM, Paul Czarkowski paul.czarkow...@rackspace.com wrote: I'm also a +1 for #2. However, as discussed on IRC, we should clearly spell out that the JSON blob should never be treated in a SQL-like manner. The moment somebody says 'I want to make that item in the json searchable' is the time to discuss adding it as part of the SQL schema. On 2/13/14 4:39 PM, Clayton Coleman ccole...@redhat.com wrote: I like option #2, simply because we should force ourselves to justify every attribute that is extracted as a queryable parameter, rather than making them queryable at the start. - Original Message - Hi Arati, I would vote for Option #2 as a short term solution.
Probably later we can consider using a NoSQL DB, or MariaDB, which has a Column_JSON type to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review, which is still a WIP - https://review.openstack.org/#/c/71132/3 . There are a couple of different opinions on how we should be designing the schema. Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: Option 1: Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries, and each new attribute will require a code change. Option 2: We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in the case of complex attributes) only for this subset of attributes, and all other attributes will be part of a json blob. With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach. Suggestions regarding any other approaches are welcome too! Thanks, Arati -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob.
+1 650 996 3284
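Option 2 as discussed above — a small promoted, queryable subset of columns plus an opaque JSON blob that SQL never inspects — can be sketched with plain SQLite. Table and column names here are illustrative, not Solum's actual schema (that lives in the review linked above), and versions are compared as plain text for brevity:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE language_pack (
    id        INTEGER PRIMARY KEY,
    type      TEXT,   -- promoted attribute: searchable
    version   TEXT,   -- promoted attribute: searchable
    attr_blob TEXT    -- everything else: opaque JSON, never queried by SQL
)""")

# All non-promoted attributes go into the blob as-is.
blob = json.dumps({"os_platform": {"OS": "Ubuntu", "version": "12.04"},
                   "compiler_versions": ["1.6", "1.7"]})
conn.execute("INSERT INTO language_pack (type, version, attr_blob) VALUES (?, ?, ?)",
             ("java", "1.7", blob))

# 'find a language pack where type=java and version > 1.4': the filter hits
# only the promoted columns; the blob is decoded after the fetch.
row = conn.execute("SELECT attr_blob FROM language_pack WHERE type = ? AND version > ?",
                   ("java", "1.4")).fetchone()
print(json.loads(row[0])["os_platform"]["OS"])  # -> Ubuntu
```

Promoting another attribute later means a schema migration (a new column) plus a backfill from the blob, which is exactly the tech-debt trade-off accepted in the thread.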
Re: [openstack-dev] [cinder] Unit test cases failing with error 'cannot import rpcapi'
On Tue, Feb 18, 2014 at 1:21 PM, iKhan ik.ibadk...@gmail.com wrote: Hi All, All cinder test cases are failing with the error 'cannot import rpcapi', though the same files work fine in a live cinder setup. I wonder what's going wrong when unit testing is triggered. Can anyone help me out here? -- Thanks, IK I just pulled a fresh clone and am not seeing any issues on my side. Could it be a problem with your env? Are you running a venv? John
Re: [openstack-dev] [Neutron][LBaaS] L7 - Update L7Policy
Hi folks, I see little value in being able to debug such things, because it is for developers only. However, given that such a choice doesn't affect the workflow and public API, we can add corresponding calls to the driver API. Thanks, Eugene. On Wed, Feb 19, 2014 at 12:20 AM, Stephen Balukoff sbaluk...@bluebox.net wrote: Hi! On Tue, Feb 18, 2014 at 12:18 AM, Samuel Bercovici samu...@radware.com wrote: Hi, It matters, as someone might need to debug the backend setup, and the name, if it exists, can add details. This is obviously a vendor's choice if they wish to push this back to the backend, but the API should not remove this as a choice. +1 -Sam. -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
Re: [openstack-dev] [Nova] Meetup Summary
On 2/18/2014 1:12 PM, Russell Bryant wrote: On 02/18/2014 12:36 PM, Matt Riedemann wrote: 4) We talked about Nova's integration with Neutron and made some good progress. We came up with a blueprint (ideally for Icehouse) to improve Nova-Neutron interaction. There are two cases we need to improve that have been particularly painful. The first is the network info cache. Neutron can issue an API callback to Nova to let us know that we need to refresh the cache. The second is knowing that VIF setup is complete. Right now we have cases where we issue a request to Neutron and it is processed asynchronously. We have no way to know when it has finished. For example, we really need to know that the VIF plumbing is set up before we boot an instance and it tries its DHCP request. We can do this with nova-network, but with Neutron it's just a giant race. I'm actually surprised we've made it this long without fixing this. One or both of these issues (I'm thinking VIF readiness) is also causing a gate failure in master and stable/havana: https://bugs.launchpad.net/nova/+bug/1210483 I'd like to propose skipping that test if Tempest is configured with Neutron until we get the bug fixed/blueprint resolved. By the way, can I get a link to the blueprint to reference in the bug (or vice versa)? I haven't seen a blueprint for this yet. Mark, is that something you were planning on driving? Here we go: https://blueprints.launchpad.net/nova/+spec/check-neutron-port-status There are two patches up for it now from Aaron; it still needs (exception) approval for Icehouse. -- Thanks, Matt Riedemann
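The readiness check the check-neutron-port-status blueprint is after can be sketched generically. The helper below is hypothetical, not Nova's actual code: `get_status` stands in for whatever returns the Neutron port status (e.g. the `status` field from a show-port call), and booting would proceed only once the port reports ACTIVE:

```python
import time

def wait_for_port_active(get_status, timeout=60, interval=2,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll a port-status callable until it reports ACTIVE (ready),
    ERROR (failed), or the timeout expires. Returns True only on ACTIVE,
    so the caller can refuse to boot the instance otherwise."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status == "ACTIVE":
            return True
        if status == "ERROR":
            return False
        sleep(interval)
    return False

# Simulated port that becomes ACTIVE after a couple of polls:
statuses = iter(["DOWN", "BUILD", "ACTIVE"])
print(wait_for_port_active(lambda: next(statuses), sleep=lambda _s: None))  # -> True
```

This polling form is only one option; the blueprint discussion also covers Neutron notifying Nova via a callback, which avoids the busy-wait entirely.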
Re: [openstack-dev] [cinder] Unit test cases failing with error 'cannot import rpcapi'
Yes, I do run in a venv; I'm checking for missing libraries. On Wed, Feb 19, 2014 at 2:03 AM, John Griffith john.griff...@solidfire.com wrote: On Tue, Feb 18, 2014 at 1:21 PM, iKhan ik.ibadk...@gmail.com wrote: Hi All, All cinder test cases are failing with the error 'cannot import rpcapi', though the same files work fine in a live cinder setup. I wonder what's going wrong when unit testing is triggered. Can anyone help me out here? -- Thanks, IK I just pulled a fresh clone and am not seeing any issues on my side. Could it be a problem with your env? Are you running a venv? John -- Thanks, IK
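A failure that appears only inside the venv usually means the venv's site-packages has drifted from the live install. One minimal, generic way to check what the interpreter actually resolves a module to is `importlib.util.find_spec`; the helper name here is ours, and stdlib `json` is used as a stand-in only so the sketch runs anywhere (in a cinder tree one would check something like `cinder.volume.rpcapi` from inside the venv's python):

```python
import importlib.util

def resolve(module_name):
    """Return the filesystem path a module resolves to, or None if the
    interpreter cannot import it at all -- a quick way to tell a missing
    dependency apart from a broken one."""
    spec = importlib.util.find_spec(module_name)
    return getattr(spec, "origin", None) if spec else None

print(resolve("json"))                # path inside this interpreter's stdlib
print(resolve("no_such_module_xyz"))  # None: not importable in this env
```

If the module resolves outside the venv (or not at all), rebuilding the venv so dependencies are reinstalled is the usual fix.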
Re: [openstack-dev] [Neutron][LBaaS] L7 data types
A couple quick suggestions (additions):

Entity: L7Rule
- Attribute: type — possible values: HTTP_METHOD
- Attribute: compare_type — possible values: GT (greater than), LT (less than), GE (greater than or equal to), LE (less than or equal to)

Will we be doing syntax checking based on the L7Rule type being presented? (e.g. if we're going to check that HEADER X has a value that is greater than Y, are we going to make sure that Y is an integer? Or if we're going to check that the PATH STARTS_WITH Z, are we going to make sure that Z is a non-zero-length string?) Thanks, Stephen On Tue, Feb 18, 2014 at 3:58 AM, Avishay Balderman avish...@radware.com wrote: Here are the suggested values for the attributes below:

Entity: L7Rule
- Attribute: type — possible values: HOST_NAME, PATH, FILE_NAME, FILE_TYPE, HEADER, COOKIE
- Attribute: compare_type — possible values: EQUAL, CONTAINS, REGEX, STARTS_WITH, ENDS_WITH

Entity: L7VipPolicyAssociation
- Attribute: action — possible values: POOL (must have a pool id), REDIRECT (must have a url to be used as the redirect destination), REJECT

From: Oleg Bondarev [mailto:obonda...@mirantis.com] Sent: Monday, February 17, 2014 9:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 data types Hi, I would add another candidate for being a closed set: L7VipPolicyAssociation.action (use_backend, block, etc.) Thanks, Oleg On Sun, Feb 16, 2014 at 3:53 PM, Avishay Balderman avish...@radware.com wrote: (removing extra space from the subject – let email clients apply their filters) From: Avishay Balderman Sent: Sunday, February 16, 2014 9:56 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Neutron][LBaaS] L7 data types Hi There are 2 fields in the L7 model that are candidates for being a closed set (Enum). I would like to hear your opinion.
Entity: L7Rule
- Field: type — this field holds the part of the request where we should look for a value. Possible values: URL, HEADER, BODY, (?)
- Field: compare_type — the way we compare the value against a given value. Possible values: REG_EXP, EQ, GT, LT, EQ_IGNORE_CASE, (?)

*Note*: With REG_EXP we can cover the rest of the values. In general, in the L7Rule one can express the following (example): “check if the value of the header named ‘Jack’ starts with X” – if this is true, the rule “returns” true. Thanks Avishay -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
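The closed sets proposed in this thread, plus the per-type syntax checking Stephen asks about, could look like the following sketch. The validator is hypothetical (not an actual Neutron API); the sets combine Avishay's values with Stephen's suggested additions:

```python
import re

# Closed sets as proposed in the thread; HTTP_METHOD and the ordering
# comparators (GT/LT/GE/LE) are the suggested additions.
RULE_TYPES = {"HOST_NAME", "PATH", "FILE_NAME", "FILE_TYPE",
              "HEADER", "COOKIE", "HTTP_METHOD"}
COMPARE_TYPES = {"EQUAL", "CONTAINS", "REGEX", "STARTS_WITH", "ENDS_WITH",
                 "GT", "LT", "GE", "LE"}

def validate_rule(rule_type, compare_type, value):
    """Hypothetical syntax check: reject values that cannot possibly be
    compared the requested way (e.g. GT against a non-number)."""
    if rule_type not in RULE_TYPES or compare_type not in COMPARE_TYPES:
        return False
    if compare_type in ("GT", "LT", "GE", "LE"):
        # Ordering comparisons only make sense against a number.
        try:
            float(value)
        except (TypeError, ValueError):
            return False
    elif compare_type == "REGEX":
        # The pattern must at least compile.
        try:
            re.compile(value)
        except re.error:
            return False
    else:
        # String matches need a non-empty pattern.
        return bool(value)
    return True
```

Whether such checks belong in the API layer or in each driver is exactly the kind of question the closed-set decision feeds into.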
Re: [openstack-dev] [Solum] Regarding language pack database schema
That is exactly option #2, which proposes to store attributes in columns. So there will be a limited set of attributes, and each of them will have its own column in a table. Thanks Georgy On Tue, Feb 18, 2014 at 10:55 AM, Paul Montgomery paul.montgom...@rackspace.com wrote: Maybe a crazy idea, but… What if we simply don't store the JSON blob data for M1, instead of storing it in a way we don't like long term? This way, there is no need to remember to change something later, even though a bug could be created anyway. I believe the fields that would be missing/not stored in the blob are: * Compiler version * Language platform * OS platform Can we live with that for M1? On 2/18/14 12:07 PM, Adrian Otto adrian.o...@rackspace.com wrote: I agree. Let's proceed with option #2, and submit a wishlist bug to track this as tech debt. We would like to come back to this later and add an option to use a blob store for the JSON blob content, as Georgy mentioned. These could be stored in swift, or a K/V store. It might be nice to have a thin get/set abstraction there to allow alternates to be implemented as needed. I'm not sure exactly where we can track Paul Czarkowski's suggested restriction. We may need to just rely on reviewers to prevent this, because if we ever start introspecting the JSON blob, we will be using an SQL anti-pattern. I'm generally opposed to putting arbitrary-sized text and blob entries into a SQL database, because eventually you may run into the maximum allowable size (i.e. max_allowed_packet) and cause unexpected error conditions. Thanks, Adrian On Feb 18, 2014, at 8:48 AM, Paul Czarkowski paul.czarkow...@rackspace.com wrote: I'm also a +1 for #2. However, as discussed on IRC, we should clearly spell out that the JSON blob should never be treated in a SQL-like manner. The moment somebody says 'I want to make that item in the json searchable' is the time to discuss adding it as part of the SQL schema.
On 2/13/14 4:39 PM, Clayton Coleman ccole...@redhat.com wrote: I like option #2, simply because we should force ourselves to justify every attribute that is extracted as a queryable parameter, rather than making them queryable at the start. - Original Message - Hi Arati, I would vote for Option #2 as a short term solution. Probably later we can consider using a NoSQL DB, or MariaDB, which has a Column_JSON type to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review, which is still a WIP - https://review.openstack.org/#/c/71132/3 . There are a couple of different opinions on how we should be designing the schema. Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: Option 1: Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries, and each new attribute will require a code change. Option 2: We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in the case of complex attributes) only for this subset of attributes, and all other attributes will be part of a json blob. With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach.
Suggestions regarding any other approaches are welcome too! Thanks, Arati -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] VPC Proposal
Joe, See my comments inline. On Feb 18, 2014, at 12:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Feb 18, 2014 at 10:03 AM, Martin, JC jch.mar...@gmail.com wrote: There were a lot of emails on that thread, but I am not seeing the discussion converging. I would like to reiterate my concerns: - We are trying to implement an API on a feature that is not supported by OpenStack I don't see it that way. I see this BP as converting neutron calls into AWS VPC calls with a little glue (which can be seen here https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support). But I am no networking expert, so take that with a large grain of salt. If we had the supporting constructs, I would be in favor of implementing the AWS VPC features. - As a result, the implementation is overloading existing constructs without implementing the full AWS capabilities and semantics (e.g. shared network isolation from VPC, or floating IP scoping to VPC). A partial implementation is better than no implementation; that being said, if we want to overhaul OpenStack's VPC capabilities I think the partial implementation would have to be thrown out. I agree too. However, given the choice, I would have preferred that we first augment the neutron network access and sharing model before building the API. It still qualifies as a partial implementation, but at least in the right order. - Dependent blueprints are not implemented and have been deferred, resulting in a broken implementation (e.g. https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron) That is a requirement for phase 3, and shouldn't matter for phases 1 and 2. And only phase 1 is up for review. My point is that it does matter, as it gives users the feeling that they get parity in terms of isolation, but they do not, because of the missing isolation and sharing constructs. - This feature is only available through the EC2 API, which is likely going to result in another implementation for general use.
- Users adopting the VPC model proposed through the EC2 API will be stuck in an upgrade mess when the proper implementation comes along. This point concerns me the most, can you elaborate? First, while AWS does not support projects, they do support, through IAM, very flexible policies for VPC resource access; see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html If we wanted to reproduce this in OpenStack, we could map this to levels in the multi-tenant hierarchy that Vish is proposing. However, because this project puts all the resources in the same VPC project, it's not possible anymore to implement the access policies without moving resources between projects or recreating the VPC. - There are new constructs in work that are better suited for implementing this concept properly (multi-tenant hierarchy and domains). All that being said, it sounds like there are two separate efforts to get VPC into OpenStack, one by supporting AWS specs, and a second native OpenStack version. It sounds like further discussion is needed between these two efforts, so I am unapproving https://blueprints.launchpad.net/nova/+spec/aws-vpc-support as it needs further discussion. The last thing we want is to merge a controversial blueprint before all the questions can be resolved. We should keep the discussion going, as I'm sure that we can get to a better proposal. JC
Re: [openstack-dev] [Neutron][LBaaS] L7 data types
Oh! One thing I forgot to mention below: On Sat, Feb 15, 2014 at 11:55 PM, Avishay Balderman avish...@radware.com wrote: Entity: L7Rule Field: compare_type Description: The way we compare the value against a given value Possible values: REG_EXP, EQ, GT, LT, EQ_IGNORE_CASE, (?) *Note*: With REG_EXP we can cover the rest of the values. I seem to recall reading in the haproxy manual that regex matches are generally a lot slower than other kinds of matches. So, if you can do a 'path_beg' match, for example, that's a better-performing choice than a 'path_reg' match which would accomplish the same thing. Given you have mentioned that 'STARTS_WITH' and 'ENDS_WITH' are listed as compare_types we're enumerating, I guess this has already been taken into account? -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
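For reference, the haproxy distinction Stephen mentions looks like this as a config fragment (ACL and backend names are illustrative). Both ACLs match requests whose path starts with /static, but the prefix match avoids invoking the regex engine on every request:

```
# Inside a haproxy frontend section (names are illustrative):
acl static_fast path_beg /static      # STARTS_WITH: cheap prefix compare
acl static_slow path_reg ^/static     # REGEX: same effect, costlier per request
use_backend static_servers if static_fast
```

This is why exposing STARTS_WITH/ENDS_WITH as first-class compare_types, instead of forcing everything through REGEX, lets a haproxy-based driver pick the cheaper matcher.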
Re: [openstack-dev] VPC Proposal
1. The feature gives AWS VPC API compatibility with the existing openstack structure. 2. It does give full AWS compatibility (except for the network ACL, which was deferred). Shared networks and FIPs within the scope of a VPC are not something AWS provides, so it is not partial support. 3. IMO it would not be a major change to go from what we are proposing to what JC is proposing, as far as the AWS VPC API(s) are concerned. 4. I can understand developers not liking the AWS API(s), but many users of openstack will benefit. 5. Multi-tenant hierarchy and domains won't affect the AWS API(s) in any way. IMO there is no need to defer this blueprint. On Tue, Feb 18, 2014 at 10:03 AM, Martin, JC jch.mar...@gmail.com wrote: There were a lot of emails on that thread, but I am not seeing the discussion converging. I would like to reiterate my concerns: - We are trying to implement an API on a feature that is not supported by OpenStack - As a result, the implementation is overloading existing constructs without implementing the full AWS capabilities and semantics (e.g. shared network isolation from VPC, or floating IP scoping to VPC). - Dependent blueprints are not implemented and have been deferred, resulting in a broken implementation (e.g. https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron ) - This feature is only available through the EC2 API, which is likely going to result in another implementation for general use. - Users adopting the VPC model proposed through the EC2 API will be stuck in an upgrade mess when the proper implementation comes along. - There are new constructs in work that are better suited for implementing this concept properly (multi-tenant hierarchy and domains). As you can guess, I'm not really a fan, but it seems that only a few individuals are concerned. I would think that this topic would create more interest, especially on the network side. Maybe because of the subject tags. I will therefore copy this email with the Neutron tag.
JC On Feb 17, 2014, at 10:10 PM, Rudra Rugge rru...@juniper.net wrote: I am not sure how to dig out the archives. There were a couple of emails exchanged with Salvatore on the thread pertaining to the extensions we were referring to as part of this blueprint. There are a few notes on the whiteboard of the blueprint as well. Regards, Rudra On 2/17/14, 1:28 PM, jc Martin jch.mar...@gmail.com wrote: Thanks, Do you have the links for the discussions? Thanks, JC Sent from my iPhone On Feb 17, 2014, at 11:29 AM, Rudra Rugge rru...@juniper.net wrote: JC, The BP has been updated with the correct links. I have removed the abandoned review #3. Please review #1 and #2. 1. https://review.openstack.org/#/c/40071/ This is the active review. There is one comment by Sean regarding adding a knob when Neutron is not used. That will be addressed with the next patch. 2. https://review.openstack.org/#/c/53171 This is the active review for tempest test cases as requested by Joe Gordon. Currently abandoned until #1 goes through. 3. https://review.openstack.org/#/c/53171 This review is not active. It was accidentally submitted with a new change-id. Regards, Rudra On 2/16/14, 9:25 AM, Martin, JC jch.mar...@gmail.com wrote: Harshad, I tried to find some discussion around this blueprint. Could you provide us with some notes or threads? Also, about the code review you mention, which one are you talking about: https://review.openstack.org/#/c/40071/ https://review.openstack.org/#/c/49470/ https://review.openstack.org/#/c/53171 because they are all abandoned. Could you point me to the code, and update the BP, because it seems that the links are not correct. Thanks, JC On Feb 16, 2014, at 9:04 AM, Allamaraju, Subbu su...@subbu.org wrote: Harshad, Thanks for clarifying. We started looking at this as some of our customers/partners were interested in getting AWS API compatibility. We have had this blueprint and code review pending for a long time now.
We will know based on this thread whether the community is interested. But I assumed that the community was interested, as the blueprint was approved and the code review has had no -1(s) for a long time now. Makes sense. I would leave it to others on this list to chime in if there is sufficient interest or not. To clarify: the incremental path from an AWS compatible API to an OpenStack model is not clear. In my mind an AWS compatible API does not need a new OpenStack model. As more discussion happens on JC's proposal and the implementation becomes clear, we will know how incremental the path is. But at a high level there are two major differences 1. A new first-class object will be introduced, which affects all components 2. More than one project can be supported within a VPC. But it does not change the AWS API(s). So even in JC(s) model if you want AWS API then we will
Re: [openstack-dev] Hierarchical Multitenancy and resource ownership
Vish, See comments below. JC On Feb 18, 2014, at 12:19 PM, Vishvananda Ishaya vishvana...@gmail.com wrote: On Feb 18, 2014, at 11:31 AM, Martin, JC jch.mar...@gmail.com wrote: I see a lot of good things happening on the hierarchical multi-tenancy proposal that Vish made a while back. However, the focus so far is on roles and quotas, and I could not find any discussion related to resource ownership. Is the plan to allow the creation of resources within any level of the hierarchy, or is the plan to allow the visibility of resources up to a level in the hierarchy? Or both? For example, if I have: - orga.vpca.projecta - orga.vpca.projectb and I want to share a resource like a network between projecta and projectb, should the network be owned by vpca, or should it be owned by projecta or projectb, or a vpca.admin project and then shared to all children of vpca? I think either would work, and both may be required. Opinions? We haven't discussed inheriting ownership of objects, but at first glance it seems confusing: how would one determine if an object in vpca is "shared" and visible to projects below, and if it is, how far down the hierarchy would it be visible? It is probably best to keep this explicit for the moment. I've been thinking of sharing as objects that appear at multiple places in the hierarchy. This could be a list of "owners" or "shares", but I think it would support either of your options. My initial thought would be to just put the network resource in orga.vpca and then share it to the projects. This of course gets a little tedious when other projects are added later, but it avoids the complications I mentioned above. The way it would work is that when one is, for example, creating a network with a 'shared' semantic (in a leaf project for example), the call would have to be extended with a scope (for backward compatibility, no scope would mean all/domain). e.g.
neutron net-create --shared:orga.vpca vpca-shared-net instead of just neutron net-create --shared orga-shared-net another option is to implement the same policy mechanism that AWS has to allow the definition of scope based on rules. see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html JC ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
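The scoped-sharing semantic discussed above can be illustrated with a small sketch. Nothing below is real Neutron or Keystone code; the dotted project paths (orga.vpca.projecta) come from the example in this thread, and the prefix-matching rule is an assumption: a resource shared at scope S is visible to S itself and to every project below it in the hierarchy.

```python
def is_visible(share_scope, project_path):
    """Return True if a resource shared at `share_scope` is visible
    to the project identified by dotted `project_path`.

    A scope of None models today's behaviour (--shared with no scope):
    visible to everyone. Otherwise visibility is a prefix match on
    the dotted path segments.
    """
    if share_scope is None:
        return True
    scope_parts = share_scope.split(".")
    project_parts = project_path.split(".")
    return project_parts[:len(scope_parts)] == scope_parts


# A network shared at orga.vpca is visible to both child projects,
# but not to a project in a sibling VPC.
assert is_visible("orga.vpca", "orga.vpca.projecta")
assert is_visible("orga.vpca", "orga.vpca.projectb")
assert not is_visible("orga.vpca", "orga.vpcb.projecta")
# No scope given: shared with all, as with today's --shared flag.
assert is_visible(None, "orgb.vpcx.projecty")
```

This is essentially the "how far down is it visible" question Vish raises: with an explicit scope the answer is mechanical, all the way down from the scope point.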
Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance
Hi all, In Rackspace's quark plugin (github.com/rackerlabs/quark), we've developed an extension for MAC address ranges (MARs) as a top-level resource. Thus, the Neutron service manages MAC address allocation from a pool of ranges (as opposed to randomly generating a MAC address). However, we haven't made a relationship between MARs and subnets/networks. Amir On Feb 18, 2014, at 11:24 AM, Tim Bell tim.b...@cern.ch wrote: Jay, We've got a similar requirement at CERN, where we would like to have pools of IP/MAC combinations for each subnet and have it so that the user is just allocated one (and for the same subnet that the hypervisor is on). We've not found a good solution so far. Tim -Original Message- From: Dong Liu [mailto:willowd...@gmail.com] Sent: 18 February 2014 18:12 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance Hi Jay, In the neutron API, you can create a port with a specified mac_address and fixed IP, and then create the VM with this port. But the mapping between them needs to be managed by yourself. On Feb 18, 2014, at 22:41, Jay Lau jay.lau@gmail.com wrote: Greetings, Not sure if it is suitable to ask this question on the openstack-dev list. Here comes a question related to networking, and I want to get some input or comments from you experts. My case is as follows: For security reasons, I want to put both MAC and internal IP addresses into a pool, and when creating a VM, get a MAC and its mapped IP address from the pool and assign them to the VM. For example, suppose I have the following MAC and IP pool: 1) 78:2b:cb:af:78:b0, 192.168.0.10 2) 78:2b:cb:af:78:b1, 192.168.0.11 3) 78:2b:cb:af:78:b2, 192.168.0.12 4) 78:2b:cb:af:78:b3, 192.168.0.13 Then I can create four VMs using the above MAC and IP addresses; each row above maps to a VM. Does any of you have an idea for a solution to this?
-- Thanks, Jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
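As Dong notes, Neutron will create a port with a caller-supplied MAC address and fixed IP, but keeping the MAC-to-IP pairs matched up is left to the caller. A minimal sketch of that bookkeeping follows; the pool contents come from Jay's example, and the neutron/nova calls that would consume each allocation are only indicated in comments, not verified here.

```python
class MacIpPool:
    """Hands out (MAC, IP) pairs from a fixed pool, as in Jay's example.

    Each allocated pair would then be used to create a Neutron port
    with that mac_address and fixed IP, and the resulting port passed
    to the VM at boot time (the actual API/CLI invocations are left
    out of this sketch).
    """

    def __init__(self, pairs):
        self._free = list(pairs)       # ordered list of (mac, ip)
        self._in_use = {}              # mac -> (ip, vm_name)

    def allocate(self, vm_name):
        """Take the next free (MAC, IP) pair and record who holds it."""
        if not self._free:
            raise RuntimeError("MAC/IP pool exhausted")
        mac, ip = self._free.pop(0)
        self._in_use[mac] = (ip, vm_name)
        return mac, ip

    def release(self, mac):
        """Return a pair to the pool when its VM is deleted."""
        ip, _vm = self._in_use.pop(mac)
        self._free.append((mac, ip))


pool = MacIpPool([
    ("78:2b:cb:af:78:b0", "192.168.0.10"),
    ("78:2b:cb:af:78:b1", "192.168.0.11"),
    ("78:2b:cb:af:78:b2", "192.168.0.12"),
    ("78:2b:cb:af:78:b3", "192.168.0.13"),
])
mac, ip = pool.allocate("vm1")
assert (mac, ip) == ("78:2b:cb:af:78:b0", "192.168.0.10")
pool.release(mac)
```

In a real deployment this state would of course need to live somewhere durable (a database, or an extension like Rackspace's MAC address ranges) rather than in process memory.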
Re: [openstack-dev] [Nova] Meetup Summary
From: Russell Bryant [rbry...@redhat.com] Sent: 17 February 2014 22:41 To: OpenStack Development Mailing List Subject: [openstack-dev] [Nova] Meetup Summary 5) Driver CI - We talked about the ongoing effort to set up CI for all of the compute drivers. The discussion was mostly a status review. At this point, the Xenserver and Docker drivers are both at risk of being removed from Nova for the Icehouse release if CI is not up and running in time. Just a quick update on this - the Citrix CI is up and running and commenting on jobs that pass full tempest using XenServer 6.2 with the XenAPI Driver. We are not currently commenting on jobs that fail as I think there are some false negatives we need to iron out - or convince ourselves they are the same as the bugs that are hitting the official gate - over the next few days. The CI is currently working through a backlog of jobs, but it's running tests against nova, devstack and tempest. Bob ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] help with python-cinderclient unit test
Hi, I am finishing a patch that Seif Lofty started. It shows more information for the quota-usage health. https://review.openstack.org/gitweb?p=openstack%2Fpython-cinderclient.git;a=commitdiff;h=785cae3a17fbeccb366b01ece8f8704edf4d2ae7 I am not sure how the unit test for this should work. I've tried adding

    def test_quota_usage_show(self):
        self.run_command('quota-usage demo')

at the end of cinder/tests/v1/test_shell.py, and when running tox I get: AssertionError: Called unknown API method: GET /os-quota-sets/demo?usage=True, expected fakes method name: get_os_quota_sets_demo Any ideas or suggestions? Thanks, Luis ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
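The error message itself hints at the answer: the fake HTTP client used by these tests munges the request URL into a Python method name and looks that method up on the fakes class, so the usual fix is to add a matching get_os_quota_sets_demo method (returning a canned status/headers/body tuple) to cinderclient/tests/v1/fakes.py. The munging below is a reconstruction of that convention inferred from the error text (query string dropped, slashes and dashes turned into underscores), not a verbatim copy of the cinderclient source — check it against your tree.

```python
def fake_callback_name(method, url):
    """Reproduce the fakes dispatch implied by the error message:
    'GET /os-quota-sets/demo?usage=True' is expected to resolve to a
    fakes method called get_os_quota_sets_demo."""
    path = url.rsplit("?", 1)[0]                  # drop the query string
    munged = path.strip("/").replace("/", "_").replace("-", "_")
    return "%s_%s" % (method.lower(), munged)


assert (fake_callback_name("GET", "/os-quota-sets/demo?usage=True")
        == "get_os_quota_sets_demo")
```

So the test itself is fine; it is the fakes class that does not yet know how to answer the new request.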
Re: [openstack-dev] [Nova] Meetup Summary
Hi Don, 2014-02-18 21:28 GMT+01:00 Dugger, Donald D donald.d.dug...@intel.com: Sylvain- As you can tell from the meeting today, the scheduler sub-group is really not the gantt group meeting. I try to make sure that messages for things like the agenda and whatnot include both `gantt' and `scheduler' in the subject so it's clear we're talking about the same thing. That's the main reason why I was unable to attend the previous scheduler meetings... Now that I have attended today's meeting, it is quite clear to me. I apologize for the misunderstanding, but as I can't dedicate all my time to Gantt/Nova, I have to make sure the time I spend on it is worth it. Now that we have agreed on a plan for the next steps, I think it's important to put the info into Gantt blueprints, even if most of the changes are related to Nova. The current etherpad is huge, and IMHO it frightens off people who would want to contribute. Note that our ultimate goal is to create a scheduler that is usable by other projects, not just Nova, but that is a second task. The first task is to create a separate scheduler that will be usable by Nova at a minimum. (World domination will follow later :) Agreed. I'm just thinking about the opportunity of providing a REST API on top of the scheduler RPC API with a 1:1 matching, so that the Gantt project could stand up by itself. I don't think that's hard to do, given that I already did the same work for Climate (providing a Pecan/WSME API). What do you think about it? Even if it's not top priority, it's a quick win. -Sylvain -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com] *Sent:* Monday, February 17, 2014 4:26 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] [Nova] Meetup Summary Hi Russell and Don, 2014-02-17 23:41 GMT+01:00 Russell Bryant rbry...@redhat.com: Greetings, 2) Gantt - We discussed the progress of the Gantt effort.
After discussing the problems encountered so far and the other scheduler work going on, the consensus was that we really need to focus on decoupling the scheduler from the rest of Nova while it's still in the Nova tree. Don was still interested in working on the existing gantt tree to learn what he can about the coupling of the scheduler to the rest of Nova. Nobody had a problem with that, but it doesn't sound like we'll be ready to regenerate the gantt tree to be the real gantt tree soon. We probably need another cycle of development before it will be ready. As a follow-up to this, I wonder if we should rename the current gantt repository from openstack/gantt to stackforge/gantt to avoid any possible confusion. We should make it clear that we don't expect the current repo to be used yet. There is currently no dedicated meeting timeslot for Gantt other than the Nova scheduler subteam one. Would it be possible to have a status update on the current path for Gantt, so that people interested in joining the effort would be able to get in? There is currently a discussion on how Gantt and Nova should interact, in particular regarding HostState and how Nova computes could update their status so that Gantt would be able to filter on them. There are also other discussions about testing, the API, etc., so I'm just wondering how to help and where. On a side note, if Gantt is becoming a Stackforge project planning to target Nova scheduling first, could we also assume that we could implement this service for use by other projects (such as Climate) in parallel with Nova? The current utilization-aware-scheduling blueprint [1] is nearly done, so that it can be used for other queries than just Nova scheduling, but unfortunately, as the scheduler is still part of Nova and has no REST API, it can't be leveraged by third-party projects.
Thanks, -Sylvain [1] : https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack-dev] [neutron] [ml2] Neutron and ML2 - adding new network type
Hello, Thanks for the answer. I want to add my own network type, which will be very similar to the flat network (I think the type_driver will be the same) but will assign IPs to instances in a different way (not exactly with some L2 protocol). I also want my own network type to have its own name so that I can distinguish it. Maybe there are other reasons to do that as well. -- Best regards Sławek Kapłoński On Tuesday, 18 February 2014 10:08:50, you wrote: [Moving to -dev list] On Feb 18, 2014, at 9:12 AM, Sławek Kapłoński sla...@kaplonski.pl wrote: Hello, I'm trying to build something with neutron and the ML2 plugin. I need to add my own external network type (alongside the existing Flat, VLAN, GRE and so on). I searched for documentation on this but couldn't find anything. Can someone explain how I should do that? Is it enough to add my own type_driver and mechanism_driver to ML2? Or should I do something else as well? Hi Sławek: Can you explain more about what you're looking to achieve here? I'm just curious how the existing TypeDrivers won't cover your use case. ML2 was designed to remove segmentation management from the MechanismDrivers so they could all share segment types. Perhaps understanding what you're trying to achieve would help better understand the approach to take here. Thanks, Kyle Thanks in advance -- Sławek Kapłoński sla...@kaplonski.pl ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openst...@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
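For reference, an ML2 type driver of that era is a class implementing the TypeDriver interface (get_type plus segment validate/reserve/release hooks) that Neutron loads by the name configured in type_drivers. The sketch below only mirrors the shape of that interface: the base class here is a stand-in so the snippet is self-contained, and the method names, while modeled on the driver_api of the time, should be checked against your Neutron tree before use.

```python
class TypeDriver(object):
    """Stand-in for neutron.plugins.ml2.driver_api.TypeDriver
    (assumed interface; not the real class)."""
    def get_type(self):
        raise NotImplementedError


class MyFlatTypeDriver(TypeDriver):
    """A flat-like network type with its own name, so that networks of
    this type can be distinguished from ordinary 'flat' networks."""

    def get_type(self):
        # The string operators put in ml2's type_drivers /
        # provider:network_type to select this driver.
        return "myflat"

    def initialize(self):
        pass  # no segmentation state to set up for a flat-like type

    def validate_provider_segment(self, segment):
        pass  # a flat-like type carries no segmentation_id to check

    def reserve_provider_segment(self, session, segment):
        pass  # nothing to reserve: flat-like types are not partitioned

    def release_segment(self, session, segment):
        pass  # nothing was reserved, so nothing to release


driver = MyFlatTypeDriver()
assert driver.get_type() == "myflat"
```

Note that the different IP-assignment behaviour Sławek describes would not live in the type driver at all — the type driver only manages segments — so that part would likely belong in a mechanism driver or in the IPAM/DHCP path.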
Re: [openstack-dev] VPC Proposal
On Tue, Feb 18, 2014 at 1:01 PM, Martin, JC jch.mar...@gmail.com wrote: Joe, See my comments inline. On Feb 18, 2014, at 12:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Feb 18, 2014 at 10:03 AM, Martin, JC jch.mar...@gmail.com wrote: There were a lot of emails on that thread, but I am not seeing the discussion converging. I would like to reiterate my concerns: - We are trying to implement an API for a feature that is not supported by OpenStack I don't see it that way. I see this BP as converting neutron calls into AWS VPC calls with a little glue (which can be seen here https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support). But I am no networking expert, so take that with a large grain of salt. If we had the supporting constructs, I would be in favor of implementing the AWS VPC features. - As a result, the implementation overloads existing constructs without implementing the full AWS capabilities and semantics (e.g. shared network isolation from a VPC, or floating IP scoping to a VPC). A partial implementation is better than no implementation; that being said, if we want to overhaul OpenStack's VPC capabilities, I think the partial implementation would have to be thrown out. I agree too. However, given the choice, I would have preferred that we first augment the neutron network access and sharing model before building the API. It would still qualify as a partial implementation, but at least done in the right order. - Dependent blueprints are not implemented and have been deferred, resulting in a broken implementation (e.g. https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron) That is a requirement for phase 3, and shouldn't matter for phase 1 and 2. And only phase 1 is up for review. My point is that it does matter, as it gives users the feeling that they get parity in terms of isolation, but they do not, because of the missing isolation and sharing constructs.
- This feature is only available through the EC2 API, which is likely going to result in another implementation for general use. - Users adopting the VPC model proposed through the EC2 API will be stuck in an upgrade mess when the proper implementation comes along. This point concerns me the most; can you elaborate? First, while AWS does not support projects, they do support, through IAM, very flexible policies for VPC resource access, see http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html If we wanted to reproduce this in OpenStack, we could map this to levels in the multi-tenant hierarchy that Vish is proposing. However, because this project puts all the resources in the same VPC project, it is no longer possible to implement the access policies without moving resources between projects or recreating the VPC. Thanks for the clarification. - There are new constructs in the works that are better suited to implementing this concept properly (multi-tenant hierarchy and domains). All that being said, it sounds like there are two separate efforts to get VPCs into OpenStack: one by supporting the AWS specs, and a second, native OpenStack version. It sounds like further discussion is needed between these two efforts, so I am unapproving https://blueprints.launchpad.net/nova/+spec/aws-vpc-support as it needs further discussion. The last thing we want is to merge a controversial blueprint before all the questions can be resolved. We should keep the discussion going, as I'm sure that we can get to a better proposal. agreed. JC ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance
Thanks Dong for the great help, it did work with the command line! This seems not to be available via the dashboard, right? Thanks, Jay 2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com: Hi Jay, In the neutron API, you can create a port with a specified mac_address and fixed IP, and then create the VM with this port. But the mapping between them needs to be managed by yourself. On Feb 18, 2014, at 22:41, Jay Lau jay.lau@gmail.com wrote: Greetings, Not sure if it is suitable to ask this question on the openstack-dev list. Here comes a question related to networking, and I want to get some input or comments from you experts. My case is as follows: For security reasons, I want to put both MAC and internal IP addresses into a pool, and when creating a VM, get a MAC and its mapped IP address from the pool and assign them to the VM. For example, suppose I have the following MAC and IP pool: 1) 78:2b:cb:af:78:b0, 192.168.0.10 2) 78:2b:cb:af:78:b1, 192.168.0.11 3) 78:2b:cb:af:78:b2, 192.168.0.12 4) 78:2b:cb:af:78:b3, 192.168.0.13 Then I can create four VMs using the above MAC and IP addresses; each row above maps to a VM. Does any of you have an idea for a solution to this? -- Thanks, Jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [solum] Question about solum-minimal-cli BP
On 18/02/14 14:19, Shaunak Kashyap wrote: Thanks Angus and Devdatta. I think I understand. Angus -- what you said seems to mirror the Heroku CLI usage: a) the user runs app/plan create (to create the remote repo), then b) the user runs git push ... (which pushes the code to the remote repo and creates 1 assembly, resulting in a running application). If this is the intended flow for the user, it makes sense to me. Just to be clear, I am not totally sure we are going to glue git repo generation to create plan (it *could* be part of create assembly). One follow-up question: under what circumstances will the user need to explicitly run assembly create? Would it be used exclusively for adding more assemblies to an already running app? If you are not using the git-push mechanism but the git-pull one: here you have your own repo (say on github) and there is no git-repo-generation phase. -Angus Thanks, Shaunak From: Angus Salkeld [angus.salk...@rackspace.com] Sent: Monday, February 17, 2014 5:54 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP On 17/02/14 21:47, Shaunak Kashyap wrote: Hey folks, I was reading through https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation and have a question. If I'm understanding "app create" and "assembly create" correctly, the user will have to run "app create" first, followed by "assembly create", to have a running application. Is this correct? If so, what is the reason for "app create" not automatically creating one assembly as well? On that page it seems that app create is the same as plan create. The only reason I can see for separating the plan from the assembly is when you have git-push. Then you need to have something create the git repo for you. 1 plan create (with a reference to a git-push requirement) would create the remote git repo for you.
2 you clone and populate the repo with your app code 3 you push, and that causes the assembly create/update. Adrian might want to correct me here though. -Angus Thanks, Shaunak ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev