Re: [openstack-dev] [Ceilometer] Policy Issue
Hi ZhiQiang,

Found this discussion after I filed the bug report https://bugs.launchpad.net/ceilometer/+bug/1314372 . Sorry for that. More than happy to work with you on the following BP to implement more advanced and user-friendly policy settings for ceilometer: https://blueprints.launchpad.net/ceilometer/+spec/advanced-policy-rule

In your BP you note: "For now, a non-admin user can delete alarms created by other users in the same tenant, which seems not so good; after this bp is implemented, we can change the default behavior very easily if we want."

I was thinking that at the least we need support for role-based rules and generic rules (e.g. tenant_id:%(tenant_id)s) in policy.json. Let me know your plans, and also if you need any help.

Best regards,
Sampath

-----Original Message-----
From: Julien Danjou [mailto:jul...@danjou.info]
Sent: Monday, February 10, 2014 7:42 PM
To: ZhiQiang Fan
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Ceilometer] Policy Issue

On Mon, Feb 10 2014, ZhiQiang Fan wrote:

> So, is this loose policy limit designed purposely, or is it just a simple
> implementation of policy?

It's just that nobody stepped up to implement a more complete one, indeed.

> So, is there any opportunity to implement stricter policy checks, i.e. a
> normal user can read resources created by other users (or, to be stricter,
> this could be disabled too), but has read+write only for his own? I'd like
> to get some help or advice before creating a blueprint.

Yep, go ahead and create a blueprint. :) If you need help, just ask on this list or on IRC.

--
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
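For illustration, role-based and generic tenant-scoped rules of the kind mentioned above could look roughly like this in a policy.json file. The rule names and targets here are hypothetical, not ceilometer's actual defaults; with a generic rule like tenant_id:%(tenant_id)s, oslo policy matches an attribute of the target object against the caller's credentials, so alarm deletion could be restricted to the owning tenant without code changes:

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
    "telemetry:delete_alarm": "rule:admin_or_owner",
    "telemetry:change_alarm_state": "rule:admin_or_owner"
}
```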
[openstack-dev] [Cinder] About store faults info for volumes
Hi stackers,

I found that when an instance's status becomes ERROR, I can see the detailed fault info when I show the details of the instance, which makes it very convenient to find the reason for the failure. Indeed, there is a nova.instance_faults table which stores the fault info.

Maybe it would be helpful for users if Cinder also introduced a similar mechanism. Any advice?

--
zhangleiqiang (Trump)

Best Regards
[openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec
Hi,

I'm looking at moving init-keystone from tripleo-incubator to os-cloud-config, and I've drafted a spec at https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config . Feedback welcome.

Cheers,
--
Steve

I hate temporal mechanics!
 - Chief Miles O'Brien, Deep Space Nine
Re: [openstack-dev] [Horizon][dashboards] Running the 'profile' specific unit tests
On 24/04/14 16:05, Abishek Subramanian (absubram) wrote:
> Just to add to this - Akihiro did mention the usage of @override_settings.
> I've seen examples of this in existing unit tests and I can implement
> something similar. This will mean however that we will have the test with
> the default setting and then the same test again with the override which
> accepts the profile_id, yes?

That's fine. We do this in places already, for example for testing with and without domains [1], or with different services or extensions enabled [2] [3].

Julie

[1] https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/admin/users/tests.py
[2] https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/overview/tests.py#n321
[3] https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/overview/tests.py#n275
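For readers unfamiliar with the pattern: Django's @override_settings (which Horizon's test helpers build on) patches a settings value for the duration of a single test and restores it afterwards, so the same test logic can run with and without a feature flag. A minimal plain-Python stand-in of the idea, not Django's actual implementation, and with an illustrative PROFILE_SUPPORT setting name:

```python
import functools

class _Settings:
    """Stand-in for a Django-style settings object."""
    PROFILE_SUPPORT = None

settings = _Settings()

def override_settings(**overrides):
    """Decorator that patches `settings` for one call, then restores it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            saved = {k: getattr(settings, k) for k in overrides}
            for k, v in overrides.items():
                setattr(settings, k, v)
            try:
                return fn(*args, **kwargs)
            finally:
                for k, v in saved.items():
                    setattr(settings, k, v)
        return wrapper
    return decorator

def profile_enabled():
    return settings.PROFILE_SUPPORT is not None

def test_profile_disabled_by_default():
    assert not profile_enabled()

@override_settings(PROFILE_SUPPORT='cisco')
def test_profile_enabled():
    assert profile_enabled()
```

The point of the pattern is exactly what was asked above: the same code path is exercised once with the default setting and once with the override, and the override cannot leak into other tests.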
Re: [openstack-dev] [Horizon] Accessibility
Hi Doug,

On 24/04/14 16:06, Douglas Fish wrote:
> I've proposed a design session for accessibility for the Juno summit, and
> I'd like to get a discussion started on the work that needs to be done.
> (Thanks Julie P for pushing that!)

Thanks for starting the conversation!

> I've started to add information to the wiki that Joonwon Lee created:
> https://wiki.openstack.org/wiki/Horizon/WebAccessibility

I think that's going to be a good place to gather material.

> I'd like to see additional information added about what tools we can use
> to verify accessibility. I'd like to try to summarize the WCAG guidelines
> into some broad areas where Horizon needs work. I expect to add a
> checklist of accessibility-related items to consider while reviewing code.

This is a really good action item; the Review Checklist [1] definitely needs to be updated with Horizon-specific items.

> Joonwon (or anyone else with an interest in accessibility): It would be
> great if you could re-inspect the icehouse level code and create bugs for
> any issues that remain. I'll do the same for issues that I am aware of.
> In each bug we should include a link to the WCAG guideline that has been
> violated. Also, we should describe the testing technique: How was this
> bug discovered? How could a developer (or reviewer) determine the issue
> has actually been fixed? We should probably put that in each bug at
> first, but as we go they should be gathered up into the wiki page.

That sounds very reasonable to me as well. Making it clear to developers and reviewers why something is needed (and how to check it's fixed) will be very helpful as we learn to watch out for the issues around this.

> There are some broad areas of need that might justify blueprints (I'm
> thinking of WAI-ARIA tagging, and making sure external UI widgets that we
> pull in are accessible).

That already sounds like a good item for the Review Checklist: things to keep in mind when pulling in new Javascript libraries, since these tend to happen independently of the requirements repository. I'm thinking of a recent case where a JS library was later discovered not to be localised.

> Any suggestions on how to best share info, or organize bugs and
> blueprints, are welcome!

The wiki page is great work already, with specific links to different types of issues and why they're important. It should definitely be linked from the review checklist (perhaps once we have better foundations for accessibility?).

From your understanding of the work to be done based on earlier inspections of the code, do you think we should start with a largeish blueprint that gets us to a reasonable-though-not-yet-perfect state, or will this be workable as a series of independent bugs? Either way it would probably be nice to have an overall blueprint that links to the different bugs, so interested parties can track the work and know where to help if they wish to.

I'm looking forward to the session! I'll be doing some reading before then, thanks for linking again to that wiki page.

Julie

[1] https://wiki.openstack.org/wiki/ReviewChecklist

> Doug Fish
> IBM STG Cloud Solution Development
> T/L 553-6879, External Phone 507-253-6879
[openstack-dev] [ceilometer-client] query option discard special characters
Hi developers,

I find that ceilometer-client applies a strict rule to the query option; in particular, some special characters such as ~ and ! are not supported in key:value pairs. The regular expression used in ceilometerclient/v2/options.py is:

r'([[a-zA-Z0-9_.]+)([!]=)([^ -,\t\n\r\f\v]+)'

which means keys and values cannot contain spaces or special characters. But glance and nova (and maybe more projects) allow special characters in names and/or metadata; please read the bug report https://bugs.launchpad.net/python-ceilometerclient/+bug/1314544 for more detail.

So, if we create resources in glance or nova, they may not be filterable in ceilometer, e.g.:

# ceilometer sample-list -m instance -q 'metadata.display_name=special!key'
# ceilometer sample-list -m instance -q 'metadata.display_name=vm name'

The worse case is that ceilometer-client will send the wrong field, since the parser cannot handle the input, and the request will return a 400 error!

I think ceilometer-client should not be so strict. Any opinion?

Thanks
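The effect can be reproduced with a simplified version of the pattern. This is an illustration of the same character-class restriction, not the exact expression from ceilometerclient/v2/options.py:

```python
import re

# Key and value are both limited to [a-zA-Z0-9_.], so a '!' or a space
# in the value makes the whole query expression fail to parse.
OPTION_RE = re.compile(r'^([a-zA-Z0-9_.]+)(<=|>=|!=|<|>|=)([a-zA-Z0-9_.]+)$')

def parse(expr):
    m = OPTION_RE.match(expr)
    return m.groups() if m else None

print(parse('metadata.display_name=vm1'))
# ('metadata.display_name', '=', 'vm1')
print(parse('metadata.display_name=special!key'))
# None: the '!' in the value is rejected
```

Relaxing only the value character class (while keeping the operator set fixed) would be enough to accept names that glance and nova already allow.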
Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
As the PTL for Barbican, I'm happy to discuss this more here or at the Summit. Not sure if this is an option, but could you store the entire OpenVPN config file in Barbican rather than just the key? Not sure if you are generating those on demand or not, but we've had several teams inside Rackspace just storing entire config files rather than trying to separate out individual keys or passwords.

Jarret

On 4/30/14, 12:11 AM, Nachi Ueno na...@ntti3.com wrote:

> Hi Clint
>
> Thank you for your suggestion. Your point is taken :)
>
> Kyle: this is also the same discussion for LBaaS. Can we discuss this in
> the advanced services meeting?
>
> Zang: could you join the discussion?
>
> 2014-04-29 15:48 GMT-07:00 Clint Byrum cl...@fewbar.com:
>> Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700:
>>> Hi Kyle
>>>
>>> 2014-04-29 10:52 GMT-07:00 Kyle Mestery mest...@noironetworks.com:
>>>> On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno na...@ntti3.com wrote:
>>>>> Hi Zang
>>>>> Thank you for your contribution on this! The private key management
>>>>> is what I want to discuss at the summit.
>>>> Has the idea of using Barbican been discussed before? There are many
>>>> reasons why using Barbican for this may be better than developing key
>>>> management ourselves.
>>> No, however I'm +1 for using Barbican. Let's discuss this in the
>>> certificate management topic in the advanced services session.
>> Just a suggestion: Don't defer that until the summit. Sounds like
>> you've already got some consensus, so you don't need the summit just to
>> rubber stamp it. I suggest discussing as much as you can right now on
>> the mailing list, and using the time at the summit to resolve any
>> complicated issues, including any "a or b" things that need
>> crowd-sourced idea making. You can also use the summit time to
>> communicate your requirements to the Barbican developers. Point is:
>> just because you'll have face time, doesn't mean you should use it for
>> what can be done via the mailing list.
[openstack-dev] [qa] QA Summit Meet-up Atlanta
Hi folks,

last time we met one day before the Summit started for a short meet-up. Should we do the same this time? I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be fine for me.

Regards,
Marc
Re: [openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec
Hello Steve,

the spec looks correct to me. Thanks for picking this up, it's very needed, especially for Tuskar. Let me know if you need some help with testing it, or with anything else.

Kind Regards,
Ladislav

On 04/30/2014 09:02 AM, Steve Kowalik wrote:
> Hi,
> I'm looking at moving init-keystone from tripleo-incubator to
> os-cloud-config, and I've drafted a spec at
> https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config .
> Feedback welcome.
> Cheers,
[openstack-dev] How to write script to delete a port from openvswitch
Hi,

I want to write a script to add/delete a port on Open vSwitch.

sudo ovs-ofctl show br-int        -- lists all ports on the Open vSwitch bridge
sudo ovs-vsctl del-port br-int qbraeedg        -- deletes the Linux bridge interface from Open vSwitch

How can I automate this process? Each VM connects to Open vSwitch through a Linux bridge:

VM === Bridge === Open vSwitch

I want to statically map each Open vSwitch port to a particular MAC address, and I want to automate this process. Could anyone please help with how to write scripts to add/delete ports on Open vSwitch and the Linux bridge?

Thanks
Shiva
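One way to automate this is a thin wrapper around ovs-vsctl. This is a hedged sketch rather than a tested deployment script: br-int and qbraeedg are just the names from the example above, and the commands need root privileges. The --may-exist/--if-exists flags make the calls idempotent, so rerunning the script is safe. Command construction is split out so the logic can be checked without Open vSwitch installed:

```python
import subprocess

def build_cmd(action, bridge, port):
    """Build the ovs-vsctl command line for adding/deleting a port."""
    if action == 'add':
        return ['ovs-vsctl', '--may-exist', 'add-port', bridge, port]
    if action == 'del':
        return ['ovs-vsctl', '--if-exists', 'del-port', bridge, port]
    raise ValueError('action must be "add" or "del"')

def ovs_port(action, bridge, port):
    """Run the command; requires root and ovs-vsctl in PATH."""
    subprocess.check_call(build_cmd(action, bridge, port))

# Example (not executed here): ovs_port('del', 'br-int', 'qbraeedg')
```

The same pattern works for brctl addif/delif on the Linux bridge side. The static MAC-to-port mapping would be layered on top (e.g. via ovs-ofctl flow rules keyed on dl_src), but that part depends on the setup.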
[openstack-dev] [Neutron] Run migrations for service plugins every time
Hi everyone!

I'm working on the blueprint https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations. This topic was discussed earlier by Salvatore in the ML thread "Migrations, service plugins and Grenade jobs".

I'm researching how to make migrations for service plugins run unconditionally. In fact it is enough to change the should_run method for migrations, making it return True if the migration is for a service plugin: https://review.openstack.org/#/c/91323/1/neutron/db/migration/__init__.py. It almost works, except for conflicts of the VPNaaS service plugin with the Brocade and Nuage plugins: http://paste.openstack.org/show/77946/.

1) The migration for the Brocade plugin fails because 2c4af419145b_l3_support doesn't run (it adds the necessary table 'routers'). It can be fixed by adding the Brocade plugin to the migration_for_plugins list in 2c4af419145b_l3_support.

2) The migration for the Nuage plugin fails because e766b19a3bb_nuage_initial runs after 52ff27f7567a_support_for_vpnaas, when there is no 'routers' table yet. It can be fixed by adding the Nuage plugin to the migration_for_plugins list in 2c4af419145b_l3_support and removing the creation of these tables from e766b19a3bb_nuage_initial.

I also researched the possibility of making all migrations run unconditionally, but this cannot be done, as there would be a lot of conflicts: http://paste.openstack.org/show/77902/. Mostly these conflicts occur in the initial migrations for core plugins, as they create basic tables that were already created elsewhere. For example, mlnx_initial creates the tables securitygroups, securitygroupportbindings, networkdhcpagentbindings and portbindingports, which were already created by the 3cb5d900c5de_securitygroups, 4692d074d587_agent_scheduler and 176a85fc7d79_add_portbindings_db migrations.

Besides that, I was thinking about making migrations "smart": having each migration check whether all the migrations it depends on have been run. The main problem with that is that we would need to store information about applied migrations, so this would have had to be planned from the very beginning.

I look forward to any suggestions or thoughts on this topic.

Regards,
Ann
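The change under discussion amounts to something like the following. This is a hedged sketch of the should_run logic only; the real function in neutron/db/migration/__init__.py has a different signature, and the set of service plugin aliases here is illustrative:

```python
# Illustrative aliases; the real list would come from neutron's
# service plugin registry, not a hard-coded set.
SERVICE_PLUGINS = {'vpnaas', 'lbaas', 'fwaas', 'metering'}

def should_run(active_plugins, migrate_plugins):
    """Decide whether a migration executes for the current deployment."""
    if '*' in migrate_plugins:
        return True
    # Proposed change: migrations belonging to a service plugin run
    # unconditionally, regardless of which core plugin is active.
    if SERVICE_PLUGINS & set(migrate_plugins):
        return True
    # Otherwise keep the existing behavior: run only when the migration
    # targets one of the active plugins.
    return bool(set(active_plugins) & set(migrate_plugins))
```

With this shape, the Brocade/Nuage conflicts described above show up precisely because a service plugin migration (e.g. VPNaaS) may now run against a core plugin whose own migrations never created the tables it depends on.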
Re: [openstack-dev] [Neutron][LBaaS] L7 content switching APIs
Hi,

We have compared the API that is in the blueprint to the one described in Stephen's documents. These are the differences we have found:

1) L7PolicyVipAssoc is gone, which means that L7 policy reuse is not possible. I have added use cases 42 and 43 to show where such reuse makes sense.

2) There is a mix between L7 content switching and L7 content modification; the API in the blueprint only addresses L7 content switching. I think that we should separate the APIs from each other, and that we should review/add use cases targeting L7 content modification in the use cases document.
   a. You can see this in L7Policy: the APPEND_HEADER and DELETE_HEADER actions.

3) The action to redirect to a URL is missing in Stephen's document. The 'redirect' action in Stephen's document is equivalent to the 'pool' action in the blueprint/code.

4) All the objects have their parent id as an optional argument (L7Rule.l7_policy_id, L7Policy.listener_id); is this a mistake?

5) There is also the additional behavior based on L3 information (matching the client/source IP to a subnet). This is addressed by L7Rule.type with a value of 'CLIENT_IP' and L7Rule.compare_type with a value of 'SUBNET'. I think that using layer 3 information should not be part of L7 content switching, as the use cases I am aware of might require more than just selecting a different pool (e.g. a user with an internet IP browsing to an HTTPS-based application might need to be secured using 2K SSL keys, while internal users could use weaker keys).

I would like to state that although the wiki describes the solution from a high level, it is not totally in sync with the actual code. The key thing which is missing is that L7 policies in a specific listener/VIP are ordered (an ordered list) and are processed in order, so that the first policy that has a match is activated, and traversal of the L7 policy list is stopped, as the processing is final (e.g. redirect, pool, reject). This in effect means that L7 policies form an 'or' condition between them. L7 policies have an ordered list of L7 rules; L7 rules are processed in this order and also form an 'or' condition.

Regards,
-Avishay, Evgeny and Sam

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Sunday, April 27, 2014 1:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL and L7 content switching APIs

Hi,

The work to design the APIs concerning L7 content switching and SSL termination started a bit before the Icehouse summit, and it involved the ML in a very active fashion. The ML has been silent on this because we had completed the discussion and moved to implementation. We got to a very advanced state in completing the code, which was stopped due to the discussion about the core model (VIPs, Pools, etc.). The blueprints, wikis and code are public (https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules and https://blueprints.launchpad.net/neutron/+spec/lbaas-ssl-termination ). Please take the time to review, and discuss on the ML if something is missing, so we can talk about this at the summit.

-Sam.
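The ordered, first-match-wins semantics described above can be sketched as follows. The data model here is illustrative, not the actual Neutron API objects: policies are an ordered list, rules within a policy are OR'd, and the first policy with any matching rule decides the final action:

```python
def evaluate_l7(policies, request):
    """Return the action of the first policy whose rules match.

    Policies form an 'or' between them (first match wins, traversal
    stops); rules inside a policy also form an 'or'.
    """
    for policy in policies:                              # ordered list
        if any(rule(request) for rule in policy['rules']):
            return policy['action']                      # final action
    return ('pool', 'default')                           # nothing matched

# Illustrative policies: route API traffic to a dedicated pool,
# redirect static content to a CDN, everything else to the default pool.
policies = [
    {'rules': [lambda r: r['path'].startswith('/api')],
     'action': ('pool', 'api')},
    {'rules': [lambda r: r['host'] == 'img.example.com',
               lambda r: r['path'].startswith('/static')],
     'action': ('redirect', 'http://cdn.example.com')},
]
```

Because processing stops at the first match, the relative order of the two policies above is semantically significant, which is exactly why the wiki's flat description under-specifies the behavior.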
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
So by running ping during the instance interface update we can see ~10-20 sec of connectivity downtime. Here is a tcpdump capture during the update (pinging the external net gateway):

05:58:41.020791 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 10, length 64
05:58:41.020866 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 10, length 64
05:58:41.885381 STP 802.1s, Rapid STP, CIST Flags [Learn, Forward, Agreement]
05:58:42.022785 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 11, length 64
05:58:42.022832 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 11, length 64
[vm interface updated..]
05:58:43.023310 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 12, length 64
05:58:44.024042 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 13, length 64
05:58:45.025760 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 14, length 64
05:58:46.026260 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 15, length 64
05:58:47.027813 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 16, length 64
05:58:48.028229 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 17, length 64
05:58:49.029881 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 18, length 64
05:58:50.029952 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 19, length 64
05:58:51.031380 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 20, length 64
05:58:52.032012 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 21, length 64
05:58:53.033456 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 22, length 64
05:58:54.034061 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 23, length 64
05:58:55.035170 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 24, length 64
05:58:56.035988 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 25, length 64
05:58:57.037285 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 26, length 64
05:58:57.045691 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:58.038245 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 27, length 64
05:58:58.045496 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:59.040143 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 28, length 64
05:58:59.045609 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:00.040789 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 29, length 64
05:59:01.042333 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:01.042618 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa (oui Unknown), length 28
05:59:01.043471 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 30, length 64
05:59:01.063176 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 30, length 64
05:59:02.042699 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 31, length 64
05:59:02.042840 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 31, length 64

However, this connectivity downtime can be significantly reduced by restarting the network service on the instance right after the interface update.

On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery mest...@noironetworks.com wrote:
> On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com wrote:
>> On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery mest...@noironetworks.com wrote:
>>> On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com wrote:
>>>> Yeah, I also saw in the docs that update-device is supported since
>>>> version 0.8.0, not sure why it didn't work in my setup.
>>>> I installed the latest libvirt 1.2.3 and now update-device works just
>>>> fine and I am able to move an instance tap device from one bridge to
>>>> another with no downtime and no reboot! I'll try to investigate why it
>>>> didn't work on 0.9.8 and what the minimal libvirt version for this is.
>>> Wow, cool! This is really good news. Thanks for driving this!
>>> By chance did you notice if there was a drop in connectivity at all,
>>> or if the guest detected the move at all?
>> Didn't check it yet. What in your opinion would be the best way of
>> testing this?
> The simplest way would be to have a ping running when you run
> update-device and see if any packets are dropped. We can do more
> thorough testing after that, but that would give us a good approximation
> of connectivity while swapping the underlying device.
>
> Kyle
>
>> Thanks,
>> Oleg
>>
>> On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery mest...@noironetworks.com wrote:
Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta
2014-04-30 19:11 GMT+09:00 Koderer, Marc m.kode...@telekom.de:
> Hi folks,
> last time we met one day before the Summit started for a short meet-up.
> Should we do the same this time? I will arrive Saturday to recover from
> the jet lag ;) So Sunday 11th would be fine for me.

I may still be jet-lagged on Sunday, but the meet-up would be nice for me ;-)

Thanks
Ken'ichi Ohmichi
Re: [openstack-dev] [Ceilometer] Question of necessary queries for Event implemented on HBase
Hello Igor!

Could you clarify, please: why do we need an event_id + reversed_timestamp row key? Doesn't event_id identify the row by itself?

On Tue, Apr 29, 2014 at 11:08 AM, Igor Degtiarov idegtia...@mirantis.com wrote:

> Hi, everybody.
>
> I've started to work on the implementation of Events in ceilometer on the
> HBase backend, in the scope of the blueprint
> https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-feature
>
> By now, Events have been implemented only in SQL. Using SQL we can build
> any query we need; with HBase it is another story. The data structure is
> built based on the queries we need, so to design the structure of Events
> in HBase it is very important to answer the question of which queries
> should be supported to retrieve events from storage.
>
> I registered the blueprint
> https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-structure
> for discussing the Events structure in HBase. For today, a preliminary
> structure of Events in HBase has been prepared:
>
> table: Events
>  - rowkey: event_id + reversed_timestamp
>  - column: event_type = string with description of event
>  - [list of columns: trait_id + trait_desc + trait_type = trait_data]
>
> The proposed structure will support the following queries:
>  - event's generation time
>  - event id
>  - event type
>  - trait: id, description, type
>
> Any thoughts about additional queries that are necessary for Events? I'll
> publish the patch with the current implementation soon.
>
> Sincerely,
> Igor Degtiarov

--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
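On the row key question, a plausible rationale (an assumption about the design intent, since the blueprint excerpt above doesn't spell it out) is that HBase sorts rows lexicographically by key, so embedding a reversed timestamp makes range scans return newest events first. A sketch of the pattern, with an illustrative field layout:

```python
# Reversed-timestamp row keys: subtracting the timestamp from a fixed
# maximum makes newer events produce lexicographically smaller keys,
# so an HBase scan yields newest-first ordering. MAX_TS and the
# "id:timestamp" layout are illustrative, not the blueprint's schema.
MAX_TS = 10**18  # upper bound in nanoseconds (illustrative)

def row_key(event_id, timestamp_ns):
    reversed_ts = MAX_TS - timestamp_ns
    # Zero-padding keeps numeric order consistent with string order.
    return '%s:%019d' % (event_id, reversed_ts)
```

Note that with event_id as the key prefix this only orders the versions of one event_id; ordering events globally by time would need the reversed timestamp first, which may be exactly the trade-off worth clarifying in the blueprint.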
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
I think it's better to test with some TCP connection (an ssh session?) rather than with ping.

Eugene.

On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com wrote:
> So by running ping during the instance interface update we can see ~10-20
> sec of connectivity downtime. Here is a tcpdump capture during the update
> (pinging the external net gateway):
>
> [tcpdump capture and earlier thread history snipped; quoted in full in
> the previous message in this thread]
>
> However this connectivity downtime can be significantly reduced by
> restarting the network service on the instance right after the interface
> update.
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
Agreed, ping was a good first tool to verify downtime, but trying with something using TCP at this point would be useful as well. On Wed, Apr 30, 2014 at 8:39 AM, Eugene Nikanorov enikano...@mirantis.com wrote: I think it's better to test with some tcp connection (ssh session?) rather than with ping. Eugene. On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com wrote: So by running ping during the instance interface update we can see ~10-20 sec of connectivity downtime. Here is a tcp capture during the update (pinging ext net gateway):

05:58:41.020791 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 10, length 64
05:58:41.020866 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 10, length 64
05:58:41.885381 STP 802.1s, Rapid STP, CIST Flags [Learn, Forward, Agreement]
05:58:42.022785 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 11, length 64
05:58:42.022832 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 11, length 64
[vm interface updated..]
05:58:43.023310 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 12, length 64
05:58:44.024042 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 13, length 64
05:58:45.025760 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 14, length 64
05:58:46.026260 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 15, length 64
05:58:47.027813 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 16, length 64
05:58:48.028229 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 17, length 64
05:58:49.029881 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 18, length 64
05:58:50.029952 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 19, length 64
05:58:51.031380 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 20, length 64
05:58:52.032012 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 21, length 64
05:58:53.033456 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 22, length 64
05:58:54.034061 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 23, length 64
05:58:55.035170 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 24, length 64
05:58:56.035988 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 25, length 64
05:58:57.037285 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 26, length 64
05:58:57.045691 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:58.038245 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 27, length 64
05:58:58.045496 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:59.040143 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 28, length 64
05:58:59.045609 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:00.040789 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 29, length 64
05:59:01.042333 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:01.042618 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa (oui Unknown), length 28
05:59:01.043471 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 30, length 64
05:59:01.063176 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 30, length 64
05:59:02.042699 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 31, length 64
05:59:02.042840 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 31, length 64

However, this connectivity downtime can be significantly reduced by restarting the network service on the instance right after the interface update. On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery mest...@noironetworks.com wrote: On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com wrote: On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery mest...@noironetworks.com wrote: On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com wrote: Yeah, I also saw in the docs that update-device is supported since version 0.8.0; not sure why it didn't work in my setup.
I installed the latest libvirt (1.2.3) and now update-device works just fine: I am able to move the instance tap device from one bridge to another with no downtime and no reboot! I'll try to investigate why it didn't work on 0.9.8 and what the minimal libvirt version for this is. Wow, cool! This is really good news. Thanks for driving this! By chance did you notice if there was a drop in connectivity at all, or if the guest detected the move at all? Didn't check it yet. What in your opinion would be the best way of testing this? The simplest way would be to have a ping running when you run update-device and see if any packets are dropped. We can do more thorough testing after that, but that would give us a good approximation of connectivity while swapping the underlying device. Kyle Thanks, Oleg On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery mest...@noironetworks.com wrote: According to this page [1], update-device is
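For readers following along, the retargeting step discussed in this thread amounts to rewriting the `<source bridge=...>` element of the guest's interface XML and applying it live with `virsh update-device`. Here is a minimal sketch of that XML rewrite (the interface definition and bridge names below are illustrative, not taken from the thread):

```python
import xml.etree.ElementTree as ET

def retarget_bridge(iface_xml: str, new_bridge: str) -> str:
    """Rewrite the <source bridge=...> of a libvirt <interface> definition.

    The rewritten XML could then be applied to a running domain with
    `virsh update-device <domain> <file>` on a libvirt version where
    live update works (1.2.3 per the thread above).
    """
    root = ET.fromstring(iface_xml)
    source = root.find("source")
    source.set("bridge", new_bridge)
    return ET.tostring(root, encoding="unicode")

# Illustrative interface definition (bridge/tap names are made up).
iface = """<interface type='bridge'>
  <mac address='fa:16:3e:ec:eb:a4'/>
  <source bridge='br100'/>
  <target dev='tap0'/>
</interface>"""

print(retarget_bridge(iface, "qbr0"))
```

The MAC and target device stay untouched, which is what lets the guest keep its interface while the host-side plumbing changes underneath it.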
Re: [openstack-dev] [Neutron] Run migrations for service plugins every time
Anna, It's good to see progress being made on this blueprint. I have some comments inline. Also, I would recommend keeping in mind the comments Mark had regarding migration generation and plugin configuration in his post on the email thread I started. Salvatore On 30 April 2014 14:16, Anna Kamyshnikova akamyshnik...@mirantis.com wrote: Hi everyone! I'm working on blueprint https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations. This topic was discussed earlier by Salvatore in the ML thread Migrations, service plugins and Grenade jobs. I'm researching how to make migrations for service plugins run unconditionally. In fact, it is enough to change the should_run method for migrations - to make it return True if the migration is for a service plugin https://review.openstack.org/#/c/91323/1/neutron/db/migration/__init__.py . I think running migrations unconditionally for service plugins is an advancement but not yet a final solution. I would insist on the path you've been pursuing of running all migrations unconditionally. We should strive to solve the issues you have found so far. It is almost working except for conflicts of the VPNaaS service plugin with the Brocade and Nuage plugins http://paste.openstack.org/show/77946/. 1) The migration for the Brocade plugin fails because 2c4af419145b_l3_support doesn't run (it adds the necessary 'routers' table). It can be fixed by adding the Brocade plugin to the migration_for_plugins list in 2c4af419145b_l3_support. 2) The migration for the Nuage plugin fails because e766b19a3bb_nuage_initial runs after 52ff27f7567a_support_for_vpnaas, when the necessary 'routers' table does not yet exist. It can be fixed by adding the Nuage plugin to the migration_for_plugins list in 2c4af419145b_l3_support and removing the creation of these tables in e766b19a3bb_nuage_initial. I noticed that too in the past. However, there are two aspects to this fix.
The one you're mentioning fixes migrations for new deployments; on the other hand, migrations should also be fixed for existing deployments. This kind of problem seems to me to concern more the work of removing schema auto-generation. Indeed, the root problem here is probably that the l3 schemas for these two plugins are being created only because of schema auto-generation. I also researched the opportunity to make all migrations run unconditionally, but this could not be done as there would be a lot of conflicts http://paste.openstack.org/show/77902/. Mostly these conflicts take place in the initial migrations for core plugins, as they create basic tables that were already created elsewhere. For example, mlnx_initial creates the tables securitygroups, securitygroupportbindings, networkdhcpagentbindings and portbindingports, which were already created in the 3cb5d900c5de_securitygroups, 4692d074d587_agent_scheduler and 176a85fc7d79_add_portbindings_db migrations. Besides, I was thinking about 'making migrations smart' - making them check whether all necessary migrations have been run or not - but the main problem with that is that we would need to store information about applied migrations, so this should have been planned from the very beginning. I agree that just changing migrations to run unconditionally is not a viable solution. Rather than changing the existing migration path, I was thinking more about adding corrective unconditional migrations to make the database state the same regardless of the plugin configuration. The difficult part here is that these migrations would need to be smart enough to understand whether a previous migration was executed or skipped; this might also break offline migrations (but perhaps this might be tolerable). I look forward to any suggestions or thoughts on this topic.
Regards, Ann ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
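For readers not familiar with Neutron's migration machinery, the change Anna describes amounts to something like the sketch below. This is a hypothetical illustration, not the actual code in neutron/db/migration/__init__.py; the revision IDs and plugin names are only examples:

```python
# Hypothetical sketch of the should_run() change discussed above: run a
# migration unconditionally when it belongs to a service plugin, otherwise
# keep the existing behaviour of checking the configured plugins.
SERVICE_PLUGIN_MIGRATIONS = {"52ff27f7567a_support_for_vpnaas"}  # illustrative

def should_run(revision, migration_for_plugins, active_plugins):
    if revision in SERVICE_PLUGIN_MIGRATIONS:
        # Service plugin tables must exist regardless of the core plugin,
        # so these migrations never get skipped.
        return True
    if "*" in migration_for_plugins:
        return True
    # Old behaviour: run only if the migration targets an active plugin.
    return bool(set(migration_for_plugins) & set(active_plugins))
```

The conflicts Anna reports arise precisely because a migration forced to run this way (e.g. the VPNaaS one) can depend on tables that a skipped, plugin-conditional migration was supposed to create.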
Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno na...@ntti3.com wrote: Hi Clint Thank you for your suggestion. Your point is taken :) Kyle This is also the same discussion as for LBaaS. Can we discuss this in the advanced services meeting? Yes! I think we should definitely discuss this in the advanced services meeting today. I've added it to the agenda [1]. Thanks, Kyle [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_for_next_meeting Zang Could you join the discussion? 2014-04-29 15:48 GMT-07:00 Clint Byrum cl...@fewbar.com: Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700: Hi Kyle 2014-04-29 10:52 GMT-07:00 Kyle Mestery mest...@noironetworks.com: On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno na...@ntti3.com wrote: Hi Zang Thank you for your contribution on this! The private key management is what I want to discuss at the summit. Has the idea of using Barbican been discussed before? There are many reasons why using Barbican for this may be better than developing key management ourselves. No, however I'm +1 for using Barbican. Let's discuss this in the certificate management topic in the advanced services session. Just a suggestion: Don't defer that until the summit. Sounds like you've already got some consensus, so you don't need the summit just to rubber stamp it. I suggest discussing as much as you can right now on the mailing list, and using the time at the summit to resolve any complicated issues, including any either/or decisions that need crowd-sourced idea making. You can also use the summit time to communicate your requirements to the Barbican developers. Point is: just because you'll have face time doesn't mean you should use it for what can be done via the mailing list.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] SR-IOV summit session
Hi John, With the summit around the corner, please advise how we should run this session: http://summit.openstack.org/cfp/details/248 We are currently working on this nova spec, https://review.openstack.org/#/c/86606/. I guess its content will be a candidate to be presented in the session. Thanks, Robert ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept
Sorry for being late to the party. Since we mostly follow DynamoDB, it makes sense not to deviate too much from DynamoDB's consistency model. From what I read about DynamoDB, READ consistency is defined to be either strong consistency or eventual consistency. Per the Query API reference (http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html), ConsistentRead (Type: Boolean, Required: No): "If set to true, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used. Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true, you will receive an error message." WRITE consistency is not clearly defined anywhere. From Werner Vogels' description, it seems to indicate writes are replicated across availability zones/data centers synchronously. I guess inside a data center, writes are replicated asynchronously. And the API doesn't allow the user to specify a WRITE consistency level. http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html Considering the above factors and Cassandra's capabilities, I propose we use the following model.

READ:
* Strong consistency (synchronously replicate to all; maps to Cassandra READ ALL consistency level)
* Eventual consistency (quorum read; maps to Cassandra READ QUORUM)
* Weak consistency (not in DynamoDB; maps to Cassandra READ ONE)

WRITE:
* Strong consistency (synchronously replicate to all; maps to Cassandra WRITE ALL consistency level)
* Eventual consistency (quorum write; maps to Cassandra WRITE QUORUM)
* Weak consistency (not in DynamoDB; maps to Cassandra WRITE ANY)

For conditional writes (conditional putItem/deleteItem), only strong and eventual consistency should be supported.
Thoughts? Thanks, Charles From: Dmitriy Ukhlov dukh...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, April 29, 2014 at 10:43 AM To: Illia Khudoshyn ikhudos...@mirantis.com Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept Hi Illia, WEAK/QUORUM instead of true/false is ok for me. But we also have STRONG. What does STRONG mean? In the current concept we are using QUORUM and saying that it is strong. I guess that is confusing (at least for me) and can have different behavior for different backends. I believe that from the user's point of view only 4 use cases exist: write and read, each with or without consistency. For example, if we use QUORUM for writes, what is the use case for reading with STRONG? A QUORUM read is enough to get consistent data. Or if we use WEAK (ONE) for a consistent write, what is the use case for reading from QUORUM? We would need to read from ALL. But we can use different kinds of backend abilities to implement consistent and inconsistent operations. To provide the best flexibility for backend-specific features I propose to use a backend-specific configuration section in the table schema. In this case you can get much more than in the initial concept. For example, specify consistency level ANY instead of ONE for WEAK consistency if you want to concentrate on performance, or TWO if you want more fault-tolerant behavior. With my proposal we will have only one limitation in comparison with the first proposal - we have maximally flexible consistency, but per table, not per request. We have only 2 choices to specify consistency per request (true or false).
But I believe that is enough to cover user use cases. On Tue, Apr 29, 2014 at 6:16 AM, Illia Khudoshyn ikhudos...@mirantis.com wrote: Hi all, Dima, I think I understand your reasoning but I have some issues with it. I agree that binary logic is much more straightforward and easy to understand and use. But following that logic, having only one hardcoded consistency level would be even easier and more understandable. As I see it, the idea of the proposal is to give the user more fine-grained control over consistency to leverage backend features AND at the same time not to bind ourselves to only this concrete backend's features. In the scope of Maksym's proposal, the choice between WEAK/QUORUM is for me pretty much the same as your FALSE/TRUE. But I'd prefer to have more. PS Eager to see your new index design On Tue, Apr 29, 2014 at 7:44 AM, Dmitriy Ukhlov dukh...@mirantis.com wrote: Hello Maksym, Thank you for your work! I
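Charles' three-level model above can be summarized as a simple mapping from the proposed MagnetoDB consistency names to Cassandra consistency levels. The sketch below is only an illustration of the proposal; the names and the conditional-write rule are taken from the email, while the function shape is hypothetical:

```python
# Sketch of the READ/WRITE consistency mapping proposed in the thread.
READ_CONSISTENCY = {
    "STRONG": "ALL",       # synchronously read from all replicas
    "EVENTUAL": "QUORUM",  # quorum read
    "WEAK": "ONE",         # not in DynamoDB; fastest, least safe
}
WRITE_CONSISTENCY = {
    "STRONG": "ALL",
    "EVENTUAL": "QUORUM",
    "WEAK": "ANY",         # not in DynamoDB
}

def cassandra_level(operation: str, level: str) -> str:
    """Translate a MagnetoDB-style consistency name to a Cassandra level."""
    table = READ_CONSISTENCY if operation == "read" else WRITE_CONSISTENCY
    if operation == "conditional_write" and level == "WEAK":
        # Per the proposal, conditional putItem/deleteItem only support
        # strong and eventual consistency.
        raise ValueError("WEAK consistency not allowed for conditional writes")
    return table[level]
```

Note how quorum reads paired with quorum writes give read-your-writes behavior, which is why "EVENTUAL" here is still stronger than Cassandra's ONE/ANY.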
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
I've tried updating the interface while running an ssh session from guest to host, and the session was dropped :(

07:27:58.676570 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 44:88, ack 61, win 2563, options [nop,nop,TS val 4539607 ecr 24227108], length 44
07:27:58.677161 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 61:121, ack 88, win 277, options [nop,nop,TS val 24227149 ecr 4539607], length 60
07:27:58.677720 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 121, win 2563, options [nop,nop,TS val 4539608 ecr 24227149], length 0
07:27:59.087582 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 88:132, ack 121, win 2563, options [nop,nop,TS val 4539710 ecr 24227149], length 44
07:27:59.088140 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 121:181, ack 132, win 277, options [nop,nop,TS val 24227251 ecr 4539710], length 60
07:27:59.088487 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 181, win 2563, options [nop,nop,TS val 4539710 ecr 24227251], length 0
[vm interface updated..]
07:28:17.157594 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544228 ecr 24227251], length 44
07:28:17.321060 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 176:220, ack 181, win 2563, options [nop,nop,TS val 4544268 ecr 24227251], length 44
07:28:17.361835 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544279 ecr 24227251], length 44
07:28:17.769935 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544381 ecr 24227251], length 44
07:28:18.585887 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544585 ecr 24227251], length 44
07:28:20.221797 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544994 ecr 24227251], length 44
07:28:23.493540 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4545812 ecr 24227251], length 44
07:28:30.037927 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4547448 ecr 24227251], length 44
07:28:35.045733 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:36.045388 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:37.045900 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.063118 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:ec:eb:a4, length 280
07:28:43.084384 IP 10.0.0.3.67 > 10.0.0.4.68: BOOTP/DHCP, Reply, length 323
07:28:43.085038 ARP, Request who-has 10.0.0.3 tell 10.0.0.4, length 28
07:28:43.099463 ARP, Reply 10.0.0.3 is-at fa:16:3e:79:9b:9c, length 28
07:28:43.099841 IP 10.0.0.4 > 10.0.0.3: ICMP 10.0.0.4 udp port 68 unreachable, length 359
07:28:43.125379 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.125626 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa, length 28
07:28:43.125907 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4550720 ecr 24227251], length 44
07:28:43.132650 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [R], seq 369316248, win 0, length 0
07:28:48.148853 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 28
07:28:48.149377 ARP, Reply 10.0.0.4 is-at fa:16:3e:ec:eb:a4, length 28

On Wed, Apr 30, 2014 at 5:50 PM, Kyle Mestery mest...@noironetworks.com wrote: Agreed, ping was a good first tool to verify downtime, but trying with something using TCP at this point would be useful as well. On Wed, Apr 30, 2014 at 8:39 AM, Eugene Nikanorov enikano...@mirantis.com wrote: I think it's better to test with some tcp connection (ssh session?) rather than with ping. Eugene.
On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com wrote: So by running ping while instance interface update we can see ~10-20 sec of connectivity downtime. Here is a tcp capture during update (pinging ext net gateway): [snip: ping capture quoted in full earlier in the thread]
[openstack-dev] [Neutron][ML2] No ML2 sub-team meeting today (4/30/2014)
Today's ML2 sub-team meeting is cancelled. -Bob ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Juno-Summit] availability of the project project pod rooms on Monday May 12th?
IIRC Thierry said that pods will be available starting from Monday. Thanks Sergey, in the absence of any other indications to the contrary, I'm gonna assume that's the case :) Cheers, Eoghan On Tue, Apr 29, 2014 at 2:56 PM, Eoghan Glynn egl...@redhat.com wrote: ... the $subject says it all. Wondering about dedicated spaces for a cores meet-up prior to the design summit proper kicking off on the Tuesday. Thanks! Eoghan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote: I've tried updating interface while running ssh session from guest to host and it was dropped :( The drop is not great, but acceptable provided the instance can still be reached once the ARP tables refresh and the connection is re-established. If the drop can't be avoided, there is comfort in knowing that there is no need for an instance reboot, suspend/resume or any manual actions. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit
Yep - I know I'll join in :) Thanks, Kiall On Tue, 2014-04-29 at 14:09 -0600, Carl Baldwin wrote: The design summit discussion topic I submitted [1] for my DNS blueprints [2][3][4] and this one [5] just missed the cut for the design session schedule. It stung a little to be turned down but I totally understand the time and resource constraints that drove the decision. I feel this is an important subject to discuss because the end result will be a better cloud user experience overall. The design summit could be a great time to bring together interested parties from Neutron, Nova, and Designate to discuss the integration that I propose in these blueprints. DNS for IPv6 in Neutron is also something I would like to discuss. Mostly, I'd like to get a good sense for where this is at currently with the current Neutron dns implementation (dnsmasq) and how it will fit in. I've created an etherpad to help us coordinate [6]. If you are interested, please go there and help me flesh it out. Carl Baldwin Neutron L3 Subteam [1] http://summit.openstack.org/cfp/details/403 [2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution [3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution [4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution [5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem [6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] SR-IOV summit session
On 04/30/2014 10:06 AM, Robert Li (baoli) wrote: Hi John, With the summit around the corner, please advise how we should run this session: http://summit.openstack.org/cfp/details/248 We are currently working on this nova spec, https://review.openstack.org/#/c/86606/. I guess its content will be a candidate to be presented in the session. Thanks, Robert ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Don't worry too much about running the session. Be prepared to moderate a discussion, and come with your ideas clear, well thought out, and defensible, but with an open mind to alternatives. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt
On 30 April 2014 17:28, Jesse Pretorius jesse.pretor...@gmail.com wrote: On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote: I've tried updating interface while running ssh session from guest to host and it was dropped :( Please allow me to tell you I told you so! ;) The drop is not great, but ok if the instance is still able to be communicated to after the arp tables refresh and the connection is re-established. If the drop can't be avoided, there is comfort in knowing that there is no need for an instance reboot, suspend/resume or any manual actions. I agree with Jesse's point. I think it will be reasonable to say that the migration will trigger a connection reset for all existing TCP connections. However, what exactly are the changes we're making on the data plane? Are you testing by migrating the VIF from a linux bridge instance to an Open vSwitch one? Salvatore ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit
Yup - me too. Graham On Wed, 2014-04-30 at 15:30 +, Mac Innes, Kiall wrote: Yep - I know I'll join in :) Thanks, Kiall On Tue, 2014-04-29 at 14:09 -0600, Carl Baldwin wrote: The design summit discussion topic I submitted [1] for my DNS blueprints [2][3][4] and this one [5] just missed the cut for the design session schedule. It stung a little to be turned down but I totally understand the time and resource constraints that drove the decision. I feel this is an important subject to discuss because the end result will be a better cloud user experience overall. The design summit could be a great time to bring together interested parties from Neutron, Nova, and Designate to discuss the integration that I propose in these blueprints. DNS for IPv6 in Neutron is also something I would like to discuss. Mostly, I'd like to get a good sense for where this is at currently with the current Neutron dns implementation (dnsmasq) and how it will fit in. I've created an etherpad to help us coordinate [6]. If you are interested, please go there and help me flesh it out. Carl Baldwin Neutron L3 Subteam [1] http://summit.openstack.org/cfp/details/403 [2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution [3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution [4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution [5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem [6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Infra] Updates for SPF Record needed
Hi all, Ever since the gerrit upgrade, emails from rev...@openstack.org have been going into my Junk folder, so I started looking at the headers and related information to see if I could find any problems. One thing I encountered is that the current SPF record: $ host -t TXT openstack.org openstack.org descriptive text v=spf1 include:sendgrid.net ~all fails anything but mail sent via sendgrid. This excludes mail sent from rev...@openstack.org directly off the gerrit server, and causes SPF to softfail. Note that this SPF record does *not* impact the mailing lists, as those are on a separate domain (lists.openstack.org) which has no SPF record set whatsoever. AFAICT, there are a limited number of servers that send mail with From: addresses containing openstack.org, these include: emailsrvr.com (the MX provider for openstack.org) and review.openstack.org. jeblair mentioned on IRC that there may also be an 'openstackid-dev' email sending account, but I was unable to find any email in my personal account from that server. There are two possible solutions: 1) Remove or drastically open the SPF record. Removing the record would cause all email to resolve spf=none (like lists.o.o does currently), but prevent openstack.org from gaining any protection against malicious senders via SPF. Drastically opening the SPF record would be changing the ~all to a +all which would cause all sent email to pass SPF. 2) Make the SPF record accurate: v=spf1 include:emailsrvr.com include:sendgrid.net a:review.openstack.org ~all. For any additional services that send mail for openstack.org, an additional a:my.host.name.openstack.org would be added to the SPF record. Using a: syntax for the records also ensures that in the case of something like the recent gerrit migration, the SPF record would remain valid without any modification. There's obviously also a hybrid approach, where we add the known senders of mail but change ~all to +all. 
I strongly recommend we pursue option 2 -- this would mean if you know of any other devices sending mail to @openstack.org, please reply to this thread with the information so we can draft a valid SPF record. Thanks, Jay Faulkner signature.asc Description: OpenPGP digital signature ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
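To make the record syntax in option 2 a bit more concrete for readers, here is a toy parser that splits an SPF TXT record into its qualifier/mechanism pairs. This is only an illustration of the record's structure; real SPF evaluation (RFC 7208, or a library like pyspf) also resolves the include:/a: targets via DNS, which this sketch does not do:

```python
def spf_mechanisms(record: str):
    """Split a TXT-style SPF record into (qualifier, mechanism) pairs.

    Qualifiers: '+' pass (the default), '-' fail, '~' softfail, '?' neutral.
    """
    parts = record.split()
    assert parts[0] == "v=spf1", "not an SPF record"
    out = []
    for term in parts[1:]:
        qualifier = "+"  # an unqualified mechanism means "pass"
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        out.append((qualifier, term))
    return out

# The record proposed in option 2 above.
proposed = ("v=spf1 include:emailsrvr.com include:sendgrid.net "
            "a:review.openstack.org ~all")
print(spf_mechanisms(proposed))
```

The trailing `~all` is what makes mail from any unlisted host softfail rather than hard-fail, which matches the behavior described at the top of the thread.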
Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder
Sorry for being late. I was busy with something else these days. It'll be great to have a dedicated image transfer library that provides both pre-copying and zero-copying semantics, and we are glad to have VMThunder integrated into it. Before that library is done, however, we plan to propose a blueprint that focuses solely on integrating VMThunder into OpenStack, as a plug-in of course. Then we can move VMThunder into the newly created transfer library with a refactoring pass. Does this plan make sense? BTW, I'll not be able to go to the summit. It's too far away. Pity. At 2014-04-28 11:01:13, Sheng Bo Hou sb...@cn.ibm.com wrote: Jay, Huiba, Chris, Solly, Zhiyan, and everybody else, I am so excited that two of the proposals: Image Upload Plugin (http://summit.openstack.org/cfp/details/353) and Data transfer service Plugin (http://summit.openstack.org/cfp/details/352) have been merged together and scheduled in the coming design summit. If you show up in Atlanta, please come to this session (http://junodesignsummit.sched.org/event/c00119362c07e4cb203d1c4053add187) and let's start our discussion, on Wednesday, May 14 • 11:50am - 12:30pm. I will propose a common image transfer library for all the OpenStack projects to upload and download images. If it is approved, with this library, Huiba, you can feel free to implement the transfer protocols you like.
Best wishes, Vincent Hou (侯胜博) Staff Software Engineer, Open Standards and Open Source Team, Emerging Technology Institute, IBM China Software Development Lab Tel: 86-10-82450778 Fax: 86-10-82453660 Notes ID: Sheng Bo Hou/China/IBM@IBMCN E-mail: sb...@cn.ibm.com Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, P.R.C. 100193 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193

From: Sheng Bo Hou/China/IBM@IBMCN, 2014/04/27 22:33
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

I have done a little test of the image download and upload. I created an API for image access, containing copyFrom and sendTo. I moved the image download and upload code from XenApi into the implementation for Http with some modifications, and the code worked for libvirt as well. copyFrom means downloading the image and returning the image data; different hypervisors can choose to save it in a file or import it into the datastore. sendTo is used to upload the image, and the image data is passed in as a parameter. I also investigated how each hypervisor does image upload and download. For the download: libvirt, hyper-v and baremetal use image_service.download to download the image and save it into a file; vmwareapi uses image_service.download to download the image and import it into the datastore; XenAPI uses image_service.download to download the image for VHD images. For the upload: they all use image_service.upload to upload the image. I think we can conclude that it is possible to have a common image transfer library with different implementations for different protocols.
Here is a small code demo for the library: https://review.openstack.org/#/c/90601/ (Jay, is it close to the library you mentioned?). I just replaced the upload and download part with the HTTP implementation for the image API and it worked fine. Best wishes, Vincent Hou (侯胜博) From: Solly Ross sr...@redhat.com, 2014/04/25 01:46 To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder Something to be aware of when planning an image transfer library is that individual drivers might have optimized support for image transfer in certain cases (especially when dealing with transferring between different formats, like raw to qcow2, etc). This builds on what Christopher was saying -- there's actually a reason why we have code for each driver. While having a common image copying library would be nice, I think a better way to do it would be to have some sort of
[openstack-dev] [mistral] Mistral 0.0.2 is released!
I’m glad to announce release 0.0.2 of Mistral, the Workflow Service for OpenStack, which delivers all the functionality that we planned for the PoC development phase. Below are the most important links related to this release; they will help you understand the Mistral ideas. I recommend starting with the 15-minute screencast (the first link) and then exploring the other links. Screencast - http://www.youtube.com/watch?v=x-zqz1CRVkI Tarballs can be found at https://launchpad.net/mistral/icehouse/0.0.2 Release description on wiki - https://wiki.openstack.org/wiki/Mistral/Releases/0.0.2 Workflow examples: Create VM - https://github.com/stackforge/mistral-extra/tree/master/examples/create_vm Webhooks scheduling - https://github.com/stackforge/mistral-extra/tree/master/examples/webhooks Running a job on a VM - https://github.com/stackforge/mistral-extra/tree/master/examples/vm_job Wiki - https://wiki.openstack.org/wiki/Mistral Launchpad (blueprints, bugs, downloads) - https://launchpad.net/mistral Git repositories: Mistral Core - https://github.com/stackforge/mistral Mistral Extras (examples) - https://github.com/stackforge/mistral-extra Mistral Client - https://github.com/stackforge/python-mistralclient Thanks! Renat Akhmerov @ Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal
I have seen many use cases added lately; I have also added two more for L7 and will probably add a few more tomorrow for SSL termination. -Original Message- From: Eichberger, German [mailto:german.eichber...@hp.com] Sent: Tuesday, April 29, 2014 12:12 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal Sam, The use cases were pretty complete the last time I checked, so let's move them to gerrit so we can all vote. Echoing Kyle, I would love to see us focusing on getting things ready for the summit. German -Original Message- From: Samuel Bercovici [mailto:samu...@radware.com] Sent: Monday, April 28, 2014 11:44 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal Hi, I was just working to push the use cases into the new .rst format, but I agree that using a Google doc would be more intuitive. Let me know what you prefer to do with the use cases document: 1. Leave it at Google Docs at https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1 2. Move it to the new format under http://git.openstack.org/cgit/openstack/neutron-specs; I have already filed a blueprint https://blueprints.launchpad.net/neutron/+spec/lbaas-use-cases and can complete the .rst process by tomorrow. Regards, -Sam. -Original Message- From: Kyle Mestery [mailto:mest...@noironetworks.com] Sent: Monday, April 28, 2014 4:33 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal Folks, sorry for the top post here, but I wanted to make sure to gather people's attention in this thread. I'm very happy to see all the passion around LBaaS in Neutron for this cycle.
As I've told a few people, seeing all the interest from operators and providers is fantastic, as it gives us valuable input from that side of things before we embark on designing and coding. I've also attended the last few LBaaS IRC meetings, and I've been catching up on the LBaaS documents and emails. There is a lot of great work and passion by many people. However, the downside of what I've seen is that there is a logjam around progress here. Given we're two weeks out from the Summit, I'm going to start running the LBaaS meetings with Eugene to try and help provide some focus there. Hopefully we can use this week and next week's meetings to drive to a consistent Summit agenda and lay the groundwork for LBaaS in Juno and beyond. Also, while our new neutron-specs BP repository has been great so far for developers, based on feedback from operators, it may not be ideal for those who are not used to contributing using gerrit. I don't want to lose the voice of those people, so I'm pondering what to do. This is really affecting the LBaaS discussion at the moment. I'm thinking that we should ideally try to use Google Docs for these initial discussions and then move the result of that into a BP on neutron-specs. What do people think of that? If we go down this path, we need to decide on a single Google Doc for people to collaborate on. I don't want to put Stephen on the spot, but his document may be a good starting point. I'd like to hear what others think on this plan as well. Thanks, Kyle On Sun, Apr 27, 2014 at 6:06 PM, Eugene Nikanorov enikano...@mirantis.com wrote: Hi, You knew from the action items that came out of the IRC meeting of April 17 that my team would be working on an API revision proposal. You also knew that this proposal was to be accompanied by an object model diagram and glossary, in order to clear up confusion. You were in that meeting, you saw the action items being created. 
Heck, you even added the "prepare API for SSL and L7" directive for my team yourself! The implied but not stated assumption about this work was that it would be fairly evaluated once done, and that we would be given a short window (i.e. about a week) in which to fully prepare and state our proposal. Your actions, though, were apparently to produce your own version of the same in blueprint form without notifying anyone in the group that you were going to be doing this, let alone my team. How could you have given my API proposal a fair shake prior to publishing your blueprint, if both came out on the same day? (In fact, I'm led to believe that you and other Neutron LBaaS developers hadn't even looked at my proposal before the meeting on 4/24, where y'all started determining product direction, apparently by edict.) Therefore, looking honestly at your actions on this and trying to give you the benefit of the doubt, I still must assume that you never intended to seriously consider our proposal. That's strange to hear, because the spec under review is a part of what is proposed
Re: [openstack-dev] [Neutron][LBaaS] Use Case Question
Hi, As stated, this could be handled either by SSL session ID persistence or by SSL termination plus the cookie-based persistence options. If there is no need to inspect the content, and hence no need to terminate the SSL connection on the load balancer, then using SSL session ID based persistence is obviously much more efficient. The reference to the source client IP changing was to rule out using source IP as the stickiness algorithm. -Sam. From: Trevor Vardeman [mailto:trevor.varde...@rackspace.com] Sent: Thursday, April 24, 2014 7:26 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Neutron][LBaaS] Use Case Question Hey, I'm looking through the use-cases doc for review, and I'm confused about one of them. I'm familiar with HTTP cookie based session persistence, but to satisfy secure traffic for this case would there be decryption of content, injection of the cookie, and then re-encryption? Is there another session persistence type that solves this issue already? I'm copying the doc link and the use case specifically; not sure if the document order would change, so I thought it would be easiest to include both :) Use Cases: https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis Specific Use Case: A project-user wants to make his secured web-based application (HTTPS) highly available. He has n VMs deployed on the same private subnet/network. Each VM is installed with a web server (ex: apache) and content. The application requires that a transaction which has started on a specific VM will continue to run against the same VM. The application is also available to end-users via smart phones, a case in which the end user IP might change. The project-user wishes to present the application to its users as a web application available via a single IP. -Trevor Vardeman ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][L3] No Team Meeting Thursday
Since this is the official week off, I will not hold a team meeting this week. See you next week. If you have a chance, please review/update the topics on the team page [1]. There are gerrit topics under review and I'd like to also call attention to the etherpad about having a DNS discussion in Atlanta [2]. Cheers, Carl Baldwin Neutron L3 Subteam [1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam [2] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] SSL VPN Implemenatation
Jarret, thanks! Currently, the config is generated on demand by the agent. What would be the merit of storing the entire config in Barbican? Kyle, thanks! 2014-04-30 7:05 GMT-07:00 Kyle Mestery mest...@noironetworks.com: On Tue, Apr 29, 2014 at 6:11 PM, Nachi Ueno na...@ntti3.com wrote: Hi Clint Thank you for your suggestion. Your point is taken :) Kyle This is also the same discussion as for LBaaS. Can we discuss this in the advanced services meeting? Yes! I think we should definitely discuss this in the advanced services meeting today. I've added it to the agenda [1]. Thanks, Kyle [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices#Agenda_for_next_meeting Zang, could you join the discussion? 2014-04-29 15:48 GMT-07:00 Clint Byrum cl...@fewbar.com: Excerpts from Nachi Ueno's message of 2014-04-29 10:58:53 -0700: Hi Kyle 2014-04-29 10:52 GMT-07:00 Kyle Mestery mest...@noironetworks.com: On Tue, Apr 29, 2014 at 12:42 PM, Nachi Ueno na...@ntti3.com wrote: Hi Zang Thank you for your contribution on this! The private key management is what I want to discuss at the summit. Has the idea of using Barbican been discussed before? There are many reasons why using Barbican for this may be better than developing key management ourselves. No; however, I'm +1 for using Barbican. Let's discuss this in the certificate management topic in the advanced services session. Just a suggestion: Don't defer that until the summit. Sounds like you've already got some consensus, so you don't need the summit just to rubber stamp it. I suggest discussing as much as you can right now on the mailing list, and using the time at the summit to resolve any complicated issues, including any a-or-b decisions that need crowd-sourced idea making. You can also use the summit time to communicate your requirements to the Barbican developers. Point is: just because you'll have face time doesn't mean you should use it for what can be done via the mailing list.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [openstack-php-sdk] Transport layer work
Hey everyone, So I've been working on the transport layer today and wanted to run some things past everyone to make sure we're all on the same page. There's no major new work: just tweaking some existing stuff and moving functionality into new files where I thought it would make things clearer. 1. Minor updates to `ClientInterface` I agree with Matt: we need to keep this interface as simple as possible for new client classes to implement. There are two discrete pieces of functionality the interface should care about: requests and configuration. Here's an outline of what I thought might work: https://gist.github.com/jamiehannaford/01bbf366f5f3da07bbcd I tried to keep things as simple as possible - can people lend me their thoughts on it? One of the main differences from the current version is that it separates the process of creating a request from sending it. Right now we do both in one method, which is disadvantageous for two reasons: 1. It prevents users from modifying the Request object before sending it. Matt raised a very good point that you can pass extra headers, etc. into the signature itself - but what I'm talking about is things like event subscribers. In the lifecycle of a cURL transaction (a Request being created, prepared, and sent; a response being parsed, etc.) different events are emitted. These events are incredibly useful and empowering for end-users because they enable people to add their own custom logic to processes which are usually unavailable (unless you actually extend the SDK code). By having a createRequest method that actually returns a Request rather than just sending it, we allow users to add their own subscribers. Here are a few use-cases: a user might want to add a subscriber which logs everything for that one request; a user might want to use a subscriber to handle exceptions differently for that one request; a user might want to retry the Request if a particular status code is returned; etc. 2.
Most common HTTP frameworks, like Guzzle and ZF2, use these two methods (createRequest and send). It makes sense to stick with convention so end-users and contributors are not confused. 2. Moving Guzzle stuff to a sub-directory Right now we're using Guzzle by default, but we also want to support the ability for users to inject their own HTTP client. So to make this clearer, I've moved Guzzle-specific files to their own Guzzle sub-directory (under the OpenStack\Common\Transport\Guzzle namespace). I've also renamed the GuzzleClient class to GuzzleAdapter because it's actually adapting an existing client, not serving as one. All it's doing is wrapping Guzzle and implementing our standard interface - so I wanted to make that clearer and avoid confusion. 3. HTTP exceptions I also discovered a really cool way to handle exceptions. Right now, we're throwing exceptions in the adapter - which adds a lot of exception-specific logic to a base class. So instead, I copied all of this existing logic to a new RequestException class. All the GuzzleClient/GuzzleAdapter needs to do is call RequestException::create($request, $response), and all of the logic for finding the right class for the right HTTP status code is handled inside RequestException like this: https://gist.github.com/jamiehannaford/0a085d4b1507308b0190 Another thing I wanted to do is make it easier for devs to debug the problem once an exception is thrown - so I made the Request and Response objects (for a failed request) available in the exception. So instead of printing out a generic message, we also give them the ability to retrieve the offending request/response with $e->getRequest() and $e->getResponse(). --- That's about it, basically :) Does anybody have any questions or concerns with any of this?
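The createRequest/send split described above is language-independent. Below is a sketch of the pattern in Python; all names here are illustrative, not the actual openstack-php-sdk or Guzzle API, and the transport is stubbed out. It shows why returning the request before sending lets callers attach headers and event subscribers first.

```python
class Request:
    def __init__(self, method, url, headers=None):
        self.method, self.url = method, url
        self.headers = dict(headers or {})
        self.subscribers = []  # callbacks fired around send()


class Client:
    def create_request(self, method, url, headers=None):
        # Returns the Request instead of sending it, so callers can
        # attach subscribers or tweak headers before dispatch.
        return Request(method, url, headers)

    def send(self, request):
        for fire in request.subscribers:
            fire("before_send", request)
        # Stubbed transport: a real client would perform the HTTP call.
        return {"status": 200, "url": request.url}


client = Client()
req = client.create_request("GET", "https://example.org/v2/servers")
req.headers["X-Trace"] = "on"                       # modify before sending
req.subscribers.append(lambda event, r: print(event, r.url))
resp = client.send(req)
print(resp["status"])
```

A one-shot convenience method can still exist on top (`send(create_request(...))`); the point is that the two-step path remains available for users who need the hooks.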
Jamie Jamie Hannaford Software Developer III - CH Tel: +41434303908 Mob: +41791009767 Rackspace International GmbH, Pfingstweidstrasse 60, 8005 Zurich, Switzerland
[openstack-dev] [climate] Friday Meeting
Folks, o/ I finally got my dates for the US trip, and I have to say, that I won't be able to attend our closest Friday meeting as I'll be flying at this moment) Sylvain, will you be able to hold the meeting? Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] About store faults info for volumes
On 06:49 Wed 30 Apr, Zhangleiqiang (Trump) wrote: Hi stackers: I found that when an instance's status becomes error, I can see the detailed fault info when I show the details of the instance, and it is very convenient for finding the reason for the failure. Indeed, there is a nova.instance_faults table which stores the fault info. Maybe it would be helpful for users if Cinder also introduced a similar mechanism. Any advice? -- zhangleiqiang (Trump) Best Regards There have been some discussions, started a couple of weeks ago, about using sub-states like Nova does, to know more clearly what happened when a volume is in an 'error' state. Unfortunately I'm not sure if that'll be in a formal session at the summit, but it'll definitely be discussed while we have the team together. Maybe John Griffith can comment since he's approving the sessions. -- Mike Perez ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
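A volume-faults mechanism modeled on Nova's instance_faults table could look roughly like the sketch below. Everything here (the record shape, field names, and helper functions) is a hypothetical illustration, not existing Cinder code.

```python
import datetime

faults = []  # stands in for a hypothetical cinder "volume_faults" DB table


def record_fault(volume_id, code, message, details=""):
    """Append one fault record, mirroring the nova.instance_faults idea."""
    faults.append({
        "volume_id": volume_id,
        "code": code,          # e.g. an HTTP-style error code
        "message": message,    # short, user-visible reason
        "details": details,    # e.g. a traceback, possibly admin-only
        "created_at": datetime.datetime.now(datetime.timezone.utc),
    })


def latest_fault(volume_id):
    """Return the most recent fault for a volume, as 'volume show' might."""
    rows = [f for f in faults if f["volume_id"] == volume_id]
    return rows[-1] if rows else None


record_fault("vol-1", 500, "iSCSI target creation failed")
print(latest_fault("vol-1")["message"])
```

The API layer would then surface `latest_fault` in the volume-detail response whenever the volume status is error, which is essentially what Nova does for instances.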
[openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS
Hi Everyone, During the last few days I have looked into the different LBaaS API proposals. I have also looked at the API style used in Neutron; I wanted to see how Neutron APIs address tree-like object models. Here are my observations: 1. Security groups (http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html): a. security-group-rules are children of security-groups, yet creating a security group together with its children in a single call is not possible. b. Creating security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules is not supported. c. Updating security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID} is not supported. d. Creating a security-group-rule (child object) without providing the parent {SG-ID} is not supported. 2. Firewall as a service (http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html): the API for managing firewall_policy and firewall_rule, which have a parent-child relationship, behaves the same way as security groups. 3. Group Policy (work in progress, https://wiki.openstack.org/wiki/Neutron/GroupPolicy): if I understand correctly, this API has a complex object model while the API adheres to the way other Neutron APIs are done (e.g. flat model, granular API, etc.). How critical is it to preserve a consistent API style for LBaaS? Should this be a consideration when evaluating API proposals? Regards, -Sam. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
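For concreteness, the "flat" style Sam describes versus a nested child-under-parent style can be illustrated like this. The URI paths and request-body shape follow the Neutron v2.0 security-group docs cited above; the helper functions themselves are hypothetical.

```python
def nested_rule_url(sg_id):
    # Child created under its parent's path -- NOT what Neutron supports.
    return f"/v2.0/security-groups/{sg_id}/security-group-rules"


def flat_rule_request(sg_id, protocol, port):
    # Neutron's actual "flat" style: the rule is a top-level resource,
    # and the body carries the parent reference (security_group_id).
    return ("/v2.0/security-group-rules",
            {"security_group_rule": {"security_group_id": sg_id,
                                     "protocol": protocol,
                                     "port_range_min": port,
                                     "port_range_max": port}})


path, body = flat_rule_request("sg-123", "tcp", 443)
print(path)
print(body["security_group_rule"]["security_group_id"])
```

An LBaaS API following the same convention would expose listeners, pools, and members as top-level resources with parent IDs in the body, rather than nesting them under a load balancer's URI.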
Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta
+1 but I don't get in until late Sunday :-( Any chance you could do this sometime Monday? I'd like to meet the people behind the IRC names and email addresses. --Rocky -Original Message- From: Ken'ichi Ohmichi [mailto:ken1ohmi...@gmail.com] Sent: Wednesday, April 30, 2014 6:29 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta 2014-04-30 19:11 GMT+09:00 Koderer, Marc m.kode...@telekom.de: Hi folks, last time we met one day before the Summit started for a short meet-up. Should we do the same this time? I will arrive Saturday to recover from the jet lag ;) So Sunday the 11th would be fine for me. I may still be jet-lagged on Sunday, but the meet-up would be nice for me ;-) Thanks Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS
Sam, I think it's important to keep the Neutron API style consistent. It would be odd if LBaaS uses a different style than the rest of the Neutron APIs. Youcef From: Samuel Bercovici [mailto:samu...@radware.com] Sent: Wednesday, April 30, 2014 10:59 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta
On 04/30/2014 02:22 PM, Rochelle.RochelleGrober wrote: +1 but I don't get in until late Sunday :-( Any chance you could do this sometime Monday? I'd like to meet the people behind the IRC names and email addresses. --Rocky Same here. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?
Hi, In Neutron I see SystemExit() being raised in some cases. Is this preferred over calling sys.exit()? I ask because I recall a tox failure where all I was getting was the return code, with no traceback or indication at all of where the failure occurred. In that case, I changed from SystemExit() to sys.exit(), and I then got the traceback and was able to see what was going wrong in the test case (it’s been weeks, so I don’t recall where this was). I see there are currently some changes to the use of SystemExit() being reviewed (https://review.openstack.org/91185), and it reminded me of the concern I had. Can anyone enlighten me? Thanks! PCM (Paul Michali) MAIL …..…. p...@cisco.com IRC ……..… pcm_ (irc.freenode.com) TW ………... @pmichali GPG Key … 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
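For reference, at the language level sys.exit() is simply sugar for raising SystemExit, so the two spellings behave identically; any traceback difference usually comes from what argument is passed (an int becomes the exit status, anything else is printed to stderr) and from how the surrounding test runner handles the exception, not from which spelling is used. A quick demonstration:

```python
import sys


def leave_via_sys_exit():
    sys.exit(2)            # raises SystemExit(2) under the hood


def leave_via_raise():
    raise SystemExit(2)    # the exact same exception, raised directly


# Both paths raise the same exception type carrying the same exit code.
for fn in (leave_via_sys_exit, leave_via_raise):
    try:
        fn()
    except SystemExit as e:
        print(fn.__name__, e.code)
```

Because SystemExit inherits from BaseException (not Exception), a bare `except Exception:` will let it propagate silently, which can explain a process ending with only a return code and no visible traceback.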
Re: [openstack-dev] [climate] Friday Meeting
Hi Dina, I forgot to mention yesterday that it was my last day at Bull, so the end of the week is off-work for me until Monday. As a corollary, I won't be able to attend the Friday meeting. Let's cancel this meeting and raise topics on the mailing list if needed. -Sylvain 2014-04-30 19:17 GMT+02:00 Dina Belova dbel...@mirantis.com: Folks, o/ I finally got my dates for the US trip, and I have to say, that I won't be able to attend our closest Friday meeting as I'll be flying at this moment) Sylvain, will you be able to hold the meeting? Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression
On 04/29/2014 03:34 PM, Steven Kaufer wrote: Jay Pipes jaypi...@gmail.com wrote on 04/29/2014 02:26:42 PM: From: Jay Pipes jaypi...@gmail.com To: openstack-dev@lists.openstack.org, Date: 04/29/2014 02:27 PM Subject: Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression On 04/29/2014 02:16 AM, Zhangleiqiang (Trump) wrote: Currently, Nova API achieve this feature based on the database’s REGEX support. Do you have advice on alternative way to achieve it? Hi Trump, Unfortunately, REGEXP support in databases is almost always ridiculously slow compared to prefix searches (WHERE col LIKE 'foo%'). Lately, I've been considering that a true tagging system for Nova would allow for better-performing and more user-friendly search/winnow functions in the Nova API. I'll post a blueprint specification for this and hopefully have some time to implement in Juno... Jay, I am interested in this design, please add me as a reviewer when the blueprint is created. Even better, I listed you as a co-contributor :) https://review.openstack.org/91444 Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [climate] Friday Meeting
+1 On Wed, Apr 30, 2014 at 11:41 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote: Hi Dina, I forgot yesterday to mention it was my last day at Bull, so the end of week was off-work until Monday. As a corollar, I won't be able to attend Friday meeting. Let's cancel this meeting and raise topics in mailing-list if needed. -Sylvain 2014-04-30 19:17 GMT+02:00 Dina Belova dbel...@mirantis.com: Folks, o/ I finally got my dates for the US trip, and I have to say, that I won't be able to attend our closest Friday meeting as I'll be flying at this moment) Sylvain, will you be able to hold the meeting? Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [solum] Solum Milestone 2 Demo Preview
Solum community, I am pleased to announce an early preview of the Solum M2 Demo via Vagrant for those who want to take a look. We created detailed instructions to install the demo here: https://wiki.openstack.org/wiki/Solum/solum_m2_demo Disclaimers: Please examine the look and feel right now as the functionality is being rapidly added. * We are adding Horizon look/feel to the demo. * We are adding the capability to read/write real data to/from Solum now. * Many of the screens don't flow together well yet (in the middle of a big integration push). * There are various bugs and issues that are being rapidly fixed. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] [cinder] Do we now require schema response validation in tempest clients?
There have been a lot of patches that add validation of response dicts. We need a policy on whether this is required or not. For example, this patch https://review.openstack.org/#/c/87438/5 is for the equivalent of 'cinder service-list' and is basically a copy of the nova test, which now does the validation. So two questions: Is cinder going to do this kind of checking? If so, should new tests be required to do it on submission? -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
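For readers unfamiliar with what the response validation catches, here is a toy, stdlib-only stand-in for the jsonschema-based checks Tempest's clients run. The schema and the service-list payload below are illustrative inventions, not Cinder's real API definition:

```python
def check(payload, schema):
    """Return a list of mismatches between a payload and a {field: type} schema."""
    errors = []
    for field, ftype in schema.items():
        if field not in payload:
            errors.append("missing field: %s" % field)
        elif not isinstance(payload[field], ftype):
            errors.append("%s: expected %s, got %s"
                          % (field, ftype.__name__, type(payload[field]).__name__))
    return errors

# Hypothetical schema for one entry of a 'cinder service-list' response.
service_schema = {"binary": str, "host": str, "state": str, "updated_at": str}

good = {"binary": "cinder-scheduler", "host": "devstack",
        "state": "up", "updated_at": "2014-04-29T02:19:13.000000"}
bad = {"binary": "cinder-scheduler", "host": "devstack", "state": 1}

print(check(good, service_schema))  # []
print(check(bad, service_schema))   # wrong type for 'state', missing 'updated_at'
```

The point of the policy question is whether every new API test should ship a schema like this, so that accidental response-format drift fails the gate instead of going unnoticed.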
[openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday 14-00 UTC
Hi, The agenda for the next meeting (Thursday, 1st of May, 14-00 UTC) is the following: 1) Stephen's API proposal: https://docs.google.com/document/d/129Da7zEk5p437_88_IKiQnNuWXzWaS_CpQVm4JBeQnc/edit#heading=h.hgpfh6kl7j7a The document proposes an API that covers most of the features we've identified on the requirements page. The use cases are also being addressed. We need to converge on the general approach proposed there. 2) Summit agenda. We have two sessions at the neutron track. It makes sense to focus on the topics that will benefit most from face-to-face discussion. Thanks, Eugene. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [designate] Summit Design Session Agenda
I started an etherpad to collect agenda items: https://etherpad.openstack.org/p/DesignateAtlantaDesignSession Please add your topics in the described format (requires a blueprint). Thanks! Joe McBride Rackspace ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept
Discussed further with Dima. Our consensus is to have the WRITE consistency level defined in the table schema, and READ consistency controlled at the data item level. This should satisfy our use cases for now. For example, a user-defined table has Eventual Consistency (Quorum). After the user writes data using the consistency level defined in the table schema, when the user tries to read the data back asking for Strong consistency, MagnetoDB can do a READ at Eventual Consistency (Quorum) to satisfy the user's Strong consistency requirement. Thanks, Charles From: Charles Wang charles_w...@symantec.com Date: Wednesday, April 30, 2014 at 10:19 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Illia Khudoshyn ikhudos...@mirantis.com Cc: Keith Newstadt keith_newst...@symantec.com Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept Sorry for being late to the party. Since we mostly follow DynamoDB, it makes sense not to deviate too much from DynamoDB's consistency mode. From what I read about DynamoDB, READ consistency is defined to be either strong consistency or eventual consistency. Quoting the Query API reference (ConsistentRead, Type: Boolean, Required: No): "If set to true, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used. Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true, you will receive an error message." http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html WRITE consistency is not clearly defined anywhere. 
From Werner Vogels' description, it seems that writes are replicated across availability zones/data centers synchronously. I guess inside a data center, writes are replicated asynchronously. And the API doesn't allow the user to specify a WRITE consistency level. http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html Considering the above factors and Cassandra's capabilities, I propose we use the following model. READ: * Strong consistency (synchronously replicate to all, maps to Cassandra READ ALL consistency level) * Eventual consistency (quorum read, maps to Cassandra READ QUORUM) * Weak consistency (not in DynamoDB, maps to Cassandra READ ONE) WRITE: * Strong consistency (synchronously replicate to all, maps to Cassandra WRITE ALL consistency level) * Eventual consistency (quorum write, maps to Cassandra WRITE QUORUM) * Weak consistency (not in DynamoDB, maps to Cassandra WRITE ANY) For conditional writes (conditional putItem/deleteItem), only strong and eventual consistency should be supported. Thoughts? Thanks, Charles From: Dmitriy Ukhlov dukh...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, April 29, 2014 at 10:43 AM To: Illia Khudoshyn ikhudos...@mirantis.com Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept Hi Illia, WEAK/QUORUM instead of true/false is OK with me. But we also have STRONG. What does STRONG mean? In the current concept we are using QUORUM and say that it is strong. I guess it is confusing (at least for me) and can have different behavior for different backends. I believe that from the user's point of view only 4 use cases exist: write and read, each with consistency or not. 
For example, if we use QUORUM for the write, what is the use case for reading with STRONG? A QUORUM read is enough to get consistent data. Or if we use WEAK (ONE) for the write, what is the use case for reading from QUORUM? We would need to read from ALL. But we can use different kinds of backend abilities to implement consistent and inconsistent operations. To provide the best flexibility of backend-specific features I propose to use a backend-specific configuration section in the table schema. In this case you can get much more than in the initial concept. For example, specify consistency level ANY instead of ONE for WEAK consistency if you want to concentrate on performance, or TWO if you want to provide more fault-tolerant behavior. With my proposal we will have only one limitation in comparison with the first proposal - We have maximally flexible consistency, but per table, not per request. We have only 2 choices to specify consistency per
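Charles' proposed model above can be sketched as a simple translation table. This is an illustrative sketch only: the MagnetoDB level names and Cassandra levels come from the thread, while the function and its conditional-write guard are made-up names for the demo:

```python
# MagnetoDB level -> Cassandra consistency level, per the proposal above.
CASSANDRA_LEVEL = {
    ("read", "STRONG"): "ALL",
    ("read", "EVENTUAL"): "QUORUM",
    ("read", "WEAK"): "ONE",
    ("write", "STRONG"): "ALL",
    ("write", "EVENTUAL"): "QUORUM",
    ("write", "WEAK"): "ANY",
}

def backend_level(op, level, conditional=False):
    """Translate a requested MagnetoDB consistency level to a Cassandra one."""
    level = level.upper()
    # Per the proposal, conditional writes (conditional putItem/deleteItem)
    # only support strong and eventual consistency.
    if conditional and op == "write" and level == "WEAK":
        raise ValueError("conditional writes require STRONG or EVENTUAL")
    return CASSANDRA_LEVEL[(op, level)]

print(backend_level("read", "eventual"))  # QUORUM
print(backend_level("write", "weak"))     # ANY
```

A per-table backend-specific section, as Dmitriy suggests, would amount to letting the table schema override entries of this mapping (e.g. WEAK write -> TWO) rather than hardcoding them.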
[openstack-dev] Unit test test_ovs_neutron_agent fails with dependency errors
Hi, I've been trying to run the test_ovs_neutron_agent.py unit test from the openstack neutron master 'tip', and am hitting dependency errors for novaclient as shown here. Do we need to clone the python-novaclient repo as well, point PYTHONPATH to it, and try re-running? Please help.

==
ERROR: neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_tunnel_update
--
Empty attachments: pythonlogging:'' pythonlogging:'neutron.api.extensions'
Traceback (most recent call last):
  File neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py, line 89, in setUp
    notifier_cls = notifier_p.start()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1396, in start
    result = self.__enter__()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1252, in __enter__
    self.target = self.getter()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1414, in <lambda>
    getter = lambda: _importer(target)
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1102, in _importer
    thing = _dot_lookup(thing, comp, import_path)
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1091, in _dot_lookup
    __import__(import_path)
  File neutron/plugins/openvswitch/ovs_neutron_plugin.py, line 29, in <module>
    from neutron.db import agents_db
  File neutron/db/agents_db.py, line 24, in <module>
    from neutron.extensions import agent as ext_agent
  File neutron/extensions/agent.py, line 20, in <module>
    from neutron.api.v2 import base
  File neutron/api/v2/base.py, line 30, in <module>
    from neutron.notifiers import nova
  File neutron/notifiers/nova.py, line 19, in <module>
    from novaclient.v1_1.contrib import server_external_events
ImportError: cannot import name server_external_events

==
ERROR: neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_update_ports_returns_changed_vlan
--
Empty attachments: pythonlogging:'' pythonlogging:'neutron.api.extensions'
Traceback (most recent call last):
  File neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py, line 89, in setUp
    notifier_cls = notifier_p.start()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1396, in start
    result = self.__enter__()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1252, in __enter__
    self.target = self.getter()
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1414, in <lambda>
    getter = lambda: _importer(target)
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1102, in _importer
    thing = _dot_lookup(thing, comp, import_path)
  File /usr/local/lib/python2.7/dist-packages/mock-1.0.1-py2.7.egg/mock.py, line 1091, in _dot_lookup
    __import__(import_path)
  File neutron/plugins/openvswitch/ovs_neutron_plugin.py, line 29, in <module>
    from neutron.db import agents_db
  File neutron/db/agents_db.py, line 24, in <module>
    from neutron.extensions import agent as ext_agent
  File neutron/extensions/agent.py, line 20, in <module>
    from neutron.api.v2 import base
  File neutron/api/v2/base.py, line 30, in <module>
    from neutron.notifiers import nova
  File neutron/notifiers/nova.py, line 19, in <module>
    from novaclient.v1_1.contrib import server_external_events
ImportError: cannot import name server_external_events

Ran 52 tests in 0.239s
FAILED (failures=46)
--
Thanks, Vivek ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [TripleO][Summit] Dev/Test Environment and Unit Test Etherpads
Hi, As requested, I have put together some etherpads for my summit sessions. The first is for the TripleO development and test environment discussion: https://etherpad.openstack.org/p/juno-summit-tripleo-environment And for the unit testing topic I added to the bottom of Derek's etherpad for CI: https://etherpad.openstack.org/p/juno-summit-tripleo-ci Please take a look and let me know if you have any thoughts that should be addressed before the summit. Thanks. -Ben ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?
Chris Friesen chris.frie...@windriver.com wrote on 04/28/2014 10:44:46 AM: On 04/26/2014 09:41 PM, Jay Lau wrote: ... My idea is that we could add a new field such as PlacementPolicy to AutoScalingGroup. If the value is affinity, then when the heat engine creates the AutoScalingGroup, it will first create a server group with the affinity policy; then when creating a VM instance for the AutoScalingGroup, the heat engine will pass the server group id as a scheduler hint so as to make sure all the VM instances in the AutoScalingGroup are created with the affinity policy. While I personally like this sort of idea from the perspective of simplifying things for heat users, I see two problems. First, my impression is that heat tries to provide a direct mapping of nova resources to heat resources. That matches my understanding too. But autoscaling groups (all three kinds) are already broken in that regard: they are not Nova resources, nor resources of any other service, but purely creatures of Heat's creation. There is a blueprint for fixing this, but it is only very partially implemented at the moment. Using a property of a heat resource to trigger the creation of a nova resource would not fit that model. For the sake of your argument, let's pretend that the new ASG blueprint has been fully implemented. That means an ASG is an ordinary virtual resource. In all likelihood the implementation will generate templates and make nested stacks. I think it is fairly natural to suppose that the generated template could include a Nova server group. Second, it seems less well positioned for exposing possible server group enhancements in nova. For example, one enhancement that has been discussed is to add a server group option to make the group scheduling policy a weighting factor if it can't be satisfied as a filter. With the server group as an explicit resource there is a natural way to extend it. 
Abstractly an autoscaling group is a sub-class of group of servers (ignoring the generalization of server in the relevant cases), so it would seem natural to me that the properties of an autoscaling group would include the properties of a server group. As the latter evolves, so would the former. OTOH, I find nothing particularly bad with doing it as you suggest. BTW, this is just the beginning. What about resources of type AWS::CloudFormation::Stack? What about Trove and Sahara? Regards, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Timestamp formats in the REST API
On Tue, 2014-04-29 at 10:39 -0400, Doug Hellmann wrote: On Tue, Apr 29, 2014 at 9:48 AM, Mark McLoughlin mar...@redhat.com wrote: Hey In this patch: https://review.openstack.org/83681 by Ghanshyam Mann, we encountered an unusual situation where a timestamp in the returned XML looked like this: 2014-04-08 09:00:14.399708+00:00 What appeared to be unusual was that the timestamp had both sub-second time resolution and timezone information. It was felt that this wasn't a valid timestamp format, and then there was some debate about how to 'fix' it: https://review.openstack.org/87563 Anyway, this led me down a bit of a rabbit hole, so I'm going to attempt to document some findings. Firstly, some definitions: - Python's datetime module talks about datetime objects being 'naive' or 'aware' https://docs.python.org/2.7/library/datetime.html A datetime object d is aware if d.tzinfo is not None and d.tzinfo.utcoffset(d) does not return None. If d.tzinfo is None, or if d.tzinfo is not None but d.tzinfo.utcoffset(d) returns None, d is naive. (Most people will have encountered this already, but I'm including it for completeness) - The ISO8601 time and date format specifies timestamps like this: 2014-04-29T11:37:00Z with many variations. One distinguishing aspect of the ISO8601 format is the 'T' separating date and time. 
RFC3339 is very closely related and serves as easily accessible documentation of the format: http://www.ietf.org/rfc/rfc3339.txt - The Python iso8601 library allows parsing this time format, but also allows subtle variations that don't conform to the standard, like omitting the 'T' separator:

  import iso8601
  iso8601.parse_date('2014-04-29 11:37:00Z')
  datetime.datetime(2014, 4, 29, 11, 37, tzinfo=<iso8601.iso8601.Utc object at 0x214b050>)

Presumably this is for the pragmatic reason that when you stringify a datetime object, the resulting string uses ' ' as a separator:

  import datetime
  str(datetime.datetime(2014, 4, 29, 11, 37))
  '2014-04-29 11:37:00'

And now some observations on what's going on in Nova: - We don't store timezone information in the database, but all our timestamps are relative to UTC nonetheless. - The objects code automatically adds the UTC to naive datetime objects: if value.utcoffset() is None: value = value.replace(tzinfo=iso8601.iso8601.Utc()) so code that is ported to objects may now be using aware datetime objects where they were previously using naive objects. - Whether we store sub-second resolution timestamps in the database appears to be database specific. In my quick tests, we store that information in sqlite but not MySQL. - However, timestamps added by SQLAlchemy when you do e.g. save() do include sub-second information, so some DB API calls may return sub-second timestamps even when that information isn't stored in the database. In our REST APIs, you'll essentially see one of three time formats. I'm calling them 'isotime', 'strtime' and 'xmltime': - 'isotime' - this is the result from timeutils.isotime(). It includes timezone information (i.e. a 'Z' suffix) but not microseconds. You'll see this in places where we stringify the datetime objects in the API layer using isotime() before passing them to the JSON/XML serializers. - 'strtime' - this is the result from timeutils.strtime(). 
It doesn't include timezone information but does include decimal seconds. This is what jsonutils.dumps() uses when we're serializing API responses. - 'xmltime' or 'str(datetime)' format - this is just what you get when you stringify a datetime using str(). If the datetime is tz aware or includes non-zero microseconds, then that information will be included in the result. This is a significant difference versus the other two formats, where it is clear whether tz and microsecond information is included in the string. but there are some caveats: - I don't know how significant it is these days, but timestamps will be serialized to strtime format when going over RPC, but won't be de-serialized on the remote end. This could lead to a situation where the API layer tries to stringify a strtime formatted string using timeutils.isotime(). (see below for a description of those formats) - In at least one place - e.g. the 'updated' timestamp for v2 extensions - we hardcode the timestamps as strings in the code and don't currently use one of the formats above. My conclusions from all that: 1) This sucks 2) At the very least, we should be clear in our API samples tests which of the three
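The three formats Mark describes can be reproduced with plain stdlib datetime. A sketch, with the caveat that timeutils.isotime()/strtime() are oslo helpers and the format strings below are my approximation of what they emit:

```python
from datetime import datetime, timezone

# The tz-aware timestamp from the original bug report.
dt = datetime(2014, 4, 8, 9, 0, 14, 399708, tzinfo=timezone.utc)

# 'isotime': timezone marker, no microseconds.
isotime = dt.strftime("%Y-%m-%dT%H:%M:%SZ")
# 'strtime': microseconds, no timezone marker.
strtime = dt.strftime("%Y-%m-%dT%H:%M:%S.%f")
# 'xmltime': whatever str() gives -- tz and microseconds leak through
# if (and only if) the datetime object carries them.
xmltime = str(dt)

print(isotime)   # 2014-04-08T09:00:14Z
print(strtime)   # 2014-04-08T09:00:14.399708
print(xmltime)   # 2014-04-08 09:00:14.399708+00:00
```

The last line is exactly the "unusual" timestamp from the review: once the objects code makes a datetime tz-aware, str() happily emits both the offset and the microseconds.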
Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?
On 04/30/2014 03:41 PM, Mike Spreitzer wrote: Chris Friesen chris.frie...@windriver.com wrote on 04/28/2014 10:44:46 AM: Using a property of a heat resource to trigger the creation of a nova resource would not fit that model. For the sake of your argument, let's pretend that the new ASG blueprint has been fully implemented. That means an ASG is an ordinary virtual resource. In all likelihood the implementation will generate templates and make nested stacks. I think it is fairly natural to suppose that the generated template could include a Nova server group. Second, it seems less well positioned for exposing possible server group enhancements in nova. For example, one enhancement that has been discussed is to add a server group option to make the group scheduling policy a weighting factor if it can't be satisfied as a filter. With the server group as an explicit resource there is a natural way to extend it. Abstractly an autoscaling group is a sub-class of group of servers (ignoring the generalization of server in the relevant cases), so it would seem natural to me that the properties of an autoscaling group would include the properties of a server group. As the latter evolves, so would the former. OTOH, I find nothing particularly bad with doing it as you suggest. BTW, this is just the beginning. What about resources of type AWS::CloudFormation::Stack? What about Trove and Sahara? If we go with what Zane suggested (using the already-exposed scheduler_hints) then by implementing a single server group resource we basically get support for server groups for free in any resource that exposes scheduler hints. That seems to me to be an excellent reason to go that route rather than modifying all the different group-like resources that heat supports. Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
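Chris's scheduler_hints approach can be sketched in HOT template terms. This is a hypothetical snippet: the OS::Nova::ServerGroup resource type is an assumption (no such resource existed at the time of this thread), and the image/flavor values are placeholders:

```yaml
resources:
  anti_affinity_group:
    type: OS::Nova::ServerGroup   # assumed resource type
    properties:
      policies: [anti-affinity]

  server_1:
    type: OS::Nova::Server
    properties:
      image: my-image             # placeholder
      flavor: m1.small            # placeholder
      scheduler_hints:
        group: {get_resource: anti_affinity_group}
```

Because the hint is just a property passed through to Nova, any resource that already exposes scheduler_hints (servers, and by extension the groups that wrap them) would pick up server-group support without per-resource changes, which is the point Chris is making.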
Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit
Carl, Let me ask you something... If my cloud is IPv6-only based (that's my intention), which blueprint will fit it (internal-dns-resolution or external-dns-resolution)? Since IPv6 is all public, don't you think that we (might) need a new blueprint for IPv6-only, like just dns-resolution? BTW, maybe this dns-resolution for IPv6-only networks (if desired) might also handle the IPv4 Floating IPs (in a NAT46 fashion)... My plan is to have IPv4 only at the border (i.e. only at the qg-* interface within the Namespace router (NAT46 will happen here)), so, the old internet infrastructure will be able to reach an IPv6-only project subnet using a well-known FQDN DNS IPv4 entry... Best! Thiago On 29 April 2014 17:09, Carl Baldwin c...@ecbaldwin.net wrote: The design summit discussion topic I submitted [1] for my DNS blueprints [2][3][4] and this one [5] just missed the cut for the design session schedule. It stung a little to be turned down but I totally understand the time and resource constraints that drove the decision. I feel this is an important subject to discuss because the end result will be a better cloud user experience overall. The design summit could be a great time to bring together interested parties from Neutron, Nova, and Designate to discuss the integration that I propose in these blueprints. DNS for IPv6 in Neutron is also something I would like to discuss. Mostly, I'd like to get a good sense for where this is at currently with the current Neutron dns implementation (dnsmasq) and how it will fit in. I've created an etherpad to help us coordinate [6]. If you are interested, please go there and help me flesh it out. 
Carl Baldwin Neutron L3 Subteam [1] http://summit.openstack.org/cfp/details/403 [2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution [3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution [4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution [5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem [6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][LBaaS] Thoughts on current process
Hey everyone, I agree that we need to be preparing for the summit. Using Google docs mixed with Openstack wiki works for me right now. I need to become more familiar with the gerrit process, and I agree with Samuel that it is not conducive to large design discussions. That being said I'd like to add my thoughts on how I think we can most effectively get stuff done. As everyone knows there are many new players from across the industry that have an interest in Neutron LBaaS. Companies I currently see involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, eBay/Paypal and Rackspace. We also have individuals involved as well. I echo Kyle's sentiment on the passion everyone is bringing to the project! Coming into this project a few months ago I saw that a few things needed to be done. Most notably, I realized that gathering everyone's expectations on what they wanted Neutron LBaaS to be was going to be crucial. Hence, I created the requirements document. Written requirements are important within a single organization. They are even more important when multiple organizations are working together because everyone is spread out across the world and every organization has a different development process. Again, my goal with the requirements document is to make sure that everyone's voice in the community is taken into consideration. The benefit I've seen from this document is that we ask Why? to each other, iterate on the document and in the end have a clear understanding of everyone's motives. We also learn from each other by doing this which is one of the great benefits of open source. Now that we have a set of requirements the next question to ask is, How do we prioritize requirements so that we can start designing and implementing them? If this project were a completely new piece of software I would argue that we iterate on individual features based on anecdotal information. In essence I would argue an agile approach. 
However, most of the companies involved have been operating LBaaS for a while now. Rackspace, for example, has been operating LBaaS for the better part of 4 years. We have a clear understanding of what features our customers want and how to operate at scale. I believe other operators of LBaaS have the same understanding of their customers and their operational needs. I guess my main point is that, collectively, we have data to back up which requirements we should be working on. That doesn't mean we preclude requirements based on anecdotal information (i.e. Our customers are saying they want new shiny feature X). At the end of the day I want to prioritize the community's requirements based on factual data and anecdotal information. Assuming requirements are prioritized (which as of today we have a pretty good idea of these priorities) the next step is to design before laying down any actual code. I agree with Samuel that pushing the cart before the horse is a bad idea in this case (and it usually is the case in software development), especially since we have a pretty clear idea on what we need to be designing for. I understand that the current code base has been worked on by many individuals and the work done thus far is the reason why so many new faces are getting involved. However, we now have a completely updated set of requirements that the community has put together and trying to fit the requirements to existing code may or may not work. In my experience, I would argue that 99% of the time duct-taping existing code to fit in new requirements results in buggy software. That being said, I usually don't like to rebuild a project from scratch. If I can I try to refactor as much as possible first. However, in this case we have a particular set of requirements that changes the game. Particularly, operator requirements have not been given the attention they deserve. 
I think of Openstack as being cloud software that is meant to operate at scale and have the necessary operator tools to do so. Otherwise, why do we have so many companies interested in Openstack if you can't operate a cloud that scales? In the case of LBaaS, user/feature requirements and operator requirements are not necessarily mutually exclusive. How you design the system in regards to one set of requirements affects the design of the system in regards to the other set of requirements. SSL termination, for example, affects the ability to scale since it is CPU intensive. As an operator, I need to know how to provision load balancer instances efficiently so that I'm not having to order new hardware more than I have to. With this in mind, I am assuming that most of us are vendor-agnostic and want to cooperate in developing an open source driver while letting vendors create their own drivers. If this is not the case then perhaps a lot of the debates we have been having are moot since we can separate efforts depending on what driver we want to work on. The only item of Neutron LBaaS that we need to have consensus on then is the
Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
Oops! Everywhere I said Samuel I meant Stephen. Sorry you both have SB as you initials so I got confused. :) Cheers, --Jorge On 4/30/14 5:17 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: Hey everyone, I agree that we need to be preparing for the summit. Using Google docs mixed with Openstack wiki works for me right now. I need to become more familiar the gerrit process and I agree with Samuel that it is not conducive to large design discussions. That being said I'd like to add my thoughts on how I think we can most effectively get stuff done. As everyone knows there are many new players from across the industry that have an interest in Neutron LBaaS. Companies I currently see involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, eBay/Paypal and Rackspace. We also have individuals involved as well. I echo Kyle's sentiment on the passion everyone is bringing to the project! Coming into this project a few months ago I saw that a few things needed to be done. Most notably, I realized that gathering everyone's expectations on what they wanted Neutron LBaaS to be was going to be crucial. Hence, I created the requirements document. Written requirements are important within a single organization. They are even more important when multiple organizations are working together because everyone is spread out across the world and every organization has a different development process. Again, my goal with the requirements document is to make sure that everyone's voice in the community is taken into consideration. The benefit I've seen from this document is that we ask Why? to each other, iterate on the document and in the end have a clear understanding of everyone's motives. We also learn from each other by doing this which is one of the great benefits of open source. Now that we have a set of requirements the next question to ask is, How doe we prioritize requirements so that we can start designing and implementing them? 
If this project were a completely new piece of software I would argue that we iterate on individual features based on anecdotal information. In essence I would argue an agile approach. However, most of the companies involved have been operating LBaaS for a while now. Rackspace, for example, has been operating LBaaS for the better part of 4 years. We have a clear understanding of what features our customers want and how to operate at scale. I believe other operators of LBaaS have the same understanding of their customers and their operational needs. I guess my main point is that, collectively, we have data to back up which requirements we should be working on. That doesn't mean we preclude requirements based on anecdotal information (i.e. Our customers are saying they want new shiny feature X). At the end of the day I want to prioritize the community's requirements based on factual data and anecdotal information. Assuming requirements are prioritized (which as of today we have a pretty good idea of these priorities) the next step is to design before laying down any actual code. I agree with Samuel that pushing the cart before the horse is a bad idea in this case (and it usually is the case in software development), especially since we have a pretty clear idea on what we need to be designing for. I understand that the current code base has been worked on by many individuals and the work done thus far is the reason why so many new faces are getting involved. However, we now have a completely updated set of requirements that the community has put together and trying to fit the requirements to existing code may or may not work. In my experience, I would argue that 99% of the time duct-taping existing code to fit in new requirements results in buggy software. That being said, I usually don't like to rebuild a project from scratch. If I can I try to refactor as much as possible first. However, in this case we have a particular set of requirements that changes the game. 
Particularly, operator requirements have not been given the attention they deserve. I think of Openstack as being cloud software that is meant to operate at scale and have the necessary operator tools to do so. Otherwise, why do we have so many companies interested in Openstack if you can't operate a cloud that scales? In the case of LBaaS, user/feature requirements and operator requirements are not necessarily mutually exclusive. How you design the system in regards to one set of requirements affects the design of the system in regards to the other set of requirements. SSL termination, for example, affects the ability to scale since it is CPU intensive. As an operator, I need to know how to provision load balancer instances efficiently so that I'm not ordering new hardware more often than necessary. With this in mind, I am assuming that most of us are vendor-agnostic and want to cooperate in developing an open source driver while letting vendors create their own drivers. If this is not the case then
Re: [openstack-dev] SR-IOV summit session
Yes, exactly. Don't write a presentation, come with a plan in bullet points in an etherpad, and be prepared to have an active discussion about how that plan might change... If you'd like I am sure people here would be happy to pre-review an etherpad to make sure you're on the right track. Cheers, Michael On Thu, May 1, 2014 at 1:30 AM, Adam Young ayo...@redhat.com wrote: On 04/30/2014 10:06 AM, Robert Li (baoli) wrote: Hi John, With the summit around the corner, please advise how we should run this session: http://summit.openstack.org/cfp/details/248 We are currently working on this nova spec, https://review.openstack.org/#/c/86606/. I guess its content will be a candidate to be presented in the session. Thanks, Robert ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Don't worry too much about running the session. Be prepared to moderate a discussion, and come with your ideas clear, well thought out, and defensible, but with an open mind to alternatives. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ironic] should we have an IRC meeting next week ?
Hi all, Just a reminder that May 5th is our next scheduled meeting day, but I probably won't make it, because I'll be just getting back from one trip and start two consecutive weeks of conference travel early the next morning. Chris Krelle (nobodycam) has offered to chair that meeting in my absence. The agenda looks pretty light at this point, and any serious discussions should just be punted to the summit anyway, so if folks want to cancel the meeting, I think that's fine. Also, if there are summit or scheduling related matters that anyone needs to discuss with me, please use email (either direct to me, or on this list) and I will respond, as my IRC availability for the next ~10 days will be limited due to travel. We won't have a meeting on May 12th... because we'll all be in Atlanta :) Regards, Devananda ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?
On 2014-04-29 18:01:44 + (+), Jeremy Stanley wrote: [...] I'll check with the team handling venue logistics right now and update this thread with options. I'm also inquiring about the availability of a projector if we get one of the non-design-session breakout rooms, and I can bring a digital micro/macroscope which ought to do a fair job of showing everyone's photo IDs through it if so (which would enable us to use The 'Sassman-Projected' Method and significantly increase our throughput via parallelization). I was able to confirm a dedicated location (room 405, just one floor up from the design summit sessions) Wednesday 10:30-11:00am with a projector and seating for 50. We're getting close to that many on the sign-up sheet already... chances are we'll be able to put each ID up on the projector for 15-20 seconds, taking buffer time into account at the beginning for walking to the room and reading the master list checksum aloud. I've asked the organizers to list this as OpenStack Web of Trust: Key Signing and have updated https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit accordingly. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS
I agree it may be odd, but is that a strong argument? To me, following RESTful style/constructs is the main thing to consider. If people can specify everything in the parent resource then let them (i.e. single call). If they want to specify at a more granular level then let them do that too (i.e. multiple calls). At the end of the day the API user can choose the style they want. Cheers, --Jorge From: Youcef Laribi youcef.lar...@citrix.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, April 30, 2014 1:35 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS Sam, I think it’s important to keep the Neutron API style consistent. It would be odd if LBaaS uses a different style than the rest of the Neutron APIs. Youcef From: Samuel Bercovici [mailto:samu...@radware.com] Sent: Wednesday, April 30, 2014 10:59 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS Hi Everyone, During the last few days I have looked into the different LBaaS API proposals. I have also looked at the API style used in Neutron. I wanted to see how Neutron APIs addressed “tree” like object models. My observations follow: 1. Security groups - http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html – a. 
security-group-rules are children of security-groups (see http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html); the capability to create a security group with its children in a single call is not possible. b. The capability to create security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules is not supported. c. The capability to update security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID} is not supported. d. The notion of creating security-group-rules (child object) without providing the parent {SG-ID} is not supported. 2. 
Firewall as a service - http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html - the API to manage firewall_policy and firewall_rule, which have parent-child relationships, behaves the same way as Security groups 3. Group Policy – this is work in progress - https://wiki.openstack.org/wiki/Neutron/GroupPolicy - If I understand correctly, this API has a complex object model while the API adheres to the way other neutron APIs are done (ex: flat model, granular api, etc.) How critical is it to preserve a consistent API style for LBaaS? Should this be a consideration when evaluating API proposals? Regards, -Sam. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
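[Editor's illustration] Sam's observations (a) and (b) boil down to this: with the current flat API, the parent security group and each of its rules are separate POSTs, and the parent's ID travels in the rule's request body rather than in a nested URL. A minimal sketch of the request shapes, with hypothetical helper names (payload keys follow the documented v2.0 security-groups extension):

```python
# Sketch of the granular call sequence the current Neutron v2.0 API
# requires. Note there is no nested
# /v2.0/security-groups/{sg_id}/security-group-rules path -- the
# parent ID goes in the rule body instead.

def create_security_group_request(name):
    """Request tuple (method, path, body) for creating the parent group."""
    return ("POST", "/v2.0/security-groups",
            {"security_group": {"name": name}})

def create_rule_request(sg_id, protocol, port):
    """Request tuple for one child rule; a separate call per rule."""
    return ("POST", "/v2.0/security-group-rules",
            {"security_group_rule": {
                "security_group_id": sg_id,   # parent linkage in the body
                "protocol": protocol,
                "port_range_min": port,
                "port_range_max": port,
                "direction": "ingress"}})
```

So a group with N rules costs N+1 round trips, which is exactly the single-call gap Sam points out.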
Re: [openstack-dev] [Neutron][Nova][Designate][L3][IPv6] Discussion about Cross-Project Integration of DNS at Summit
Thiago, Throwing IPv6 in the mix does blur the distinction between internal and external. In the blueprints, internal and external have more to do with whether we're dealing with the name/IP mapping internally to Neutron or externally to Neutron by integrating with an external DNS service. In other words, are DNS entries being consumed by other VMs on the same network from the dnsmasq server, or are they being consumed external to the network from DNSaaS? This is the type of question that I was looking forward to, to help flesh out the blueprints to make them IPv6 friendly. Thanks for asking. I don't think that we need a separate blueprint as I think that IPv6 will be worked into the current Neutron architecture. Sean Collins made one comment on my blueprint that IPv6 addresses are being inserted into the dnsmasq host file. Thoughts? Carl On Wed, Apr 30, 2014 at 4:10 PM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote: Carl, Let me ask you something... If my cloud is IPv6-Only based (that's my intention), which blueprint will fit on it (internal-dns-resolution or external-dns-resolution) ? Since IPv6 is all public, don't you think that we (might) need a new blueprint for IPv6-Only, like just dns-resolution? BTW, maybe this dns-resolution for IPv6-Only networks (if desired) might also handle the IPv4 Floating IPs (in a NAT46 fashion)... My plan is to have IPv4 only at the border (i.e. only at the qg-* interface within the Namespace router (NAT46 will happen here)), so, the old internet infrastructure will be able to reach an IPv6-Only project subnet using a well-known FQDN DNS IPv4 entry... Best! Thiago On 29 April 2014 17:09, Carl Baldwin c...@ecbaldwin.net wrote: The design summit discussion topic I submitted [1] for my DNS blueprints [2][3][4] and this one [5] just missed the cut for the design session schedule. It stung a little to be turned down but I totally understand the time and resource constraints that drove the decision. 
I feel this is an important subject to discuss because the end result will be a better cloud user experience overall. The design summit could be a great time to bring together interested parties from Neutron, Nova, and Designate to discuss the integration that I propose in these blueprints. DNS for IPv6 in Neutron is also something I would like to discuss. Mostly, I'd like to get a good sense for where this is at currently with the current Neutron dns implementation (dnsmasq) and how it will fit in. I've created an etherpad to help us coordinate [6]. If you are interested, please go there and help me flesh it out. Carl Baldwin Neutron L3 Subteam [1] http://summit.openstack.org/cfp/details/403 [2] https://blueprints.launchpad.net/neutron/+spec/internal-dns-resolution [3] https://blueprints.launchpad.net/nova/+spec/internal-dns-resolution [4] https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution [5] https://blueprints.launchpad.net/neutron/+spec/dns-subsystem [6] https://etherpad.openstack.org/p/juno-dns-neutron-nova-designate ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?
Awesome! thanks @fungi -- dims On Wed, Apr 30, 2014 at 6:47 PM, Jeremy Stanley fu...@yuggoth.org wrote: On 2014-04-29 18:01:44 + (+), Jeremy Stanley wrote: [...] I'll check with the team handling venue logistics right now and update this thread with options. I'm also inquiring about the availability of a projector if we get one of the non-design-session breakout rooms, and I can bring a digital micro/macroscope which ought to do a fair job of showing everyone's photo IDs through it if so (which would enable us to use The 'Sassman-Projected' Method and significantly increase our throughput via parallelization). I was able to confirm a dedicated location (room 405, just one floor up from the design summit sessions) Wednesday 10:30-11:00am with a projector and seating for 50. We're getting close to that many on the sign-up sheet already... chances are we'll be able to put each ID up on the projector for 15-20 seconds, taking buffer time into account at the beginning for walking to the room and reading the master list checksum aloud. I've asked the organizers to list this as OpenStack Web of Trust: Key Signing and have updated https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit accordingly. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: http://davanum.wordpress.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Turbo-Hipster bad votes
Hi all, Unfortunately turbo-hipster has been leaving bad votes on nova db migrations. I've disabled voting and we're looking into the problem. Sorry for the inconvenience. If you have any questions please feel free to ask. Regards, Josh ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Infra] Updates for SPF Record needed
[Cc'ing the infra ML in case some concerned parties aren't subbed] On 2014-04-30 08:52:32 -0700 (-0700), Jay Faulkner wrote: [...] 2) Make the SPF record accurate: v=spf1 include:emailsrvr.com include:sendgrid.net a:review.openstack.org ~all. For any additional services that send mail for openstack.org, an additional a:my.host.name.openstack.org would be added to the SPF record. [...] I strongly recommend we pursue option 2 -- this would mean if you know of any other devices sending mail to @openstack.org, please reply to this thread with the information so we can draft a valid SPF record. I completely agree that #2 is the best path forward (and have gone ahead and added a:review.openstack.org and given the foundation admins with whom we share control of that domain a heads up). This should be a nondisruptive addition and ought to improve deliverability of messages coming from the new IP addresses. We can certainly add further exceptions as we identify them, but for now this should help avoid some pain. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?
Excerpts from Jeremy Stanley's message of 2014-04-30 15:47:21 -0700: On 2014-04-29 18:01:44 + (+), Jeremy Stanley wrote: [...] I'll check with the team handling venue logistics right now and update this thread with options. I'm also inquiring about the availability of a projector if we get one of the non-design-session breakout rooms, and I can bring a digital micro/macroscope which ought to do a fair job of showing everyone's photo IDs through it if so (which would enable us to use The 'Sassman-Projected' Method and significantly increase our throughput via parallelization). I was able to confirm a dedicated location (room 405, just one floor up from the design summit sessions) Wednesday 10:30-11:00am with a projector and seating for 50. We're getting close to that many on the sign-up sheet already... chances are we'll be able to put each ID up on the projector for 15-20 seconds, taking buffer time into account at the beginning for walking to the room and reading the master list checksum aloud. I've asked the organizers to list this as OpenStack Web of Trust: Key Signing and have updated https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit accordingly. Fantastic. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS
On 04/30/2014 01:59 PM, Samuel Bercovici wrote: Hi Everyone, During the last few days I have looked into the different LBaaS API proposals. I have also looked at the API style used in Neutron. I wanted to see how Neutron APIs addressed “tree” like object models. Please, please don't follow the Nova v2 API's example on anything. Frankly, there are so many annoyances about it -- proliferation of extensions, lack of REST payload schema discoverability, use of the project in the URL for everything, use of XML, redundant subresource names, etc -- that it's just about the worst thing to emulate IMO. My observations follow: 1. Security groups - http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html – a. security-group-rules are children of security-groups; the capability to create a security group with its children in a single call is not possible. Agreed. It should be possible to construct a security group and all of its rules in a single request payload. b. The capability to create security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules is not supported Agreed, though as noted above, there's no reason to repeat security-group in security-group-rules. 
We already know it's related to a security group, because the word security-group is already in the darn URL. So, just make it: GET /security-group/$UUID/rules c. The capability to update security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID} is not supported Agreed, it would be good to support: PUT/DELETE /security-group/$UUID/rules/$RULE_ID d. The notion of creating security-group-rules (child object) without providing the parent {SG-ID} is not supported I'm on the fence on this one. What is a security group rule without a security group? Do you mean you'd like to support easily being able to add some common protocol/port/ingress/egress rules to new security groups without having to select port ranges and protocols each time you create a security group? Best, -jay 2. Firewall as a service - http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html - the API to manage firewall_policy and firewall_rule, which have parent-child relationships, behaves the same way as Security groups 3. Group Policy – this is work in progress - https://wiki.openstack.org/wiki/Neutron/GroupPolicy - If I understand correctly, this API has a complex object model while the API adheres to the way other neutron APIs are done (ex: flat model, granular api, etc.) How critical is it to preserve a consistent API style for LBaaS? 
Should this be a consideration when evaluating API proposals? Regards, -Sam. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
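[Editor's illustration] The nested sub-resource routes Jay suggests (GET /security-group/$UUID/rules, PUT/DELETE .../rules/$RULE_ID) can be illustrated with a toy dispatcher; a minimal stdlib sketch, purely illustrative and not how Neutron's routing layer is actually built:

```python
import re

# A toy route table for the nested sub-resource style: the parent ID is
# captured from the URL, so rule operations never need it in the body.
ROUTES = [
    ("GET",    re.compile(r"^/security-group/(?P<sg>[^/]+)/rules$")),
    ("POST",   re.compile(r"^/security-group/(?P<sg>[^/]+)/rules$")),
    ("PUT",    re.compile(r"^/security-group/(?P<sg>[^/]+)/rules/(?P<rule>[^/]+)$")),
    ("DELETE", re.compile(r"^/security-group/(?P<sg>[^/]+)/rules/(?P<rule>[^/]+)$")),
]

def match(method, path):
    """Return the captured path parameters for the first matching route, else None."""
    for m, pattern in ROUTES:
        if m == method:
            hit = pattern.match(path)
            if hit:
                return hit.groupdict()
    return None
```

Note there is no redundant "security-group-rules" segment: the parent resource name already appears in the path, so the child collection is just "rules".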
Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS
It's also worth stating that coding a web UI to deploy a new service is often easier when single-call is an option. (i.e. only one failure scenario to deal with.) I don't see a strong reason we shouldn't allow both single-call creation of a whole bunch of related objects, as well as a workflow involving the creation of these objects individually. On Wed, Apr 30, 2014 at 3:50 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: I agree it may be odd, but is that a strong argument? To me, following RESTful style/constructs is the main thing to consider. If people can specify everything in the parent resource then let them (i.e. single call). If they want to specify at a more granular level then let them do that too (i.e. multiple calls). At the end of the day the API user can choose the style they want. Cheers, --Jorge From: Youcef Laribi youcef.lar...@citrix.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, April 30, 2014 1:35 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS Sam, I think it’s important to keep the Neutron API style consistent. It would be odd if LBaaS uses a different style than the rest of the Neutron APIs. Youcef From: Samuel Bercovici [mailto:samu...@radware.com] Sent: Wednesday, April 30, 2014 10:59 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Neutron][LBaaS]Conforming to Open Stack API style in LBaaS Hi Everyone, During the last few days I have looked into the different LBaaS API proposals. I have also looked at the API style used in Neutron. I wanted to see how Neutron APIs addressed “tree” like object models. My observations follow: 1. Security groups - http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html – a. 
security-group-rules are children of security-groups (see http://docs.openstack.org/api/openstack-network/2.0/content/security-groups-ext.html); the capability to create a security group with its children in a single call is not possible. b. The capability to create security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules is not supported. c. The capability to update security-group-rules using the URI path v2.0/security-groups/{SG-ID}/security-group-rules/{SGR-ID} is not supported. d. The notion of creating security-group-rules (child object) without providing the parent {SG-ID} is not supported. 2. 
Firewall as a service - http://docs.openstack.org/api/openstack-network/2.0/content/fwaas_ext.html - the API to manage firewall_policy and firewall_rule, which have parent-child relationships, behaves the same way as Security groups 3. Group Policy – this is work in progress - https://wiki.openstack.org/wiki/Neutron/GroupPolicy - If I understand correctly, this API has a complex object model while the API adheres to the way other neutron APIs are done (ex: flat model, granular api, etc.) How critical is it to preserve a consistent API style for LBaaS? Should this be a consideration when evaluating API proposals? Regards, -Sam. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___
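[Editor's illustration] One way to offer both styles Stephen describes is to layer single-call creation on top of the granular operations: the parent payload may embed child rules, and the handler fans them out. A hypothetical sketch (all names invented for illustration; this is not proposed LBaaS or Neutron code):

```python
# Single-call create layered over a granular backend: the UI gets one
# failure point, while the backend still performs individual creates.

def create_security_group(payload, backend):
    rules = payload.pop("rules", [])   # optional embedded children
    sg = backend.create_sg(payload)    # create the parent first
    # Fan the embedded children out into individual granular creates.
    sg["rules"] = [backend.create_rule(sg["id"], r) for r in rules]
    return sg

class FakeBackend:
    """Stand-in for the granular API, just to exercise the shim."""
    def __init__(self):
        self.next_id = 0
    def create_sg(self, payload):
        self.next_id += 1
        return {"id": "sg-%d" % self.next_id, **payload}
    def create_rule(self, sg_id, rule):
        return {"security_group_id": sg_id, **rule}
```

A caller who prefers the granular workflow simply posts with no embedded rules and adds them later, so neither style precludes the other.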
Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
Hi Jorge! +1 to everything you just said. In fact, I think you said essentially what I was trying to say last week with 100% less drama. I'll work to add workflows to my proposal, but please note it's late on a Wednesday and tomorrow's IRC meeting is awfully early in my time zone. I probably won't have workflow documentation done in time. Thanks, Stephen On Wed, Apr 30, 2014 at 3:26 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: Oops! Everywhere I said Samuel I meant Stephen. Sorry you both have SB as your initials so I got confused. :) Cheers, --Jorge On 4/30/14 5:17 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: Hey everyone, I agree that we need to be preparing for the summit. Using Google docs mixed with Openstack wiki works for me right now. I need to become more familiar with the gerrit process and I agree with Samuel that it is not conducive to large design discussions. That being said, I'd like to add my thoughts on how I think we can most effectively get stuff done. As everyone knows there are many new players from across the industry that have an interest in Neutron LBaaS. Companies I currently see involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, eBay/Paypal and Rackspace. We also have individuals involved as well. I echo Kyle's sentiment on the passion everyone is bringing to the project! Coming into this project a few months ago I saw that a few things needed to be done. Most notably, I realized that gathering everyone's expectations on what they wanted Neutron LBaaS to be was going to be crucial. Hence, I created the requirements document. Written requirements are important within a single organization. They are even more important when multiple organizations are working together because everyone is spread out across the world and every organization has a different development process. Again, my goal with the requirements document is to make sure that everyone's voice in the community is taken into consideration. 
The benefit I've seen from this document is that we ask Why? to each other, iterate on the document and in the end have a clear understanding of everyone's motives. We also learn from each other by doing this which is one of the great benefits of open source. Now that we have a set of requirements the next question to ask is, How do we prioritize requirements so that we can start designing and implementing them? If this project were a completely new piece of software I would argue that we iterate on individual features based on anecdotal information. In essence I would argue an agile approach. However, most of the companies involved have been operating LBaaS for a while now. Rackspace, for example, has been operating LBaaS for the better part of 4 years. We have a clear understanding of what features our customers want and how to operate at scale. I believe other operators of LBaaS have the same understanding of their customers and their operational needs. I guess my main point is that, collectively, we have data to back up which requirements we should be working on. That doesn't mean we preclude requirements based on anecdotal information (i.e. Our customers are saying they want new shiny feature X). At the end of the day I want to prioritize the community's requirements based on factual data and anecdotal information. Assuming requirements are prioritized (and as of today we have a pretty good idea of these priorities) the next step is to design before laying down any actual code. I agree with Samuel that putting the cart before the horse is a bad idea in this case (and it usually is the case in software development), especially since we have a pretty clear idea on what we need to be designing for. I understand that the current code base has been worked on by many individuals and the work done thus far is the reason why so many new faces are getting involved. 
However, we now have a completely updated set of requirements that the community has put together and trying to fit the requirements to existing code may or may not work. In my experience, I would argue that 99% of the time duct-taping existing code to fit in new requirements results in buggy software. That being said, I usually don't like to rebuild a project from scratch. If I can I try to refactor as much as possible first. However, in this case we have a particular set of requirements that changes the game. Particularly, operator requirements have not been given the attention they deserve. I think of Openstack as being cloud software that is meant to operate at scale and have the necessary operator tools to do so. Otherwise, why do we have so many companies interested in Openstack if you can't operate a cloud that scales? In the case of LBaaS, user/feature requirements and operator requirements are not necessarily mutually exclusive. How you design the system in regards to one set of requirements
Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?
Chris Friesen chris.frie...@windriver.com wrote on 04/30/2014 06:07:49 PM: If we go with what Zane suggested (using the already-exposed scheduler_hints) then by implementing a single server group resource we basically get support for server groups for free in any resource that exposes scheduler hints. That seems to me to be an excellent reason to go that route rather than modifying all the different group-like resources that heat supports. Yes, I agree there is economy of implementation in that approach. However it seems (to me, anyway) a little awkward from the point of view of the template author. Not terribly so, but a little. I am generally wary of analogies, but I will try to make one that is not much of a stretch. Consider a multiple-inheritance class hierarchy where class C inherits from A and B; when constructing a C, the user passes the constructor parameters of both A and B. That's fairly natural and usable. Now consider an alternative language that forbids multiple inheritance of classes; class A knows that it might work together with a B to accomplish what the forbidden C would do; to use the combined functionality the user has to construct a B and pass it to the constructor of A. That works, but the language with multiple inheritance is more convenient to use. Trove and Sahara are not implemented by Heat. They are going to be more interesting cases. Regards, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
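[Editor's illustration] For concreteness, the scheduler_hints approach Zane suggested might look something like this in a template; a hypothetical HOT sketch assuming a server-group resource type and a nova-style group hint (resource and property names are illustrative, not a committed design):

```yaml
resources:
  my_group:
    # Hypothetical resource type wrapping a nova server group.
    type: OS::Nova::ServerGroup
    properties:
      policies: [anti-affinity]

  server_one:
    type: OS::Nova::Server
    properties:
      image: my-image
      flavor: m1.small
      # Any resource that already exposes scheduler_hints picks up
      # server-group support for free, per Chris's point above.
      scheduler_hints:
        group: { get_resource: my_group }
```

The single new resource carries the group policy; membership is expressed wherever scheduler_hints already exists, with no changes to the individual group-like resources.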
Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
Jorge, In looking over your API proposal linked above, have things significantly changed there since you sent it out two weeks ago? (And if so, which parts should I take a look at again?) Thanks, Stephen On Wed, Apr 30, 2014 at 5:07 PM, Stephen Balukoff sbaluk...@bluebox.net wrote: Hi Jorge! +1 to everything you just said. In fact, I think you said essentially what I was trying to say last week with 100% less drama. I'll work to add workflows to my proposal, but please note it's late on a Wednesday and tomorrow's IRC meeting is awfully early in my time zone. I probably won't have workflow documentation done in time. Thanks, Stephen On Wed, Apr 30, 2014 at 3:26 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: Oops! Everywhere I said Samuel I meant Stephen. Sorry, you both have SB as your initials, so I got confused. :) Cheers, --Jorge On 4/30/14 5:17 PM, Jorge Miramontes jorge.miramon...@rackspace.com wrote: Hey everyone, I agree that we need to be preparing for the summit. Using Google docs mixed with the Openstack wiki works for me right now. I need to become more familiar with the gerrit process, and I agree with Samuel that it is not conducive to large design discussions. That being said, I'd like to add my thoughts on how I think we can most effectively get stuff done. As everyone knows, there are many new players from across the industry with an interest in Neutron LBaaS. Companies I currently see involved/interested are Mirantis, Blue Box Group, HP, PNNL, Citrix, eBay/Paypal and Rackspace. We also have individuals involved as well. I echo Kyle's sentiment on the passion everyone is bringing to the project! Coming into this project a few months ago, I saw that a few things needed to be done. Most notably, I realized that gathering everyone's expectations on what they wanted Neutron LBaaS to be was going to be crucial. Hence, I created the requirements document. Written requirements are important within a single organization. 
They are even more important when multiple organizations are working together, because everyone is spread out across the world and every organization has a different development process. Again, my goal with the requirements document is to make sure that everyone's voice in the community is taken into consideration. The benefit I've seen from this document is that we ask "Why?" of each other, iterate on the document, and in the end have a clear understanding of everyone's motives. We also learn from each other by doing this, which is one of the great benefits of open source. Now that we have a set of requirements, the next question to ask is: how do we prioritize requirements so that we can start designing and implementing them? If this project were a completely new piece of software, I would argue that we iterate on individual features based on anecdotal information; in essence, I would argue for an agile approach. However, most of the companies involved have been operating LBaaS for a while now. Rackspace, for example, has been operating LBaaS for the better part of 4 years. We have a clear understanding of what features our customers want and how to operate at scale. I believe other operators of LBaaS have the same understanding of their customers and their operational needs. I guess my main point is that, collectively, we have data to back up which requirements we should be working on. That doesn't mean we preclude requirements based on anecdotal information (i.e. "Our customers are saying they want new shiny feature X"). At the end of the day I want to prioritize the community's requirements based on factual data and anecdotal information. Assuming requirements are prioritized (and as of today we have a pretty good idea of these priorities), the next step is to design before laying down any actual code. 
I agree with Samuel that putting the cart before the horse is a bad idea in this case (as it usually is in software development), especially since we have a pretty clear idea of what we need to be designing for. I understand that the current code base has been worked on by many individuals, and the work done thus far is the reason why so many new faces are getting involved. However, we now have a completely updated set of requirements that the community has put together, and trying to fit the requirements to existing code may or may not work. In my experience, I would argue that 99% of the time duct-taping existing code to fit new requirements results in buggy software. That being said, I usually don't like to rebuild a project from scratch; if I can, I try to refactor as much as possible first. However, in this case we have a particular set of requirements that changes the game. Particularly, operator requirements have not been given the attention they deserve. I think of Openstack as being cloud software that is meant to operate at scale and have the necessary operator tools to do so.
[openstack-dev] [designate] sink listeners for fixed and floating IP addresses
designate-sink currently ships with a nova_fixed handler, which listens for the nova events compute.instance.create.end and compute.instance.delete.start, and a neutron_floatingip handler for the events floatingip.update.end and floatingip.delete.start. 1) Is it correct to say that nova_fixed is for internal IP addresses (for intra-cloud networking) and that neutron_floatingip is for external IP addresses (for access from outside the cloud)? 2) Nova can also assign and remove floating IP addresses (nova add-floating-ip/remove-floating-ip) - should we have listeners for the nova network.floating_ip.associate and network.floating_ip.disassociate events? 3) Is there a difference between nova and neutron floating IP addresses? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
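To make question 2 concrete, here is a minimal, purely illustrative Python sketch of what a handler for the nova floating-IP events could look like. This is NOT the actual designate-sink handler API; the class name, event names, and payload keys are assumptions based on the notification names mentioned above:

```python
# Illustrative sketch only -- not the real designate-sink handler interface.
# A handler declares which notification event types it consumes and maps
# them to DNS record actions.

class HypotheticalFloatingIPHandler:
    # Assumption: event names as emitted by nova-network; the shipped
    # neutron_floatingip handler listens for floatingip.update.end and
    # floatingip.delete.start instead.
    EVENT_TYPES = [
        'network.floating_ip.associate',
        'network.floating_ip.disassociate',
    ]

    def __init__(self):
        self.records = {}  # address -> instance id, stand-in for DNS records

    def process_notification(self, event_type, payload):
        # Dispatch on the notification's event type.
        if event_type == 'network.floating_ip.associate':
            self.records[payload['floating_ip']] = payload['instance_id']
        elif event_type == 'network.floating_ip.disassociate':
            self.records.pop(payload['floating_ip'], None)


handler = HypotheticalFloatingIPHandler()
handler.process_notification('network.floating_ip.associate',
                             {'floating_ip': '203.0.113.5',
                              'instance_id': 'vm-1'})
```

The open design question is simply whether such a nova-side handler should exist alongside the neutron one, given that both services can drive floating-IP lifecycle events.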
Re: [openstack-dev] [Neutron][ML2] No ML2 sub-team meeting today (4/30/2014)
Today's ML2 sub-team meeting is cancelled. -Bob Will the next one be held normally? I want to hear details of the modular L2 agent idea before the summit. Sooner is better, because it's blocking ofagent work. YAMAMOTO Takashi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Nova API] No Meeting this Week
Hi, Since this is the scheduled off week, I'm going to cancel this week's Nova API meeting. The next meeting will be on May 8th and will be at 00:00 UTC. Thanks Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Thoughts on current process
Hi, Like Stephen, +1 to everything Jorge said. Another concern is that some decisions impacting LBaaS operator requirements (e.g. SSL, flavor framework, service chaining) are discussed/decided in the advanced services group (see https://wiki.openstack.org/wiki/Neutron/AdvancedServices). Vijay did an excellent job involving us in LBaaS in the SSL discussion. Thanks, Vijay! I am wondering how the process is supposed to work between the two groups. There is a lot of overlap right now, and decisions in the areas currently discussed in Advanced Services impact our operator requirements for load balancing a great deal. Kyle, I am wondering if you can provide some guidance on how you envision LBaaS operator requirements being considered in other parts of Neutron. Thanks, German
Re: [openstack-dev] [qa] [cinder] Do we now require schema response validation in tempest clients?
Hi David, 2014-05-01 5:44 GMT+09:00 David Kranz dkr...@redhat.com: There have been a lot of patches that add validation of response dicts. We need a policy on whether this is required or not. For example, this patch https://review.openstack.org/#/c/87438/5 is for the equivalent of 'cinder service-list' and is basically a copy of the nova test, which now does the validation. So two questions: Is cinder going to do this kind of checking? If so, should new tests be required to do it on submission? I'm not sure whether someone will add validation similar to what we are adding to the Nova API tests to the Cinder API tests as well, but it would be nice for Cinder and Tempest. The validation can be applied to the other projects (Cinder, etc.) easily because the base framework is implemented in the common REST client of Tempest. When adding new tests like https://review.openstack.org/#/c/87438 , I don't have a strong opinion on including the validation as well. These schemas can sometimes be large, and combining them in the same patch would make reviews difficult. In the current Nova API test implementations, we are separating them into different patches. Thanks Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
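For illustration, here is a stdlib-only toy version of the kind of response-schema check being discussed. The schema below is invented for the example (it is not the actual Tempest schema for 'cinder service-list'); in Tempest itself the validation is done with the jsonschema library via the common REST client:

```python
# Toy stand-in for jsonschema-style response validation. Supports only the
# 'type', 'required', 'properties' and 'items' keywords, which is enough to
# show the shape of the checks Tempest applies to API responses.

def validate(instance, schema):
    """Recursively check a response dict against a minimal schema."""
    expected = schema.get('type')
    types = {'object': dict, 'array': list, 'string': str, 'integer': int}
    if expected and not isinstance(instance, types[expected]):
        raise ValueError('expected %s, got %r' % (expected, instance))
    for key in schema.get('required', []):
        if key not in instance:
            raise ValueError('missing required key: %s' % key)
    for key, subschema in schema.get('properties', {}).items():
        if key in instance:
            validate(instance[key], subschema)
    if expected == 'array' and 'items' in schema:
        for item in instance:
            validate(item, schema['items'])

# Illustrative schema for a service-list-style response (not Tempest's).
list_services_schema = {
    'type': 'object',
    'required': ['services'],
    'properties': {
        'services': {
            'type': 'array',
            'items': {
                'type': 'object',
                'required': ['binary', 'host'],
                'properties': {
                    'binary': {'type': 'string'},
                    'host': {'type': 'string'},
                    'state': {'type': 'string'},
                },
            },
        },
    },
}

# A well-formed response passes silently; a malformed one raises ValueError.
validate({'services': [{'binary': 'cinder-scheduler', 'host': 'node1'}]},
         list_services_schema)
```

The review-size concern in the thread follows from examples like this: even a toy schema for one call is dozens of lines, so bundling it with the test itself makes patches hard to review.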
Re: [openstack-dev] [openstack-sdk-php] discussion: json schema to define apis
Hi, 2014-04-29 10:28 GMT+09:00 Matthew Farina m...@mattfarina.com: *3. Where would JSON schemas come from?* It depends on each OpenStack service. Glance and Marconi (soon) offer schemas directly through the API - so they are directly responsible for maintaining this - we'd just consume it. We could probably cache a local version to minimize requests. For services that do not offer schemas yet, we'd have to use local schema files. There's a project called Tempest which does integration tests for OpenStack clusters, and it uses schema files. So there might be a possibility of using their files in the future. If this is not possible, we'd write them ourselves. It took me 1-2 days to write the entire Nova API. Once a schema file has been fully tested and signed off as 100% operational, it can be frozen as a set version. Can we convert the schema files from Tempest into something we can use? Just FYI: right now Tempest contains schemas for the Nova API only, and the request and response schemas are stored in different directories. We can see them here: request schemas: https://github.com/openstack/tempest/tree/master/etc/schemas/compute response schemas: https://github.com/openstack/tempest/tree/master/tempest/api_schema/compute In the future, the way to handle these schemas in Tempest is one of the topics for the next summit. http://junodesignsummit.sched.org/event/e3999a28ec02aa14b69ad67848be570a Nova also contains request schemas under https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas/v3 These schemas are used only for the Nova v3 API; there is nothing for the v2 API (current) because the v2 API does not use jsonschema. Thanks Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
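As a small sketch of the "cache a local version to minimize requests" idea mentioned above: fetch_schema() here is a hypothetical stand-in for a GET against a schema endpoint such as Glance's /v2/schemas/image, hard-coded so the sketch is runnable:

```python
import json
import os

def fetch_schema():
    # Hypothetical network call; a real SDK would issue an HTTP GET here.
    return {'name': 'image', 'properties': {'id': {'type': 'string'}}}

def get_schema(cache_path):
    """Return the schema, fetching and caching it to disk on first use."""
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    schema = fetch_schema()
    with open(cache_path, 'w') as f:
        json.dump(schema, f)
    return schema
```

Freezing a tested schema "as a set version", as suggested above, would then just mean shipping the cached file with the SDK instead of fetching it at runtime.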