Re: [openstack-dev] [tempest] who builds other part of test environment
Hi Gareth, Tempest shouldn't touch the test that you are looking at. The Jenkins job should execute using the tox.ini in the rally repository, so that part should be the same as your local environment. This is the relevant part loaded onto the Jenkins slaves: https://github.com/openstack-infra/config/blob/master/modules/jenkins/files/slave_scripts/run-pep8.sh. What is the specific difference you were concerned with? Cheers, Josh Rackspace Australia On 3/6/14 1:33 PM, Gareth wrote: Hi Here is a test result: http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.html and its result is different from my local environment. So I want to check some details of the official test environment, for example /home/jenkins/workspace/gate-rally-pep8/tox.ini. I guessed it was in the tempest repo, but it isn't; I didn't find any test tox.ini file in the tempest repo. So it should be hosted in another repo. Which one is that? thanks -- Gareth /Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball/ /OpenStack contributor, kun_huang@freenode/ /My promise: if you find any spelling or grammar mistakes in my email from Mar 1 2013, notify me/ /and I'll donate $1 or ¥1 to an open organization you specify./ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Incubation Request: Murano
On 03/05/2014 02:16 AM, Thomas Spatzier wrote: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014 00:32:08: From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 05/03/2014 00:34 Subject: Re: [openstack-dev] Incubation Request: Murano Hi Thomas, Zane, Thank you for bringing TOSCA into the discussion. I think this is an important topic, as it will help to find better alignment or even a future merge of the Murano DSL and Heat templates. The Murano DSL uses a YAML representation too, so we can easily merge/reuse constructs from Heat and probably any other YAML-based TOSCA formats. I will be glad to join the TOSCA TC. Is there any formal process for that? The first part is that your company must be a member of OASIS. If that is the case, I think you can simply go to the TC page [1] and click a button to join the TC. If your company is not yet a member, you could get in touch with the TC chairs Paul Lipton and Simon Moser and ask for the best next steps. We recently had people from GigaSpaces join the TC, and since they are also doing a very TOSCA-aligned implementation in Cloudify, their input will probably help a lot to advance TOSCA. I also would like to use this opportunity to start a conversation with the Heat team about the Heat roadmap and feature set. As Thomas mentioned in his previous e-mail, the TOSCA topology story is quite well covered by HOT. At the same time there are entities like Plans which are covered by Murano. We had a discussion about bringing workflows into the Heat engine before the HK summit, and it looks like the Heat team has no plans to bring workflows into Heat. That is actually why we mentioned the Orchestration program as a potential place for the Murano DSL, as Heat+Murano together will cover everything that is defined by TOSCA. I remember the discussions about whether to bring workflows into Heat or not.
My personal opinion is that workflows are probably out of the scope of Heat (i.e. everything but the derived orchestration flows the Heat engine implements). So there could well be a layer on top of Heat that lets Heat deal with all topology-related declarative business and adds workflow-based orchestration around it. TOSCA could be a way to describe the respective overarching models and then hand the different processing tasks to the right engine to deal with them. My general take is that workflow would fit in the Orchestration program, but not be integrated into the heat repo specifically. It would be a different repo, managed by the same Orchestration program, just as we have heat-cfntools and other repositories. Figuring out who forms the core team responsible for a program's individual repositories is the most difficult aspect of making such a merge. For example, I'd not want a bunch of folks from Murano to +2/+A Heat-specific repos until they understood the code base in detail, or at least the broad architecture. I think the same thing applies in reverse from the Murano perspective. Ideally, folks that are core on a specific program would need to learn how to broadly review each repo (meaning the Heat devs would have to come up to speed on Murano and the Murano devs would have to come up to speed on Heat). Learning a new code base is a big commitment for an already overtaxed core team. I believe expanding our scope in this way would require TC approval. The main reason I don't want workflow in the heat repo specifically is because it adds complication to Heat itself. We want Heat to be one nice tidy small set of code that does one thing really well. This makes it easy to improve, easy to deploy, and easy to learn!
These reasons are why, for example, we are continuing to push the autoscaling implementation out of Heat and into a separate repository over the next 1 to 2 cycles. This, on the other hand, won't be an expansion of the scope of the Orchestration program, because we already do autoscaling; we just want to make it more consumable. Regards, -steve I think the TOSCA initiative can be a great place to collaborate. I think it will then be possible to use a simplified TOSCA format for application descriptions, as TOSCA is intended to provide such descriptions. Is there a team that is driving the TOSCA implementation in the OpenStack community? I feel that such a team is necessary. We started to implement a TOSCA YAML to HOT converter, and our team member Sahdev (IRC spzala) has recently submitted code for a new stackforge project [2]. This is very initial, but could be a point to collaborate. [1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca [2] https://github.com/stackforge/heat-translator Regards, Thomas Thanks Georgy On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Excerpt from Zane Bitter's message on 04/03/2014 23:16:21: From: Zane Bitter zbit...@redhat.com To:
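The translator Thomas mentions maps TOSCA node templates onto HOT resources. A toy sketch of the general idea, purely illustrative: the real stackforge/heat-translator handles node types, properties, and relationships far more completely, and the one-to-one `OS::Nova::Server` mapping here is an assumption, not the project's actual behaviour.

```python
# Toy TOSCA-to-HOT mapping sketch (illustrative only). Assumes every
# node template becomes an OS::Nova::Server resource, a deliberate
# simplification of what the real heat-translator project does.
def tosca_to_hot(tosca):
    resources = {}
    for name, node in tosca.get("node_templates", {}).items():
        resources[name] = {
            "type": "OS::Nova::Server",
            "properties": node.get("properties", {}),
        }
    return {
        "heat_template_version": "2013-05-23",
        "resources": resources,
    }
```

Feeding it a TOSCA-shaped dict with one `web` node yields a minimal HOT-shaped dict with one resource of the assumed type.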
Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core
On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote: I'd like to nominate Radomir Dopieralski to Horizon Core. I find his reviews very insightful and more importantly have come to rely on their quality. He has contributed to several areas in Horizon and he understands the code base well. Radomir is also very active in tuskar-ui both contributing and reviewing. David As someone who benefits from his insightful reviews, I second the nomination. -- Jason E. Rist Senior Software Engineer OpenStack Management UI Red Hat, Inc. +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen
Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming
Renat, Thanks for the detailed explanation. It is quite instructive and helps frame the question accurately by providing the necessary context. - Is the approach itself reasonable? [Manas] I think so. A repeatable workflow requires this sort of separation, i.e. a model in which to save the specification as supplied by the user and a model in which to save the execution state - current and past. - Do we have better ideas on how to work with DSL? A good mental exercise here would be to imagine that we have more than one DSL, not only YAML but say XML. How would it change the picture? [Manas] As long as we form an abstraction between the DSL format (i.e. YAML/XML) and its consumption, we will be able to move between various formats. (wishful) My personal preference is to not even have DSL show up anywhere in Mistral code apart from taking it as input and transforming it into this first-level specification model - I know this is not the current state. - How can we clearly distinguish between these two models so that it wouldn't be confusing? - Do we have better naming in mind? [Manas] I think we all would agree that the best approach is to have precise naming. I see your point about de-normalizing the DSL data into the respective DB model objects. In a previous email I suggested using *Spec. I will try to build on this - 1. Everything specified via the YAML input is a specification or definition or template. Therefore I suggest we suffix all these types with Spec/Definition/Template, so TaskSpec/TaskDefinition/TaskTemplate etc. As per the latest change these are TaskData ... ActionData. 2. As per the current impl the YAML is stored as a key-value in the DB. This is fine since it is front-ended by the objects that Nikolay has introduced, e.g. TaskData, ActionData etc. 3. As per my thinking, a model object that ends up in the DB and a model object that is in memory can all reside in the same module.
I view persistence as an orthogonal concern, so there is no real reason to distinguish the module names of the two sets of models. If we do choose to distinguish, as per the latest change, i.e. mistral/workbook, that works too. @Nikolay - I am generally ok with the approach. I hope that this helps clarify my thinking and perception. Please ask more questions. Overall I like the approach of formalizing the 2 models. I am ok with the current state of the review and have laid out my preferences. Thanks, Manas On Wed, Mar 5, 2014 at 3:39 AM, Nikolay Makhotkin nmakhot...@mirantis.com wrote: I thought about it today, and I have a good name for the package (instead of 'mistral/model'). What do you think about naming it 'mistral/workbook'? I.e., it means that it contains modules for working with the workbook representation - tasks, services, actions and workflow. This way we are able to get rid of any confusion. Best Regards, Nikolay On Wed, Mar 5, 2014 at 8:50 AM, Renat Akhmerov rakhme...@mirantis.com wrote: I think we forgot to point to the commit itself. Here it is: https://review.openstack.org/#/c/77126/ Manas, can you please provide more details on your suggestion? For now let me just describe the background of Nikolay's question. Basically, we are talking about how we work with data inside Mistral. So far, for example, if a user sent a request to Mistral to start a workflow, Mistral would do the following: - Get the workbook DSL (YAML) from the DB (given that it's already been persisted earlier). - Load it into a dictionary-like structure using the standard 'yaml' library. - Based on this dictionary-like structure, create all the necessary DB objects to track the state of workflow execution objects and individual tasks. - Perform all the necessary logic in the engine and so on. The important thing here is that DB objects contain the corresponding DSL snippets as they are described in DSL (e.g. tasks have the property task_dsl) to reduce the complexity of the relational model that we have in the DB.
Otherwise it would be really complicated, and most of the queries would contain lots of joins. An example of a non-trivial relation in DSL is task → action name → service → service actions → action; as you can see, it would be hard to navigate to an action in the DB from a task if our relational model matched what we have in DSL. However, this approach leads to the problem of addressing DSL properties using hardcoded strings which are spread across the code, and that brings lots of pain when doing refactoring or when trying to understand the structure of the model we describe in DSL; it doesn't allow validation to be done easily, and so on. So, what we have in the DB we've called the model so far, and the dictionary structure coming from DSL we've called just dsl. So if we got a part of the structure related to a task, we would call it dsl_task. What Nikolay is doing now is reworking the approach to how we work with DSL. Now we assume that after we parse a workbook DSL we get some model. So that we
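The flow Renat describes - parse the workbook YAML into a dict, then create flat DB rows that each carry their own DSL snippet - can be sketched as follows. Names here (`task_dsl`, `build_task_rows`) are illustrative, not actual Mistral code, and the dict literal stands in for what `yaml.safe_load()` would return.

```python
# Sketch of the storage approach: each task row keeps its own DSL
# snippet (task_dsl) so the relational schema stays flat and the
# engine never has to join its way back to the full workbook.
PARSED_WORKBOOK = {            # what yaml.safe_load() would produce
    "tasks": {
        "create_vm": {"action": "Nova:createVM", "service": "nova"},
        "attach_ip": {"action": "Nova:attachIP", "service": "nova"},
    },
}

def build_task_rows(workbook):
    """Turn the parsed workbook dict into flat per-task DB rows."""
    return [
        {"name": name, "state": "IDLE", "task_dsl": snippet}
        for name, snippet in workbook["tasks"].items()
    ]
```

The trade-off the thread discusses is visible here: the snippet is opaque to the schema, so code that reads `task_dsl` addresses its keys with hardcoded strings.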
Re: [openstack-dev] [tempest] who builds other part of test environment
The difference is that in my local environment, running tox -epep8 failed, but it succeeded in Jenkins. So I guess the tox.ini may be different.
thanks -- Gareth *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball* *OpenStack contributor, kun_huang@freenode* *My promise: if you find any spelling or grammar mistakes in my email from Mar 1 2013, notify me* *and I'll donate $1 or ¥1 to an open organization you specify.*
Re: [openstack-dev] [tempest] who builds other part of test environment
Fixed! The reason is that .gitignore contains 'build/' but the pep8 check doesn't ignore it, so many style errors were reported in build/*.py. After removing the build dir, everything runs well.
-- Gareth *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball* *OpenStack contributor, kun_huang@freenode* *My promise: if you find any spelling or grammar mistakes in my email from Mar 1 2013, notify me* *and I'll donate $1 or ¥1 to an open organization you specify.*
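Gareth's root cause - flake8/pep8 walks the working tree, while git simply ignores build/ - is commonly handled by excluding the directory in the flake8 section of tox.ini. This is a hypothetical fragment; rally's actual tox.ini may already differ:

```ini
# Hypothetical tox.ini fragment -- rally's real config may differ.
# Excluding build/ makes a local pep8 run match the clean Jenkins
# checkout even when a stale build/ directory is lying around.
[flake8]
exclude = .git,.tox,dist,build,*.egg
```

With the exclude in place, `tox -epep8` no longer depends on whether a previous `python setup.py build` left artifacts behind.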
Re: [openstack-dev] Incubation Request: Murano
Hi Steve, Thank you for sharing your thoughts. I believe that is what we were trying to get as feedback from the TC. The current definition of a program actually suggests the scenario you described. A new project will appear under the Orchestration umbrella. Let's say there will be two projects: one is Heat and another is Workflow (no specific name here, probably some part of Murano). The program will have one PTL (the current Heat PTL) and two separate code teams, one for each project. That was our understanding of what we want. I am not sure that this was stressed enough at the TC meeting. There was no intention to add anything to Heat. Not at all. We just discussed a possibility of splitting the current Murano App Catalog into two parts. The Catalog part would go to the Catalog program to land the App Catalog code near the Glance project and integrate them, as Glance will store application packages for the Murano App Catalog service. The second part of Murano, related to environment processing (deployment, life cycle management, events), would go to the Orchestration program as a new project, like Murano workflows or Murano environment control or anything else. As I mentioned in one of the previous e-mails, we already discussed workflows in Heat with the Heat team before the HK summit. We understand very well that workflows will not fit Heat, and we perfectly understand the reasons why. I think that a good result of the last TC meeting was the official mandate to discuss alignment and integration between the Glance, Heat, Murano and probably other projects. Right now we are considering the following: 1) Continue the discussion around the Catalog program mission and how the Murano App Catalog will fit into this program. 2) Start a conversation with the Heat team in two directions: a) TOSCA and its implementation. How Murano can extend TOSCA and how TOSCA can help Murano to define an application package. Murano should reuse as much as possible from TOSCA to implement this open standard. b) Define the alignment between Heat and Murano.
How workflows can coexist with HOT. What will be the best way to develop both Heat and Workflows within the Orchestration program. 3) Explore the application space for OpenStack. As Thierry mentioned at the TC meeting, there are concerns that it is probably too early for OpenStack to make a new step up the stack. Thanks, Georgy On Wed, Mar 5, 2014 at 7:47 PM, Steven Dake sd...@redhat.com wrote: [snip]
Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming
Alright, good input Manas, appreciate. My comments are below... On 06 Mar 2014, at 10:47, Manas Kelshikar ma...@stackstorm.com wrote: Do we have better ideas on how to work with DSL? A good mental exercise here would be to imagine that we have more than one DSL, not only YAML but say XML. How would it change the picture? [Manas] As long as we form an abstraction between the DSL format i.e. YAML/XML and it consumption we will be able to move between various formats. (wishful) My personal preference is to not even have DSL show up anywhere in Mistral code apart from take it as input and transforming into this first level specification model - I know this is not the current state. Totally agree with your point. That’s what we’re trying to achieve. How can we clearly distinguish between these two models so that it wouldn’t be confusing? Do we have a better naming in mind? [Manas] I think we all would agree that the best approach is to have precise naming. I see your point of de-normalizing the dsl data into respective db model objects. In a previous email I suggested using *Spec. I will try to build on this - 1. Everything specified via the YAML input is a specification or definition or template. Therefore I suggest we suffix all these types with Spec/Definition/Template. So TaskSpec/TaskDefinition/TaskTemplate etc.. As per the latest change these are TaskData ... ActionData. After all the time I spent thinking about it I would choose Spec suffix since it’s short and expresses the intention well enough. In conjunction with “workbook” package name it would look very nice (basically we get specification of workbook which is what we’re talking about, right?) So if you agree then let’s change to TaskSpec, ActionSpec etc. Nikolay, sorry for making you change this patch again and again :) But it’s really important and going to have a long-term effect at the entire system. 2. As per current impl the YAML is stored as a key-value in the DB. 
This is fine since it is front-ended by the objects that Nikolay has introduced, e.g. TaskData, ActionData etc. Yep, right. The only thing I would suggest is to avoid DB fields like “task_dsl” like we have now. The alternative could be “task_spec”. 3. As per my thinking, a model object that ends up in the DB and a model object that is in memory can all reside in the same module. I view persistence as an orthogonal concern, so there is no real reason to distinguish the module names of the two sets of models. If we do choose to distinguish, as per the latest change, i.e. mistral/workbook, that works too. Sorry, I believe I wasn’t clear enough on this thing. I think we shouldn’t have these two models in the same package, since what I meant by “DB model” is actually “execution” and “task”, which carry workflow runtime information and refer to a particular execution (we could also call it “session”). So my point is that these are fundamentally different types of models. The best analogy that comes to my mind is the “class - instance” relationship, where in our case “class” = Specification (TaskSpec etc.) and “instance” = Execution/Task. Does it make any sense? @Nikolay - I am generally ok with the approach. I hope that this helps clarify my thinking and perception. Please ask more questions. Overall I like the approach of formalizing the 2 models. I am ok with the current state of the review and have laid out my preferences. I like the current state of this patch. The only thing I would do is rename “Data” to “Spec”. Thank you. Renat Akhmerov @ Mirantis Inc.
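The “class vs instance” relationship the thread converges on can be sketched like this (names are illustrative only, not actual Mistral code): a *Spec wraps the parsed YAML definition, while a runtime object carries the per-run state and merely references its spec.

```python
# TaskSpec is the "class" side: a read-only view over one task's
# snippet of the workbook YAML. Task is the "instance" side: mutable
# runtime state for one execution that points back at its spec.
class TaskSpec:
    """Read-only view over one task's snippet of the workbook YAML."""
    def __init__(self, data):
        self._data = data

    @property
    def name(self):
        return self._data["name"]

    @property
    def action(self):
        return self._data["action"]

class Task:
    """Runtime state for one execution of a TaskSpec."""
    def __init__(self, spec):
        self.spec = spec
        self.state = "IDLE"

    def start(self):
        self.state = "RUNNING"
```

Many Task instances can share one TaskSpec, which is exactly why keeping the two in separate packages (as Renat argues) is defensible.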
[openstack-dev] [Nova][Cinder] Feature about volume delete protection
Hi all, Currently OpenStack provides a delete volume function to the user, but it seems there is no protection against accidental deletion. As we know, the data in a volume may be very important and valuable, so it would be better to provide a way for the user to avoid deleting a volume by mistake. For example: We could provide a safe delete for volumes. The user can specify how long the volume's deletion will be delayed (before it is actually deleted) when he deletes the volume. Before the volume is actually deleted, the user can cancel the delete operation and get the volume back. After the specified time, the volume will actually be deleted by the system. Any thoughts? Welcome any advice. Best regards to you. -- zhangleiqiang Best Regards
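A minimal sketch of the proposed behaviour (pure illustration, not Cinder code; all names are hypothetical): delete only stamps a purge deadline, restore cancels it, and a periodic task performs the real deletion once the grace period expires.

```python
import time

class DeferredVolumeStore:
    """Toy store demonstrating delayed volume deletion with cancel."""
    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.volumes = {}          # name -> purge deadline (None = live)

    def create(self, name):
        self.volumes[name] = None

    def delete(self, name, now=None):
        now = time.time() if now is None else now
        self.volumes[name] = now + self.grace      # soft delete only

    def restore(self, name):
        if self.volumes.get(name) is None:
            raise ValueError("volume is not pending deletion")
        self.volumes[name] = None                  # cancel the delete

    def purge_expired(self, now=None):
        """What a periodic task would run: actually remove expired volumes."""
        now = time.time() if now is None else now
        expired = [n for n, deadline in self.volumes.items()
                   if deadline is not None and deadline <= now]
        for n in expired:
            del self.volumes[n]                    # actual deletion
        return expired
```

A real implementation would also have to decide how quota, snapshots, and attached state behave during the grace period, which the thread does not yet address.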
Re: [openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)
On Thu, Mar 6, 2014 at 12:26 PM, Tian, Shuangtai shuangtai.t...@intel.com wrote: Hi, I would like to make a request for FFE for one patch in novaclient for the PCI V3 API: https://review.openstack.org/#/c/75324/ [snip] BTW the PCI patches in V2 will defer to Juno. I'm confused. If this isn't landing in v2 in icehouse, I'm not sure we should do a FFE for v3. I don't think right at this moment we want to be encouraging users to use v3, so why does waiting matter? Michael -- Rackspace Australia
[openstack-dev] [Neutron][ML2]
Hi All, I have a question regarding the ML2 plugin in neutron: My understanding is that 'Ml2Plugin' is the default core_plugin for neutron ML2. We can use either the default plugin or our own plugin (i.e. my_ml2_core_plugin, which can inherit from Ml2Plugin) and use it as core_plugin. Is my understanding correct? Regards, Nader.
Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?
At Wed, 05 Mar 2014 15:42:54 +0100, Miguel Angel Ajo wrote: 3) I also find 10 minutes a long time to set up 192 networks/basic tenant structures. I wonder if that time could be reduced by converting system process calls into system library calls (I know we don't have libraries for iproute, iptables?, and many other things... but it's a problem that's probably worth looking at.) Try benchmarking $ sudo ip netns exec qfoobar /bin/echo Network namespace switching costs almost as much as a rootwrap execution, IIRC. Execution coalescing is not enough in this case, and we would need to change how Neutron issues commands, IMO.
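The overhead being discussed can be felt even without rootwrap or namespaces: every external command Neutron issues pays at least the bare fork/exec cost, which this snippet measures. It is a Linux-oriented demo needing no sudo; the real `sudo ip netns exec … rootwrap` path adds Python interpreter startup and setns() cost on top of what is measured here.

```python
import subprocess
import time

def mean_spawn_cost(n=20):
    """Average wall time to fork/exec a trivial child process."""
    start = time.monotonic()
    for _ in range(n):
        subprocess.run(["/bin/echo", "hi"],
                       stdout=subprocess.DEVNULL, check=True)
    return (time.monotonic() - start) / n

if __name__ == "__main__":
    print(f"~{mean_spawn_cost() * 1000:.2f} ms per exec")
```

Multiplying the per-exec floor by the number of commands a 192-network setup issues gives a feel for why coalescing alone may not be enough.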
Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core
+1 On Thu, Mar 6, 2014 at 5:47 AM, Jason Rist jr...@redhat.com wrote: [snip] -- Regards, Tihomir Trifonov
Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core
On 03/06/2014 04:47 AM, Jason Rist wrote: [snip] As someone who benefits from his insightful reviews, I second the nomination. I agree, Radomir has been doing excellent reviews and patches in both projects.
Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue
Hi, On Thu, Mar 06, 2014 at 10:44:25AM +0800, ZhiQiang Fan wrote: I already checked the stable/havana and master branches via devstack; the problem is still in havana, but the master branch is not affected. I think it is important to fix it for havana too, since some high-level applications may depend on the returned faultstring. Currently, I'm not sure whether the master branch fixed it in the pecan or wsme module, or in ceilometer itself. Is there anyone who can help with this problem? This is a duplicate of bug https://bugs.launchpad.net/ceilometer/+bug/1260398 This one has already been fixed; I have marked havana as affected, so we can think about it if we cut a new havana version. Feel free to prepare the backport. Regards, -- Mehdi Abaakouk mail: sil...@sileht.net irc: sileht
Re: [openstack-dev] [Neutron][ML2]
Hi Nader, Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one plugin in another. I'm guessing you probably want to write a driver that ML2 can use, though it's hard to tell from the information you've provided what you're trying to do. Best, Aaron On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti nader.laho...@gmail.com wrote: [snip]
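Nader's subclassing idea works mechanically, even though - as Aaron notes - a mechanism driver is the usual ML2 extension point. A stand-alone sketch of the inheritance shape; the stub class here merely stands in for neutron's real `neutron.plugins.ml2.plugin.Ml2Plugin`, and the `core_plugin` path in the comment is hypothetical.

```python
class Ml2Plugin:
    """Stub standing in for neutron.plugins.ml2.plugin.Ml2Plugin."""
    def create_network(self, context, network):
        return {"name": network["name"], "plugin": "ml2"}

class MyMl2CorePlugin(Ml2Plugin):
    """Custom core plugin layered on the stock ML2 behaviour."""
    def create_network(self, context, network):
        net = super().create_network(context, network)
        net["audited"] = True       # example of behaviour added on top
        return net

# In neutron.conf one would then point core_plugin at the subclass, e.g.
# core_plugin = mymodule.MyMl2CorePlugin   (module path is hypothetical)
```

The trade-off is the one Aaron hints at: a subclass couples you to ML2's internals, whereas a mechanism driver plugs into a stable extension interface.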