Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-05 Thread Luke Gorrie
On 4 March 2014 17:07, Jay Pipes jaypi...@gmail.com wrote:

 I would advise dropping the custom CI setup and going with a method that
 specifically uses the upstream openstack-dev/devstack and
 openstack-infra/devstack-gate projects.


This sounds great to me. Thank you for all the work you are doing on
simplifying the baseline CI setup.

The ideal situation from my perspective would be to use a standard upstream
script to create a working CI that can make real tempest runs and vote with
my account based on the results (to the sandbox initially). Then I'd branch
this script to do the setup that's specific for my driver and to
selectively disable tests that are not relevant (if needed). Then once it's
looking good the votes could move from the sandbox to the mainline.

In future cycles, when the CI requirements change, I would pull the new
upstream scripts and rebase my branch onto them. This could perhaps run in
parallel, voting into the sandbox, before taking over the mainline work.

This seems to be the direction that you are taking things and that sounds
wonderful to me.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-05 Thread Khanh-Toan Tran
Well, I use an 8-core, 128G RAM physical host :) I did not see much CPU
consumption for these 100 containers, so I suspect we could use fewer
resources.

 -Original Message-
 From: David Peraza [mailto:david_per...@persistentsys.com]
 Sent: Monday, March 3, 2014 20:27
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes
 for scheduler testing

 Thanks Khanh,

 I see the potential issue with using threads. Thanks for pointing it out.
 On using containers, that sounds like a cool configuration, but it should
 have a bigger footprint on the host resources than just a separate service
 instance like I'm doing. I have to admit that 100 fake computes per
 physical host is good, though. How big is your physical host? I'm running
 a 4 Gig, 4 CPU VM. I suspect your physical system is much better equipped.

 Regards,
 David Peraza | OpenStack Solutions Architect
 david_per...@persistentsys.com | Cell: (305)766-2520
 Persistent Systems Inc. | Partners in Innovation | www.persistentsys.com

 -Original Message-
 From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
 Sent: Tuesday, February 25, 2014 3:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
nodes
 for scheduler testing

   I could do that, but I think I need to be able to scale more without
   using this many resources. I would like to simulate a cloud of 100,
   maybe 1000, compute nodes that do nothing (Fake driver); this should
   not take this much memory. Does anyone know of a more efficient way to
   simulate many computes? I was thinking of changing the Fake driver to
   report many compute services in different threads instead of having
   to spawn a process per compute service. Any other ideas?

 I'm not sure using threads is a good idea. We need a dedicated resource
 pool for each compute. If the threads share the same resource pool, then
 every new VM will change the available resources on all computes, which
 may lead to unexpected and unpredictable scheduling results. For instance,
 RamWeigher may return the same compute twice instead of spreading, because
 each time it finds that the computes have the same free_ram.
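Toan's spreading concern can be sketched with a toy weigher (illustrative only; this is not nova's actual RamWeigher code): a "pick the host with most free RAM" rule spreads correctly with per-compute pools, but stacks everything on one host when all computes report from a shared pool.

```python
# Toy illustration (not nova's real RamWeigher) of why a shared resource
# pool breaks spreading: the weigher always picks the most-free-RAM host.

def pick_host(hosts, free_ram_of):
    return max(hosts, key=free_ram_of)

# Case 1: each fake compute has its own resource pool.
hosts = [{'name': 'fake1', 'free_ram': 1024},
         {'name': 'fake2', 'free_ram': 1024}]
placements = []
for _ in range(2):
    h = pick_host(hosts, lambda x: x['free_ram'])
    h['free_ram'] -= 512                 # only the chosen host changes
    placements.append(h['name'])

# Case 2: threads share one pool, so every host reports the same free_ram.
shared = {'free_ram': 1024}
hosts2 = [{'name': 'fake1', 'pool': shared},
          {'name': 'fake2', 'pool': shared}]
placements2 = []
for _ in range(2):
    h = pick_host(hosts2, lambda x: x['pool']['free_ram'])
    h['pool']['free_ram'] -= 512         # every host sees the decrement
    placements2.append(h['name'])

print(placements, placements2)
```

With separate pools the second VM lands on the other host; with a shared pool the weigher sees identical free_ram everywhere and places both VMs on the first host.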

 Using compute inside LXC, I created 100 computes per physical host. Here
 is what I did; it's very simple:
   - Create an LXC container on a logical volume
   - Install a fake nova-compute inside the LXC
   - Make a boot script that modifies its nova.conf to use its own IP
     address and starts nova-compute
   - Using the LXC above as the master, clone as many computes as you like!

 (Note that while cloning the LXC, the nova.conf is copied with the
 master's IP address; that's why we need the boot script.)
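The per-clone boot fix-up Toan describes could be as small as a config rewrite plus an exec. A hypothetical sketch (the my_ip option name, the file paths, and the exec line are assumptions for illustration, not his actual script):

```python
# Sketch of a per-clone boot script: rewrite the IP that was copied from
# the master LXC's nova.conf, then start nova-compute. Option name (my_ip)
# and paths are assumptions for illustration.
import re

def fix_nova_conf(conf_text, new_ip):
    """Replace any existing my_ip line with this container's own address."""
    fixed, count = re.subn(r'(?m)^my_ip\s*=.*$', 'my_ip = %s' % new_ip,
                           conf_text)
    if count == 0:
        fixed += '\nmy_ip = %s' % new_ip   # option absent: append it
    return fixed

master_conf = "[DEFAULT]\nhost = fake001\nmy_ip = 10.0.0.11\n"
print(fix_nova_conf(master_conf, '10.0.0.42'))
# A real boot script would then exec the service, e.g.:
# os.execvp('nova-compute', ['nova-compute', '--config-file',
#                            '/etc/nova/nova.conf'])
```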

 Best regards,

 Toan


  -Original Message-
  From: David Peraza [mailto:david_per...@persistentsys.com]
  Sent: Monday, February 24, 2014 21:13
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
  nodes for scheduler testing
 
  Thanks John,
 
  I also think it is a good idea to test the algorithm at the unit test
  level, but I would like to try it out over AMQP as well, that is, with
  processes and threads talking to each other over RabbitMQ or Qpid. I'm
  trying to test performance as well.
 
  Regards,
  David Peraza
 
  -Original Message-
  From: John Garbutt [mailto:j...@johngarbutt.com]
  Sent: Monday, February 24, 2014 11:51 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute
 nodes
  for scheduler testing
 
  On 24 February 2014 16:24, David Peraza
  david_per...@persistentsys.com
  wrote:
   Hello all,
  
   I have been trying some new ideas on the scheduler and I think I'm
   reaching a resource issue. I'm running 6 compute services right now on
   my 4 CPU, 4 Gig VM, and I started to get some memory allocation issues.
   Keystone and Nova are already complaining there is not enough memory.
   The obvious solution to add more candidates is to get another VM and
   set up another 6 fake compute services.
   I could do that, but I think I need to be able to scale more without
   using this many resources. I would like to simulate a cloud of 100,
   maybe 1000, compute nodes that do nothing (Fake driver); this should
   not take this much memory. Does anyone know of a more efficient way to
   simulate many computes? I was thinking of changing the Fake driver to
   report many compute services in different threads instead of having
   to spawn a process per compute service. Any other ideas?
 
   It depends what you want to test, but I was able to look at tuning the
   filters and weights using the test at the end of this file:

   https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py
 
  Cheers,
  John
 
  

Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014
00:32:08:

 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 00:34
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Hi Thomas, Zane,

 Thank you for bringing TOSCA into the discussion. I think this is an
 important topic, as it will help us find better alignment or even a
 future merge of Murano DSL and Heat templates. Murano DSL uses a YAML
 representation too, so we can easily merge or use constructions from
 Heat and probably any other YAML-based TOSCA formats.

 I will be glad to join TOSCA TC. Is there any formal process for that?

The first part is that your company must be a member of OASIS. If that is
the case, I think you can simply go to the TC page [1] and click a button
to join the TC. If your company is not yet a member, you could get in touch
with the TC chairs Paul Lipton and Simon Moser and ask for the best next
steps. We recently had people from GigaSpaces join the TC, and since they
are also doing a very TOSCA-aligned implementation in Cloudify, their input
will probably help a lot to advance TOSCA.


 I would also like to use this opportunity to start a conversation
 with the Heat team about the Heat roadmap and feature set. As Thomas
 mentioned in his previous e-mail, the TOSCA topology story is largely
 covered by HOT. At the same time, there are entities like Plans which
 are covered by Murano. We had a discussion about bringing workflows to
 the Heat engine before the HK summit, and it looks like the Heat team
 has no plans to bring workflows into Heat. That is actually why we
 mentioned the Orchestration program as a potential place for Murano DSL,
 as Heat+Murano together will cover everything which is defined by TOSCA.

I remember the discussions about whether to bring workflows into Heat or
not. My personal opinion is that workflows are probably out of the scope of
Heat (i.e. everything but the derived orchestration flows the Heat engine
implements). So there could well be a layer on top of Heat that lets Heat
deal with all topology-related declarative business and adds workflow-based
orchestration around it. TOSCA could be a way to describe the respective
overarching models and then hand the different processing tasks to the
right engine to deal with it.


 I think the TOSCA initiative can be a great place to collaborate. I
 think it will then be possible to use the Simplified TOSCA format for
 application descriptions, as TOSCA is intended to provide such
 descriptions.

 Is there a team driving the TOSCA implementation in the OpenStack
 community? I feel that such a team is necessary.

We started to implement a TOSCA YAML to HOT converter and our team member
Sahdev (IRC spzala) has recently submitted code for a new stackforge
project [2]. This is very initial, but could be a point to collaborate.

[1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
[2] https://github.com/stackforge/heat-translator

Regards,
Thomas


 Thanks
 Georgy


 On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier
thomas.spatz...@de.ibm.com
  wrote:
 Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 04/03/2014 23:20
  Subject: Re: [openstack-dev] Incubation Request: Murano
 
  On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
  
  It so happens that the OASIS TOSCA technical committee is working as
  we speak on a TOSCA Simple Profile that will hopefully make things
  easier to use and includes a YAML representation (the latter is great
  IMHO, but the key to being able to do it is the former). Work is still
  at a relatively early stage and in my experience they are very much
  open to input from implementers.

 Nice, I was probably also writing a mail with this information at about
the
 same time :-)
 And yes, we are very much interested in feedback from implementers and
open
 to suggestions. If we can find gaps and fill them with good proposals,
now
 is the right time.

 
  I would strongly encourage you to get involved in this effort (by
  joining the TOSCA TC), and also to architect Murano in such a way that
  it can accept input in multiple formats (this is something we are
making
  good progress toward in Heat). Ideally the DSL format for Murano+Heat
  should be a trivial translation away from the relevant parts of the
YAML
  representation of TOSCA Simple Profile.

 Right, having a straight-forward translation would be really desirable.
The
 way to get there can actually be two-fold: (1) any feedback we get from
the
 Murano folks on the TOSCA simple profile and YAML can help us to make
TOSCA
 capable of addressing the right use cases, and (2) on the other hand make
 sure the implementation goes in a direction that is in line with what
TOSCA
 YAML will look like.

 
  cheers,
  Zane.
 

Re: [openstack-dev] [nova] Questions about guest NUMA and memory binding policies

2014-03-05 Thread Wangpan
Hi Liuji, 

I'm the owner of the bp support-libvirt-vcpu-topology.
There are four main reasons why I did not continue working on it:
1. The design proposal has not been confirmed by nova core developers
2. This bp was not accepted in the Icehouse development stage
3. Daniel expects that this bp should be considered together with the other
one, numa-aware-cpu-binding, but I have no idea how to do this for now
4. I do not have enough time to work on this at the moment

2014-03-05



Wangpan



From: Liuji (Jeremy) jeremy@huawei.com
Sent: 2014-03-05 15:02
Subject: Re: [openstack-dev] [nova] Questions about guest NUMA and memory
binding policies
To: OpenStack Development Mailing List (not for usage
questions) openstack-dev@lists.openstack.org
Cc: Luohao (brian) brian.luo...@huawei.com, Yuanjing (D) yj.y...@huawei.com

Hi Steve, 

Thanks for your reply. 

I didn't know why the blueprint numa-aware-cpu-binding seemed to have made
no more progress until I read the two mails mentioned in your mail.

The use case analysis in those mails is very clear; it also covers what I
am concerned about.
I agree that we shouldn't expose the pCPU/vCPU mapping to the end user,
and how to provide this capability to the user needs more consideration.

The use cases I care most about are exclusive pCPU use (pCPU:vCPU = 1:1)
and guest NUMA.
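For reference, a sketch of the libvirt domain XML elements such a feature would ultimately have to emit for guest NUMA plus strict memory binding. This is illustrative only: the element names follow libvirt's documented schema (`<cpu><numa><cell/>`, `<numatune>`), but how nova should expose this to users is exactly the open design question.

```python
# Illustrative render of libvirt guest-NUMA + memory-binding XML.
# Element names follow libvirt's schema; the helper itself is hypothetical.
def guest_numa_xml(cells, host_nodeset):
    """cells: list of (vcpu_range, memory_kib) tuples, one per guest cell.
    host_nodeset: host NUMA nodes the guest memory is pinned to."""
    cell_lines = ''.join(
        "    <cell cpus='%s' memory='%d'/>\n" % (cpus, mem_kib)
        for cpus, mem_kib in cells)
    return ("<cpu>\n  <numa>\n%s  </numa>\n</cpu>\n"
            "<numatune>\n  <memory mode='strict' nodeset='%s'/>\n</numatune>"
            % (cell_lines, host_nodeset))

# Two guest cells of 2 vCPUs / 1 GiB each, memory bound to host nodes 0-1.
print(guest_numa_xml([('0-1', 1048576), ('2-3', 1048576)], '0-1'))
```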


Thanks, 
Jeremy Liu 


 -Original Message- 
 From: Steve Gordon [mailto:sgor...@redhat.com] 
 Sent: Tuesday, March 04, 2014 10:29 AM 
 To: OpenStack Development Mailing List (not for usage questions) 
 Cc: Luohao (brian); Yuanjing (D) 
 Subject: Re: [openstack-dev] [nova] Questions about guest NUMA and memory 
 binding policies 
  
 - Original Message - 
   Hi, all

   I searched the current blueprints and old mails in the mailing list, but
   found nothing about guest NUMA or setting memory binding policies.
   I just found a blueprint about vCPU topology and a blueprint about CPU
   binding.

   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
   https://blueprints.launchpad.net/nova/+spec/numa-aware-cpu-binding

   Is there any plan for guest NUMA and memory binding policy settings?

   Thanks,
   Jeremy Liu
  
 Hi Jeremy, 
  
 As you've discovered, there have been a few attempts at getting some work
 started in this area. Dan Berrange outlined some of the possibilities in a
 previous mailing list post [1], though it's multi-faceted; there are a lot
 of different ways to break it down. If you dig into the details you will
 note that the support-libvirt-vcpu-topology blueprint in particular got a
 fair way along, but there were some concerns noted in the code reviews and
 on the list [2] around the design.

 It seems like this is an area in which there is a decent amount of
 interest, and we should work on the list to flesh out a design proposal;
 ideally this would be presented for further discussion at the Juno design
 summit. What are your particular needs/desires for a NUMA-aware nova
 scheduler?
  
 Thanks, 
  
 Steve 
  
 [1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019715.html
 [2] http://lists.openstack.org/pipermail/openstack-dev/2013-December/022940.html
  


Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Hi Stan,

thanks for sharing your thoughts about Murano and relation to TOSCA. I have
added a few comments below.

 From: Stan Lagun sla...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 00:51
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Hi all,

 Completely agree with Zane. Collaboration with the TOSCA TC is the way to
 go, as Murano is very close to TOSCA. Think Murano = 0.9 * TOSCA + UI
 + OpenStack services integration.

 Let me share my thoughts on TOSCA as I read all TOSCA docs and I'm
 also the author of initial Murano DSL design proposal so I can
 probably compare them.

 We initially considered just implementing TOSCA before going with our
 own DSL. There was no YAML TOSCA out there at that time, just the XML
 version.

 So here's why we wrote our own DSL:

 1. TOSCA is very complex and verbose. Considering there is no
 production-ready tooling for TOSCA, users would have to type all
 those tons of XML tags and namespaces, and TOSCA XMLs are really hard
 to read and write. No one is going to do this, especially outside of the
 Java-enterprise world

Right, that's why we are doing the simple profile and YAML work now to
overcome those adoption issues.

 2. TOSCA has no workflow language. TOSCA draft states that the
 language is indeed needed and recommends using BPEL or BPMN for that
matter.

Right, the goal of TOSCA was not to define a new workflow language but to
refer to existing ones. This does not mean, of course that other languages
than BPEL or BPMN cannot be used. We still consider standardization of such
a language out of scope of the TC, but if there is some widely adopted flow
language being implemented, e.g. in the course of Murano, I could imagine
that a community could use such a simpler language in an OpenStack
environment. Ideally though, such a simpler workflow language would be
translatable to a standard language like BPMN (or a subset of it) so those
who have a real process engine can consume the flow descriptions.

 Earlier versions of Murano showed that some sort of workflow
 language (declarative, imperative, whatever) is absolutely required
 for non-trivial cases. If you don't have a workflow language then you
 have to hard-code a lot of knowledge into the engine in Python. But the
 whole idea of the AppCatalog was that users upload (share) their
 application templates that contain application-specific maintenance/
 deployment code that is run on a common shared server (not in any
 particular VM) and is thus capable of orchestrating all activities that
 take place on the different VMs belonging to a given application
 (for complex applications with a typical enterprise SOA architecture).
 Besides VMs, applications can talk to OpenStack services like Heat,
 Neutron, Trove and 3rd-party services (DNS registration, NNTP,
 license activation service etc.), especially with Heat, so that an
 application can have its VMs and other IaaS resources. There is a
 similar problem in Heat - you can express most of the basic things
 in HOT, but once you need something really complex like accessing an
 external API, custom load balancing or anything tricky, you need to
 resort to Python and write a custom resource plugin. And then you
 are required to have root access to the engine to install that plugin.
 This is not a solution for Murano, as in Murano any user can upload an
 application manifest at any time without affecting the running system
 and without admin permissions.

 Now, going back to TOSCA, the problem with TOSCA workflows is that they
 are not part of the standard. There is no standardized way for BPEL to
 access TOSCA attributes, or for the 2 systems to interact. This
 alone makes any 2 TOSCA implementations incompatible with each other,
 rendering the whole idea of a standard useless. It is not a standard if
 there is no compatibility.

We have been working on what we call a plan portability API that
describes what APIs a TOSCA container has to support so that portable flows
can access topology information. During the v1.0 time frame, though, we
focused on the declarative part (i.e. the topology model). But yes, I agree
that this part needs to be done so that plans also become portable. If you
have experience in this area, it would be great to collaborate and see if
we can feed your input into the TOSCA effort.

 And again, BPEL is a heavy XML language that you don't want to have in
 OpenStack. Trust me, I spent significant time studying it. And while
 there is a YAML version of TOSCA that is much more readable than the XML
 one, there is no such thing for BPEL. And I'm not aware of any
 adequate replacement for it

I agree that BPEL and BPMN are very heavy and hard to use without tooling,
so no objection on looking at a lightweight alternative in the OpenStack
orchestration context.

 3. It seems like nobody is really using TOSCA. The TOSCA standard defines
 an exact TOSCA package format. TOSCA was designed so that people can
 share those packages (CSARs as TOSCA calls 

Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Thomas Spatzier
Forgot to provide the email addresses of Paul and Simon in my last mail:

paul.lip...@ca.com
smo...@de.ibm.com

Regards,
Thomas

 From: Thomas Spatzier/Germany/IBM@IBMDE
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 05/03/2014 10:21
 Subject: Re: [openstack-dev] Incubation Request: Murano

 Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014
 00:32:08:

  From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 05/03/2014 00:34
  Subject: Re: [openstack-dev] Incubation Request: Murano
 
  Hi Thomas, Zane,
 
  Thank you for bringing TOSCA into the discussion. I think this is an
  important topic, as it will help us find better alignment or even a
  future merge of Murano DSL and Heat templates. Murano DSL uses a YAML
  representation too, so we can easily merge or use constructions from
  Heat and probably any other YAML-based TOSCA formats.
 
  I will be glad to join TOSCA TC. Is there any formal process for that?

 The first part is that your company must be a member of OASIS. If that is
 the case, I think you can simply go to the TC page [1] and click a button
 to join the TC. If your company is not yet a member, you could get in
 touch with the TC chairs Paul Lipton and Simon Moser and ask for the best
 next steps. We recently had people from GigaSpaces join the TC, and since
 they are also doing a very TOSCA-aligned implementation in Cloudify, their
 input will probably help a lot to advance TOSCA.

 
  I would also like to use this opportunity to start a conversation
  with the Heat team about the Heat roadmap and feature set. As Thomas
  mentioned in his previous e-mail, the TOSCA topology story is largely
  covered by HOT. At the same time, there are entities like Plans which
  are covered by Murano. We had a discussion about bringing workflows to
  the Heat engine before the HK summit, and it looks like the Heat team
  has no plans to bring workflows into Heat. That is actually why we
  mentioned the Orchestration program as a potential place for Murano DSL,
  as Heat+Murano together will cover everything which is defined by TOSCA.

 I remember the discussions about whether to bring workflows into Heat or
 not. My personal opinion is that workflows are probably out of the scope
 of Heat (i.e. everything but the derived orchestration flows the Heat
 engine implements). So there could well be a layer on top of Heat that
 lets Heat deal with all topology-related declarative business and adds
 workflow-based orchestration around it. TOSCA could be a way to describe
 the respective overarching models and then hand the different processing
 tasks to the right engine to deal with it.

 
  I think the TOSCA initiative can be a great place to collaborate. I
  think it will then be possible to use the Simplified TOSCA format for
  application descriptions, as TOSCA is intended to provide such
  descriptions.

  Is there a team driving the TOSCA implementation in the OpenStack
  community? I feel that such a team is necessary.

 We started to implement a TOSCA YAML to HOT converter and our team member
 Sahdev (IRC spzala) has recently submitted code for a new stackforge
 project [2]. This is very initial, but could be a point to collaborate.

 [1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
 [2] https://github.com/stackforge/heat-translator

 Regards,
 Thomas

 
  Thanks
  Georgy
 

  On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier
 thomas.spatz...@de.ibm.com
   wrote:
  Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
   From: Zane Bitter zbit...@redhat.com
   To: openstack-dev@lists.openstack.org
   Date: 04/03/2014 23:20
   Subject: Re: [openstack-dev] Incubation Request: Murano
  
   On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
   
   It so happens that the OASIS TOSCA technical committee is working as
   we speak on a TOSCA Simple Profile that will hopefully make things
   easier to use and includes a YAML representation (the latter is great
   IMHO, but the key to being able to do it is the former). Work is
   still at a relatively early stage and in my experience they are very
   much open to input from implementers.

  Nice, I was probably also writing a mail with this information at about
 the
  same time :-)
  And yes, we are very much interested in feedback from implementers and
 open
  to suggestions. If we can find gaps and fill them with good proposals,
 now
  is the right time.
 
  
   I would strongly encourage you to get involved in this effort (by
   joining the TOSCA TC), and also to architect Murano in such a way
that
   it can accept input in multiple formats (this is something we are
 making
   good progress toward in Heat). Ideally the DSL format for Murano+Heat
   should be a trivial translation away from the relevant parts of the
 YAML
   representation of TOSCA Simple Profile.

  Right, having a 

[openstack-dev] ALIAS for Domain Quota Management in Nova

2014-03-05 Thread Vinod Kumar Boppanna
Hi,

I have implemented the Nova V2 and V3 APIs for Domain Quota Management. But
what I want to know is whether there is any standard for the ALIAS used in
the URLs.

For example, I thought of using domain-quota-sets or os-domain-quota-sets.
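For what it's worth, nova's existing contrib extensions declare the URL segment through an alias attribute on the extension descriptor. A standalone sketch of that pattern (the real class would subclass nova.api.openstack.extensions.ExtensionDescriptor, and the namespace URL and dates here are illustrative):

```python
# Standalone sketch of the descriptor pattern nova contrib extensions use;
# the real class subclasses extensions.ExtensionDescriptor. The "os-"
# prefix matches existing extensions such as os-quota-sets.
class DomainQuotaSets(object):
    """Domain-level quota management extension (illustrative)."""
    name = "DomainQuotaSets"
    alias = "os-domain-quota-sets"   # URL: /v2/{tenant}/os-domain-quota-sets
    namespace = ("http://docs.openstack.org/compute/ext/"
                 "domain-quota-sets/api/v2")
    updated = "2014-03-05T00:00:00Z"

print(DomainQuotaSets.alias)   # -> os-domain-quota-sets
```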

Can anybody tell me whether I can use an ALIAS like the above (I prefer
os-domain-quota-sets)?

Thanks & Regards,
Vinod Kumar Boppanna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-05 Thread Nikolay Makhotkin
I thought about it today and I have a good name for the package (instead of
'mistral/model').

What do you think about naming it 'mistral/workbook'? I.e., it means that
it contains modules for working with the workbook representation - tasks,
services, actions and workflow.

This way we are able to get rid of any confusion.


Best Regards,
Nikolay

On Wed, Mar 5, 2014 at 8:50 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 I think we forgot to point to the commit itself. Here it is:
 https://review.openstack.org/#/c/77126/

 Manas, can you please provide more details on your suggestion?

 For now let me just describe the background of Nikolay's question.

 Basically, we are talking about how we work with data inside
 Mistral. So far, for example, if a user sent Mistral a request to start a
 workflow, then Mistral would do the following:

    - Get the workbook DSL (YAML) from the DB (given that it has already
    been persisted earlier).
    - Load it into a dictionary-like structure using the standard 'yaml'
    library.
    - Based on this dictionary-like structure, create all the DB objects
    necessary to track the state of the workflow execution and individual
    tasks.
    - Perform all the necessary logic in the engine and so on. The important
    thing here is that DB objects contain the corresponding DSL snippets as
    they are described in DSL (e.g. tasks have a property task_dsl) to reduce
    the complexity of the relational model that we have in the DB. Otherwise
    it would be really complicated and most of the queries would contain lots
    of joins. An example of a non-trivial relation in DSL is task -> action
    name -> service -> service actions -> action; as you can see, it would be
    hard to navigate to an action in the DB from a task if our relational
    model matched what we have in the DSL. This approach, however, leads to
    the problem of addressing DSL properties using hardcoded strings which
    are spread across the code, and that brings lots of pain when doing
    refactoring or trying to understand the structure of the model we
    describe in DSL, it doesn't allow us to do validation easily, and so on.
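The steps above can be sketched roughly as follows, with plain dicts standing in for Mistral's DB objects (all names here are illustrative, not Mistral's actual API or schema):

```python
import yaml

# Hypothetical workbook snippet (structure illustrative, not Mistral's schema).
WORKBOOK_DSL = """
Workflow:
  tasks:
    create_vm:
      action: Nova:create-vm
"""

def start_workflow(dsl_text):
    book = yaml.safe_load(dsl_text)        # load DSL into a dict structure
    tasks = book['Workflow']['tasks']
    # Each stored task keeps its raw DSL snippet (task_dsl), so the
    # relational model stays flat instead of mirroring the DSL's nesting.
    return [{'name': name, 'task_dsl': snippet, 'state': 'IDLE'}
            for name, snippet in tasks.items()]

db_tasks = start_workflow(WORKBOOK_DSL)
print(db_tasks)
```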


 So far we've called what we have in the DB the "model", and we've called
 the dictionary structure coming from DSL just "dsl". So if we got a part
 of the structure related to a task we would call it dsl_task.

 So what Nikolay is doing now is reworking the approach to how we work
 with DSL. Now we assume that after we have parsed a workbook DSL we get
 some model, so that we don't use "dsl" in the code anywhere. This model
 basically describes the structure of what we have in DSL, and that would
 allow us to address the problems I mentioned above (hardcoded strings are
 replaced with access methods, we clearly see the structure of what we're
 working with, we can easily validate it, and so on). So when we need to
 access some DSL properties, we need to get the workbook DSL from the DB,
 build this model out of it, and continue to work with it.

 Long story short, this model parsed from DSL is not the model we store in
 the DB, but they're both called "model", which may be confusing. To me
 this non-DB model looks more like a domain model or something like that.
 So the questions I would ask ourselves here:

    - Is the approach itself reasonable?
    - Do we have better ideas on how to work with DSL? A good mental
    exercise here would be to imagine that we have more than one DSL, not
    only YAML but, say, XML. How would that change the picture?
    - How can we clearly distinguish between these two models so that it
    isn't confusing?
    - Do we have better naming in mind?


 Thanks.

 Renat Akhmerov
 @ Mirantis Inc.



 On 05 Mar 2014, at 08:56, Manas Kelshikar ma...@stackstorm.com wrote:

 Since the renaming is for types in mistral.model.*, I am thinking we
 suffix with Spec, e.g.

 TaskObject -> TaskSpec
 ActionObject -> ActionSpec, and so on.

 The Spec suggests that it is a specification of the final object that
 ends up in the DB, and not the actual object. Multiple actual objects can
 be derived from these Spec objects, which fits well with the current
 paradigm. Thoughts?


 On Mon, Mar 3, 2014 at 9:43 PM, Manas Kelshikar ma...@stackstorm.comwrote:

 Hi Nikolay -

 Is your concern that mistral.db.sqlalchemy.models.* and mistral.model.*
 will lead to confusion or something else?

 IMHO, as per your change, "model" seems like the appropriate usage, while
 what is stored in the DB is also a model. If we pick appropriate names to
 distinguish the nature of the objects, we should be able to avoid any
 confusion, and whether or not "model" appears in the module name should
 not matter much.

 Thanks,
 Manas


 On Mon, Mar 3, 2014 at 8:43 AM, Nikolay Makhotkin 
 nmakhot...@mirantis.com wrote:

 Hi, team!

 Please look at the commit.

 The module 'mistral/model' is now responsible for the object model
 representation which is used for accessing properties of actions, tasks,
 etc.

 We have a naming problem - it looks like we should rename the module
 'mistral/model' since we have DB models 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Sean Dague
On 03/04/2014 10:44 PM, Christopher Yeoh wrote:
 On Tue, 04 Mar 2014 16:09:21 -0800
 Dan Smith d...@danplanet.com wrote:
 
 What I'd like to do next is work through a new proposal that
 includes keeping both v2 and v3, but with a new added focus of
 minimizing the cost.  This should include a path away from the dual
 code bases and to something like the v2.1 proposal.

 I think that the most we can hope for is consensus on _something_. So,
 the thing that I'm hoping would mostly satisfy the largest number of
 people is:

 - Leaving v2 and v3 as they are today in the tree, and with v3 still
   marked experimental for the moment
 - We start on a v2 proxy to v3, with the first goal of fully
   implementing the v2 API on top of v3, as judged by tempest
 - We define the criteria for removing the current v2 code and marking
   the v3 code supported as:
  - The v2 proxy passes tempest
  - The v2 proxy has sign-off from some major deployers as something
they would be comfortable using in place of the existing v2 code
  - The v2 proxy seems to us to be lower maintenance and otherwise
preferable to either keeping both, breaking all our users, deleting
v3 entirely, etc
 - We keep this until we either come up with a proxy that works, or
   decide that it's not worth the cost, etc.

 I think the list of benefits here are:

 - Gives the v3 code a chance to address some of the things we have
   identified as lacking in both trees
 - Gives us a chance to determine if the proxy approach is reasonable
 or a nightmare
 - Gives a clear go/no-go line in the sand that we can ask deployers to
   critique or approve

 It doesn't address all of my concerns, but at the risk of just having
 the whole community split over this discussion, I think this is
 probably (hopefully?) something we can all get behind.

 Thoughts?

I think this is a solid plan to move forward on, and gives the proxy
idea a chance to prove itself.
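
For readers skimming the thread: the proxy idea boils down to accepting v2-shaped requests, handing them to the v3 code, and reshaping the v3 responses so v2 clients see no change. A toy sketch of that translation layer — all key names here are invented for illustration; the real proxy would sit in nova's WSGI stack and be judged by tempest:

```python
# Toy model of the proposed v2 proxy: translate a v2 request body into its
# v3 equivalent, and map a v3 response back into the v2 shape.
# The key names below are invented for illustration only.
V2_TO_V3_KEYS = {'imageRef': 'image_ref', 'flavorRef': 'flavor_ref'}
V3_TO_V2_KEYS = {v: k for k, v in V2_TO_V3_KEYS.items()}


def translate_v2_request(body):
    """Rewrite a v2 request body into its v3 equivalent."""
    return {V2_TO_V3_KEYS.get(k, k): v for k, v in body.items()}


def translate_v3_response(body):
    """Rewrite a v3 response body back into the v2 shape."""
    return {V3_TO_V2_KEYS.get(k, k): v for k, v in body.items()}
```

The attraction is that only the thin translation tables need maintaining, rather than two full API trees; whether real v2/v3 differences stay this mechanical is exactly what the tempest-judged POC would tell us.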

 So I think this is a good compromise to keep things moving. Some aspects
 that we'll need to consider:
 
 - We need more tempest coverage of Nova because it doesn't cover all of
   the Nova API yet. We've been working on increasing this as part of
   the V3 API work anyway (and V2 support is an easyish side effect).
   But more people willing to write tempest tests are always welcome :-)

100% agreed. I *highly* encourage any groups that are CDing OpenStack to
get engaged in Tempest, because that's our upstream mechanism for
blocking breaking code. Enhancements on the Nova v2 testing will ensure
that v2 surface does not change in ways that are important to people.

 - I think in practice this will probably mean that V3 API is
   realistically only a K rather than J thing - just in terms of allowing
   a reasonable timeline to not only implement the v2 compat but get
   feedback from deployers.

 - I'm not sure how this affects how we approach the tasks work. Will
   need to think about that more.
 
 But this plan is certainly something I'm happy to support.

Agreed. If we are committing to this route, I would like to see both a
POC on the proxy, and early findings at Atlanta, so we can figure out if
this is crazy or sane, and really define the exit criteria. The point of
giving the proxy idea a chance, is the assumption that it's actually a
less overhead way to evolve our API, and still keep backwards compat.

But if we don't have some good indications of that being true in
Atlanta, then I don't want us fully committing to a long slog into the
unknown here.

So work is cut out for folks that want to head down this path. But I
think the early data in Atlanta will be incredibly valuable in figuring
out how we move forward.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Status of multi-attach-volume work

2014-03-05 Thread Niklas Widell
Hi
What is the current status of the work around multi-attach-volume [1]? We have 
some cluster-related use cases that would benefit from being able to attach a
volume to several instances.

[1] https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

Best regards
Niklas Widell
Ericsson AB


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-05 Thread Steven Hardy
On Tue, Mar 04, 2014 at 02:06:16PM -0800, Clint Byrum wrote:
 Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
  Hi all,
  
  As some of you know, I've been working on the instance-users blueprint[1].
  
  This blueprint implementation requires three new items to be added to the
  heat.conf, or some resources (those which create keystone users) will not
  work:
  
  https://review.openstack.org/#/c/73978/
  https://review.openstack.org/#/c/76035/
  
  So on upgrade, the deployer must create a keystone domain and domain-admin
  user, add the details to heat.conf, as already been done in devstack[2].
  
  The changes required for this to work have already landed in devstack, but
  it was discussed today and Clint suggested this may be unacceptable
  upgrade behavior - I'm not sure so looking for guidance/comments.
  
  My plan was/is:
  - Make devstack work
  - Talk to tripleo folks to assist in any transition (what prompted this
discussion)
  - Document the upgrade requirements in the Icehouse release notes so the
wider community can upgrade from Havana.
  - Try to give a heads-up to those maintaining downstream heat deployment
tools (e.g stackforge/puppet-heat) that some tweaks will be required for
Icehouse.
  
  However, some have suggested there may be an openstack-wide policy which
  requires people's old config files to continue working indefinitely on
  upgrade between versions - is this right?  If so, where is it documented?
  
 
 I don't think I said indefinitely, and I certainly did not mean
 indefinitely.
 
 What is required though, is that we be able to upgrade to the next
 release without requiring a new config setting.

So log a warning for one cycle, then it's OK to expect the config after
that?

I'm still unclear if there's an openstack-wide policy on this, as the whole
time-based release with release-notes (which all of openstack is structured
around and adheres to) seems to basically be an uncomfortable fit for folks
like tripleo who are trunk chasing and doing CI.

 Also as we scramble to deal with these things in TripleO (as all of our
 users are now unable to spin up new images), it is clear that it is more
 than just a setting. One must create domain users carefully and roll out
 a new password.

Such are the pitfalls of life at the bleeding edge ;)

Seriously though, apologies for the inconvenience - I have been asking for
feedback on these patches for at least a month, but clearly I should've
asked harder.

As was discussed on IRC yesterday, I think some sort of (initially non-voting)
feedback from tripleo CI to heat gerrit is pretty much essential given that
you're so highly coupled to us or this will just keep happening.

 What I'm suggesting is that we should instead _warn_ that the old
 behavior is being used and will be deprecated.
 
 At this point, out of urgency, we're landing fixes. But in the future,
 this should be considered carefully.

Ok, well I raised this bug:

https://bugs.launchpad.net/heat/+bug/1287980

So we can modify the stuff so that it falls back to the old behavior
gracefully and will solve the issue for folks on the time-based releases.
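
The fall-back-with-warning pattern discussed here is straightforward; a minimal sketch (the option name is illustrative and not necessarily the exact heat.conf key — the real fix lives behind bug 1287980):

```python
import warnings


# Sketch of the graceful fallback: if the new domain setting is absent
# from the config, warn and keep the legacy behavior for one cycle
# instead of failing outright.
def get_stack_user_domain(conf):
    """Return the configured domain, warning when falling back."""
    domain = conf.get('stack_user_domain')
    if domain is None:
        warnings.warn("stack_user_domain is not set; falling back to the "
                      "legacy keystone user behavior, which will require "
                      "this option in a future release",
                      DeprecationWarning)
    return domain
```

Deployers on time-based releases then get one full cycle of warnings in their logs before the option becomes mandatory.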

Hopefully we can work towards the tripleo gate feedback so next time this
is less of a surprise for all of us ;)

Steve



[openstack-dev] [Murano] Need to fix issues with OpenStack Global Requirements

2014-03-05 Thread Timur Nurlygayanov
Hi team,

We have some issues with the requirements for different Murano components:
https://docs.google.com/a/mirantis.com/spreadsheet/pub?key=0Aiup6hoNUUUedGt3cnJIMHAxbTlHdlFDZGhxLS1yRXc&output=html

I suggest to discuss these issues in etherpad:
https://etherpad.openstack.org/p/MuranoRequirementsToGlobalRequirements

Let's discuss how we will solve the issues with the different requirements,
and agree on a plan for fixing them: either add the new packages to the
OpenStack Global Requirements list, or remove the component from our
requirements, plus any additional work needed to publish some components
and prepare them for the Global Requirements list.

Please remember that we cannot just 'put them all on the Global
Requirements list'; an example of bad practice:
https://review.openstack.org/#/c/78158/


Thank you!
Any comments in etherpad are welcome :)


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Irena Berezovsky
Hi Robert, Sandhya,
I have pushed the reference implementation of SriovAgentMechanismDriverBase as
part of the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com] 
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development Mailing 
List (not for usage questions); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Robert,
It seems to me that your proposal duplicates many lines of code.
For agent-based MDs, I would prefer to inherit from
SimpleAgentMechanismDriverBase and add a verify method there for
supported_pci_vendor_info. A specific MD will pass the list of supported
pci_vendor_info. The 'try_to_bind_segment_for_agent' method will call
'supported_pci_vendor_info' and, if supported, continue with the binding flow.
Maybe instead of a decorator method, it should just be a utility method?
I think the check for supported vnic_type and pci_vendor_info should be
done in order to see whether the MD should bind the port. If the answer
is yes, no more checks are required.
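
Condensed, the utility-method alternative being suggested looks roughly like this — class name, port-dict shape, and the example vendor id are all invented for illustration, not actual ML2 code:

```python
# Sketch of the utility-method alternative: instead of decorating each
# handler, the base class exposes one check that
# try_to_bind_segment_for_agent() calls before attempting to bind.
class SriovBindingSupportMixin(object):
    supported_vnic_types = ('direct', 'macvtap')
    supported_pci_vendor_info = ('8086:10ca',)  # vendor_id:product_id

    def _is_port_supported(self, port):
        """Return True if the port's vnic_type and pci vendor match."""
        if port.get('vnic_type', 'normal') not in self.supported_vnic_types:
            return False
        profile = port.get('binding:profile') or {}
        return profile.get('pci_vendor_info') in self.supported_pci_vendor_info
```

This keeps all the skip logic in one place without the indirection of a decorator, which matches the "if the answer is yes, no more checks are required" flow above.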

Coming back to the question I asked earlier, for non-agent MD, how would you 
deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage 
questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper


@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic 

Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-03-05 Thread Petr Blaho
On Mon, Mar 03, 2014 at 09:19:34AM +0100, Radomir Dopieralski wrote:
 On 27/02/14 11:52, Petr Blaho wrote:
 
  I agree with you w/r/t to indirection when accessing data but I like the
  idea that when I look at json repsonse I see what type of resource it
  is. That wrapper element describes it. And I do not need to know what
  request (url, service, GET or POST...) triggered that output.
 
 That's data denormalization. What do you then do when the two sources of
 information don't agree? Also, do you actually need the json at all
 without knowing where it came from (not just from which api call, but
 which system and at what time)? I can't imagine such a situation.

Yeah, I agree with you that not wrapped json data is better solution.
I just liked how wrapped data looks.
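
For anyone joining the thread late, the two styles under comparison look roughly like this (made-up resource, not actual Tuskar API output):

```python
import json

# Wrapped: the resource type is repeated as a top-level key.
wrapped = json.dumps({"rack": {"id": 1, "name": "compute-rack-1"}})

# Unwrapped: the body is the resource itself; the request URL
# (e.g. /v1/racks/1) is what conveys the type.
unwrapped = json.dumps({"id": 1, "name": "compute-rack-1"})
```

The wrapper duplicates information the URL already carries, which is the denormalization concern raised above.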
 
 Finally, when you need to save a whole lot of json outputs and need to
 know where they come from, you traditionally put that information in the
 file name, no?
 
 -- 
 Radomir Dopieralski
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

My concern is that OpenStack does not seem to have a standard way to
format JSON output in its APIs, and it is left up to each project.

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [Tempest - Stress Test][qa] : implement a full SSH connection on ssh_floating.py and improve it

2014-03-05 Thread LELOUP Julien
I mean we still have the tempest/stress/actions/ and we could put something 
like that in there. In general I would like to discuss this topic in the next 
QA meeting..
@Julien: are you able to join the next meeting? It would be 22:00 UTC.

@Marc: so next Thursday (3/6/2014)? Yes, I can be there.

I have a last-minute hindrance preventing me from attending the next QA meeting.

I will try to attend the one at 17:00 UTC on Thursday, 13 March.


Best Regards,

Julien LELOUP
julien.lel...@3ds.com





Re: [openstack-dev] [Cinder] Status of multi-attach-volume work

2014-03-05 Thread Zhi Yan Liu
Hi,

We decided the multi-attach feature must be implemented as an extension to
core functionality in Cinder, but currently Cinder has no clear
extension support; IMO that is the biggest blocker now. The
other issues have been listed at
https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume#Comments_and_Discussion
as well. Probably we could get more inputs from Cinder cores.

thanks,
zhiyan

On Wed, Mar 5, 2014 at 8:19 PM, Niklas Widell
niklas.wid...@ericsson.com wrote:
 Hi
 What is the current status of the work around multi-attach-volume [1]? We
 have some cluster-related use cases that would benefit from being able to
 attach a volume to several instances.

 [1] https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

 Best regards
 Niklas Widell
 Ericsson AB



Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Christopher Lefelhocz
I like this plan as it addresses my primary concern of getting deployments
comfortable with the transition.

We don't mention SDKs in the plan.  It seems like getting at least one SDK
to use v3 would provide us additional data in the transition.  There is
clear risk associated with that, but it may help to work out any
additional issues that crop up.

Christopher

On 3/4/14 6:09 PM, Dan Smith d...@danplanet.com wrote:

 What I'd like to do next is work through a new proposal that includes
 keeping both v2 and v3, but with a new added focus of minimizing the
 cost.  This should include a path away from the dual code bases and to
 something like the v2.1 proposal.

I think that the most we can hope for is consensus on _something_. So,
the thing that I'm hoping would mostly satisfy the largest number of
people is:

- Leaving v2 and v3 as they are today in the tree, and with v3 still
  marked experimental for the moment
- We start on a v2 proxy to v3, with the first goal of fully
  implementing the v2 API on top of v3, as judged by tempest
- We define the criteria for removing the current v2 code and marking
  the v3 code supported as:
 - The v2 proxy passes tempest
 - The v2 proxy has sign-off from some major deployers as something
   they would be comfortable using in place of the existing v2 code
 - The v2 proxy seems to us to be lower maintenance and otherwise
   preferable to either keeping both, breaking all our users, deleting
   v3 entirely, etc
- We keep this until we either come up with a proxy that works, or
  decide that it's not worth the cost, etc.

I think the list of benefits here are:

- Gives the v3 code a chance to address some of the things we have
  identified as lacking in both trees
- Gives us a chance to determine if the proxy approach is reasonable or
  a nightmare
- Gives a clear go/no-go line in the sand that we can ask deployers to
  critique or approve

It doesn't address all of my concerns, but at the risk of just having
the whole community split over this discussion, I think this is probably
(hopefully?) something we can all get behind.

Thoughts?

--Dan



Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Jay Pipes
On Wed, 2014-03-05 at 05:43 +, Kenichi Oomichi wrote:
  -Original Message-
  From: Dan Smith [mailto:d...@danplanet.com]
  Sent: Wednesday, March 05, 2014 9:09 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
  
   What I'd like to do next is work through a new proposal that includes
   keeping both v2 and v3, but with a new added focus of minimizing the
   cost.  This should include a path away from the dual code bases and to
   something like the v2.1 proposal.
  
  I think that the most we can hope for is consensus on _something_. So,
  the thing that I'm hoping would mostly satisfy the largest number of
  people is:
  
  - Leaving v2 and v3 as they are today in the tree, and with v3 still
marked experimental for the moment
  - We start on a v2 proxy to v3, with the first goal of fully
implementing the v2 API on top of v3, as judged by tempest
  - We define the criteria for removing the current v2 code and marking
the v3 code supported as:
   - The v2 proxy passes tempest
   - The v2 proxy has sign-off from some major deployers as something
 they would be comfortable using in place of the existing v2 code
   - The v2 proxy seems to us to be lower maintenance and otherwise
 preferable to either keeping both, breaking all our users, deleting
 v3 entirely, etc
 
 Thanks, Dan.
 The above criteria is reasonable to me.
 
 Currently, Tempest does not check API responses in many cases.
 For example, Tempest does not check which API attributes (flavor, image,
 etc.) should be included in the response body of the create-a-server API.
 So we need to improve Tempest coverage from this viewpoint, to verify
 that no backward incompatibility happens in the v2.1 API.
 We have started this improvement in Tempest and have proposed some
 patches for it.

Kenichi-san, you may also want to check out this ML post from David
Kranz:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/028920.html

Best,
-jay




Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-03-05 Thread Julien Danjou
On Tue, Mar 04 2014, Joe Gordon wrote:

 So since tools/config/check_uptodate.sh is oslo code, I assumed this
 issue falls into the domain of oslo-incubator.

 Until this gets resolved nova is considering
 https://review.openstack.org/#/c/78028/

Removing tools/config/oslo.config.generator.rc would have been a
better trade-off, I think.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Russell Bryant
On 03/05/2014 08:52 AM, Christopher Lefelhocz wrote:
 I like this plan as it addresses my primary concern of getting deployments
 comfortable with the transition.
 
 We don't mention SDKs in the plan.  It seems like getting at least one SDK
 to use v3 would provide us additional data in the transition.  There is
 clear risk associated with that, but it may help to work out any
 additional issues that crop up.

I think this plan is mostly about ensuring we don't drown in this
transition.  We're focusing on lowering our maintenance cost so that the
transition to the new API can happen more naturally.

I think SDK support is critical for the success of v3 long term.  I
expect most people are using the APIs through one of the major SDKs, so
v3 won't take off until that happens.  I think our top priority in Nova
to help ensure this happens is to provide top notch documentation on the
v3 API, as well as all of the differences between v2 and v3.

-- 
Russell Bryant



[openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo


Hello,

Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the processes
it wraps.


On a database with 1 public network, 192 private networks, 192 
routers, and 192 nano VMs, with OVS plugin:



Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


   That's the time from when you reboot a network node until all namespaces
and services are restored.


   As appendix [1] shows, this extra 14 min of overhead matches the
fact that rootwrap needs 0.3 s to start and launch a system command
(once filtered):


14 minutes = 840 s
(840 s / 192 resources) / 0.3 s ~= 15 operations per
resource (qdhcp + qrouter): iptables, OVS port creation & tagging,
starting child processes, etc.


   The overhead comes from python startup time + rootwrap loading.

   I suppose that rootwrap was designed for a lower volume of system
calls (nova?).


   And I understand what rootwrap provides: a level of filtering that
sudo cannot offer. But it raises some questions:


1) Is anyone actually using rootwrap in production?

2) What alternatives can we think of to improve this situation?

   0) already being done: coalescing system calls. But I'm unsure 
that's enough. (if we coalesce 15 calls to 3 on this system we get: 
192*3*0.3/60 ~=3 minutes overhead on a 10min operation).


   a) Rewriting the rules into sudoers (to the extent that it's possible),
and living with that.
   b) How secure is neutron against command injection at that point? How
much is user input filtered in the API calls?
   c) Even if (b) is OK, I suppose that if the DB gets compromised,
that could lead to command injection.


   d) Rewriting rootwrap in C (it's ~600 Python LOCs now).

   e) Doing the command filtering on the neutron side, as a library, and
living with sudo plus simple filtering (killing the Python/rootwrap
startup overhead).


3) I also find 10 minutes a long time to set up 192 networks/basic tenant
structures; I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at).
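
The arithmetic quoted above can be written out explicitly (numbers taken from the measurements in this mail):

```python
# Back-of-the-envelope check of the rootwrap overhead figures.
resources = 192                   # qdhcp + qrouter namespaces
rootwrap_start_s = 0.3            # per-invocation cost, measured in [1]
overhead_s = (24 - 10) * 60       # rootwrap vs sudo setup gap: 840 s

# How many rootwrap invocations per resource does the gap imply?
calls_per_resource = overhead_s / resources / rootwrap_start_s  # ~14.6

# If coalescing cuts ~15 calls per resource down to 3, the residual
# rootwrap overhead would still be:
residual_min = resources * 3 * rootwrap_start_s / 60  # ~2.9 minutes
```

So even aggressive coalescing leaves a ~3-minute penalty on a 10-minute operation, which is why the sudo/library alternatives above are worth considering.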

Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo 'int main() { return 0; }' > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation on
this machine


real0m0.000s
user0m0.000s
sys0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real0m0.032s
user0m0.010s
sys0m0.019s


[root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'

real0m0.057s
user0m0.016s
sys0m0.011s

[root@rhos4-neutron2 ~]# time neutron-rootwrap --help
/usr/bin/neutron-rootwrap: No command specified

real0m0.309s
user0m0.128s
sys0m0.037s



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Sandhya Dasu (sadasu)
Hi Irena,
My MD has to take care of admin state changes since I have no L2
agent. I think that is what Bob also alluded to. That being said, I am not
doing anything specific to handle admin_state_up/down. The SR-IOV port on
my device is always going to be up, for now at least.

Thanks,
Sandhya

On 3/5/14 1:56 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert,
It seems to me that your proposal duplicates many lines of code.
For agent-based MDs, I would prefer to inherit from
SimpleAgentMechanismDriverBase and add a verify method there for
supported_pci_vendor_info. A specific MD will pass the list of supported
pci_vendor_info. The 'try_to_bind_segment_for_agent' method will
call 'supported_pci_vendor_info' and, if supported, continue with the
binding flow.
Maybe instead of a decorator method, it should just be a utility method?
I think the check for supported vnic_type and pci_vendor_info should be
done in order to see whether the MD should bind the port. If the answer
is yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how would
you deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper


@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic types.
    """

    def __init__(self, agent_type=None, supported_pci_vendor_info=[],
                 supported_vnic_types=DEFAULT_VNIC_TYPES_SUPPORTED):
        """Initialize base class for SR-IOV capable Mechanism Drivers

        :param agent_type: Constant identifying agent type in agents_db

Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

See embedded comments…

Thanks,
Robert

On 3/4/14 3:25 PM, Collins, Sean sean_colli...@cable.comcast.com wrote:

On Tue, Mar 04, 2014 at 02:08:03PM EST, Robert Li (baoli) wrote:
 Hi Xu Han  Sean,
 
 Is this code going to be committed as it is? Based on this morning's
 discussion, I thought that the IP address used to install the RA rule
 comes from the qr-xxx interface's LLA address. I think that I'm
confused.

Xu Han has a better grasp on the query than I do, but I'm going to try
and take a crack at explaining the code as I read through it. Here's
some sample data from the Neutron database - built using
vagrant_devstack. 

https://gist.github.com/sc68cal/568d6119eecad753d696

I don't have V6 addresses working in vagrant_devstack just yet, but for
the sake of discourse I'm going to use it as an example.

If you look at the queries he's building in 72252 - he's querying all
the ports on the network that are q_const.DEVICE_OWNER_ROUTER_INTF
(network:router_interface). The IPs of those ports are added to the list
of IPs.

Then a second query is done to find the port connected from the router
to the gateway, q_const.DEVICE_OWNER_ROUTER_GW
('network:router_gateway'). Those IPs are then appended to the list of
IPs.

Finally, the last query adds the IPs of the gateway for each subnet
in the network.

So, ICMPv6 traffic from ports that are either:

A) A gateway device
B) A router
C) The subnet's gateway

My understanding is that the RA (if enabled) will be sent to the router
interface (the qr interface). Therefore, the RA's source IP will be an LLA
from the qr interface

 

Will be passed through to an instance.

Now, please take note that I have *not* discussed what *kind* of IP
address will be picked up. We intend for it to be a Link Local address,
but that will be/is addressed in other patch sets.

 Also this bug: Allow LLA as router interface of IPv6 subnet
 https://review.openstack.org/76125 was created due to comments to 72252.
 If We don't need to create a new LLA for the gateway IP, is the fix
still
 needed? 

Yes - we still need this patch - because that code path is how we are
able to create ports on routers that are a link local address.

As a result of this change, it will end up having two LLA addresses on the
router's qr interface. It would have made more sense if the LLA
replaced the qr interface's automatically generated LLA address.



This is at least my understanding of our progress so far, but I'm not
perfect - Xu Han will probably have the last word.

-- 
Sean M. Collins


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread 方祯
Hi:
I'm Fang Zhen, an M.S. student from China. My current research work is on
scheduling policy in cloud computing. I have been following OpenStack
for about 2 years. I always thought of picking a blueprint and implementing
it with the community's guidance. Luckily, OpenStack participates in GSoC this
year, so it is possible for me to implement the Cross-services Scheduler of the
OpenStack-Gantt project. And also, I'm sure that I can continue to help
OpenStack after GSoC.

About me:
I'm an M.S. student from XiDian University. I'm good at C and Python
programming and I'm familiar with git and Python development. I have
participated in developing several web servers with Python web frameworks and
implemented some Python scripts to use OpenStack. I have read the docs and
papers about the project and the guide.md of OpenStack development. But
as a newbie to OpenStack dev, it would be great if somebody could give
me some guidance on starting with the project ;)

Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Feature Freeze Exceptions for Icehouse

2014-03-05 Thread Russell Bryant
Nova is now feature frozen for the Icehouse release.  Patches for
blueprints not already merged will need a feature freeze exception (FFE)
to be considered for Icehouse.

If you would like to request a FFE, please do so on the openstack-dev
mailing list with a prefix of [Nova] FFE Request: .

In addition to evaluating the request in terms of risks and benefits, I
would like to require that every FFE be sponsored by two members of
nova-core.  This is to ensure that there are reviewers willing to review
the code in a timely manner so that we can exclusively focus on bug
fixes as soon as possible.

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Russell Bryant
On 03/05/2014 09:59 AM, 方祯 wrote:
 Hi:
 I'm Fang Zhen, an M.S student from China. My current research work is on
 scheduling policy on cloud computing. I have been following the
 openstack for about 2 years.I always thought of picking a blueprint and
 implementing it with the community's guidance.Luckily, open-stack
 participates GSOC this year and is is impossible for me to implement
 Cross-services Scheduler of Openstack-Gantt project.And also, I'm sure
 that I can continue to help to  openstack after GSOC.

Thanks for your interest in OpenStack!

I think the project as you've described it is far too large to be able
to implement in one GSoC term.  If you're interested in scheduling,
perhaps we can come up with a specific enhancement to Nova's current
scheduler that would be more achievable in the time allotted.  I want to
make sure we're setting you up for success, and I think helping scope
the project is a big early part of that.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Sylvain Bauza

Hi Fang,

Gantt subteam owns weekly meetings every Tuesdays 1500 UTC at 
#openstack-meeting IRC channel, where we discuss about the steps for 
forklifting Nova scheduler into a separate service.
As there is now FeatureFreeze period, there are no patches targeted to 
be merged before next Juno summit, but there is opportunity for 
discussing anyway.


Thanks,
-Sylvain

On 05/03/2014 15:59, 方祯 wrote:

Hi:
I'm Fang Zhen, an M.S student from China. My current research work is 
on scheduling policy on cloud computing. I have been following the 
openstack for about 2 years.I always thought of picking a blueprint 
and implementing it with the community's guidance.Luckily, open-stack 
participates GSOC this year and is is impossible for me to implement 
Cross-services Scheduler of Openstack-Gantt project.And also, I'm 
sure that I can continue to help to  openstack after GSOC.


About me:
I'm a M.S student from XiDian University. I'm good at c and python 
programming and I'm familiar with git, python development.I have 
participated in developing several web server with python web 
framework and implentmented some python scripts to use openstack.I 
have read the docs and papers about the the project and the guide.md 
of openstack development. But as a newbie to 
OpenStack dev, it would be very great if somebody could give me some 
guidance on staring with the project;)


Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Thierry Carrez
Miguel Angel Ajo wrote:
 [...]
The overhead comes from python startup time + rootwrap loading.
 
I suppose that rootwrap was designed for lower amount of system calls
 (nova?).

Yes, it was not really designed to escalate rights on hundreds of
separate shell commands in a row.
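For context on the size of that per-command overhead: most of it is plain
interpreter startup, which can be measured in isolation. A minimal,
standalone sketch (it does not use rootwrap at all — it just times spawning
fresh Python interpreters, which every rootwrap invocation pays for before
it even loads its filter definitions):

```python
import subprocess
import sys
import time

def interpreter_startup_cost(runs=5):
    """Average wall-clock time to spawn a fresh Python interpreter."""
    start = time.time()
    for _ in range(runs):
        # Each rootwrap invocation pays at least this much, before it
        # even begins parsing its filter files.
        subprocess.check_call([sys.executable, '-c', 'pass'])
    return (time.time() - start) / runs

if __name__ == '__main__':
    print('~%.3fs per invocation' % interpreter_startup_cost())
```

On a loaded node the real rootwrap cost is higher still, since module
imports and filter parsing come on top of this baseline.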

And, I understand what rootwrap provides, a level of filtering that
 sudo cannot offer. But it raises some question:
 
 1) It's actually someone using rootwrap in production?
 
 2) What alternatives can we think about to improve this situation.
 
0) already being done: coalescing system calls. But I'm unsure that's
 enough. (if we coalesce 15 calls to 3 on this system we get:
 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
 
a) Rewriting rules into sudo (to the extent that it's possible), and
 live with that.

We used to use sudo and a sudoers file. The rules were poorly written,
and there is just so much you can check in a sudoers file. But the main
issue was that the sudoers file lived in packaging
(distribution-dependent), and was not maintained in sync with the code.
Rootwrap lets us maintain the rules (filters) in sync with the code
calling them.

To work around perf issues, you still have the option of running with a
wildcard sudoers file (and root_wrapper = sudo). That's about as safe as
running with badly-written or badly-maintained sudo rules anyway.

 [...]
d) Re-writing rootwrap into C (it's 600 python LOCs now).

(d2) would be to explore running rootwrap under PyPy. Testing that is on
my TODO list, but $OTHERSTUFF got in the way. Feel free to explore
that option.

e) Doing the command filtering at neutron-side, as a library and live
 with sudo with simple filtering. (we kill the python/rootwrap startup
 overhead).

That's as safe as running with a wildcard sudoers file (neutron user can
escalate to root). Which may just be acceptable in /some/ scenarios.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Collins, Sean
Hi Robert,

I'm reaching out to you off-list for this:

On Wed, Mar 05, 2014 at 09:48:46AM EST, Robert Li (baoli) wrote:
 As a result of this change, it will end up having two LLA addresses in the
 router's qr interface. It would have made more sense if the LLA will be
 replacing the qr interface's automatically generated LLA address.

Was this not what you intended, when you -1'd the security group patch
because you were not able to create gateways for Neutron subnets with a
LLA address? I am a little frustrated because we scrambled to create a
patch so you would remove your -1, and now you're suggesting we abandon
it?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Blueprint cinder-rbd-driver-qos

2014-03-05 Thread git harry
Hi,

https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos

I've been looking at this blueprint with a view to contributing to it, assuming 
I can take it. I am unclear as to whether or not it is still valid. I can see 
that it was registered around a year ago and it appears the functionality is 
essentially already supported by using multiple backends.

Looking at the existing drivers that have qos support, it appears IOPS etc. are 
available for control/customisation. As I understand it, Ceph has no qos-type 
control built in, and creating pools using different hardware is as granular as 
it gets. The two don't quite seem comparable to me, so I was hoping to get some 
feedback as to whether or not this is still useful/appropriate before 
attempting to do any work.

Thanks,
git-harry 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Darren Birkett
Hi,

I'm wondering why in this commit:

https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

...the slave_connection option was removed.  It seems like a useful option
to have, even if a lot of projects weren't yet using it.

Darren
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Flavors per datastore

2014-03-05 Thread Daniel Salinas
After reading this I feel it requires me to ask the question:

Do flavors have datastores or do datastores have flavors?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Flavors per datastore

2014-03-05 Thread Denis Makogon
Hey, Daniel.

A datastore has a set of flavors that are allowed to be used when
provisioning an instance with the given datastore.


Best regards,
Denis Makogon
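In other words, the relation runs datastore → flavors. A toy sketch of that
direction (the class and names here are illustrative only, not Trove's
actual API):

```python
# Hypothetical sketch: each datastore carries the set of flavor ids it
# permits; provisioning validates the requested flavor against that set.
class Datastore(object):
    def __init__(self, name, allowed_flavor_ids):
        self.name = name
        self.allowed_flavor_ids = set(allowed_flavor_ids)

    def validate_flavor(self, flavor_id):
        """Raise if the flavor is not permitted for this datastore."""
        if flavor_id not in self.allowed_flavor_ids:
            raise ValueError('flavor %s not allowed for datastore %s'
                             % (flavor_id, self.name))

mysql = Datastore('mysql', ['m1.small', 'm1.medium'])
mysql.validate_flavor('m1.small')       # OK, provisioning may proceed
try:
    mysql.validate_flavor('m1.xlarge')  # rejected at provisioning time
except ValueError as e:
    print(e)
```

The same objects answer Daniel's question either way round: a reverse
flavor → datastores index can be derived from this mapping if needed.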


On Wed, Mar 5, 2014 at 5:28 PM, Daniel Salinas imsplit...@gmail.com wrote:

 After reading this I feel it requires me to ask the question:

 Do flavors have datastores or do datastores have flavors?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Davanum Srinivas
Hi Fang,

Agree with Russell. Also please update the wiki with your information
https://wiki.openstack.org/wiki/GSoC2014 and also information about
the mentor/ideas as well (if you have not yet done so already). You
can reach out to folks on #openstack-gsoc and #openstack-nova IRC
channels as well

thanks,
dims

On Wed, Mar 5, 2014 at 10:12 AM, Russell Bryant rbry...@redhat.com wrote:
 On 03/05/2014 09:59 AM, 方祯 wrote:
 Hi:
 I'm Fang Zhen, an M.S student from China. My current research work is on
 scheduling policy on cloud computing. I have been following the
 openstack for about 2 years.I always thought of picking a blueprint and
 implementing it with the community's guidance.Luckily, open-stack
 participates GSOC this year and is is impossible for me to implement
 Cross-services Scheduler of Openstack-Gantt project.And also, I'm sure
 that I can continue to help to  openstack after GSOC.

 Thanks for your interest in OpenStack!

 I think the project as you've described it is far too large to be able
 to implement in one GSoC term.  If you're interested in scheduling,
 perhaps we can come up with a specific enhancement to Nova's current
 scheduler that would be more achievable in the time allotted.  I want to
 make sure we're setting you up for success, and I think helping scope
 the project is a big early part of that.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-05 Thread Thierry Carrez
Dina Belova wrote:
 I think your idea is really interesting. I mean, that thought “Gantt -
 where to schedule, Climate - when to schedule” is quite understandable
 and good looking.

Would Climate also be usable to support functionality like Spot
Instances ? Schedule when spot price falls under X ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-05 Thread Gary Kotton
Hi,
Unfortunately we did not get the ISO support approved by the deadline. If 
possible, can we please get an FFE?

The feature is complete and has been tested extensively internally. The 
feature is very low risk and has huge value for users. In short, a user is able 
to upload an ISO to Glance and then boot from that ISO.

BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
Code: https://review.openstack.org/#/c/63084/ and 
https://review.openstack.org/#/c/77965/
Sponsors: John Garbutt and Nikola Dipanov

One of the things that we are planning on improving in Juno is the way that the 
Vmops code is arranged and organized. We will soon be posting a wiki for ideas 
to be discussed. That will enable us to make additions like this a lot simpler 
in the future. But sadly that is not part of the scope at the moment.

Thanks in advance
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-05 Thread Thierry Carrez
Anne Gentle wrote:
 It feels like it should be part of a scheduler or reservation program
 but we don't have one today. We also don't have a workflow, planning, or
 capacity management program, all of which these use cases could fall under. 
 
 (I should know this but) What are the options when a program doesn't
 exist already? Am I actually struggling with a scope expansion beyond
 infrastructure definitions? I'd like some more discussion by next week's
 TC meeting.

When a project files for incubation and covers a new scope, they also
file for a new program to go with it.

https://wiki.openstack.org/wiki/Governance/NewProjects

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Tracy Jones
Hi - Please consider the image cache aging BP for FFE 
(https://review.openstack.org/#/c/56416/)

This is the last of several patches (already merged) that implement image cache 
cleanup for the vmware driver.  This patch solves a significant customer pain 
point as it removes unused images from their datastore.  Without this patch 
their datastore can become unnecessarily full.  In addition to the customer 
benefit from this patch it

1.  has a turn-off switch 
2.  is fully contained within the vmware driver
3.  has gone through functional testing with our internal QA team 

ndipanov has been good enough to say he will review the patch, so we would ask 
for one additional core sponsor for this FFE.

Thanks

Tracy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Alexei Kornienko

Hello Darren,

This option was removed since oslo.db will no longer manage engine 
objects on its own. Since it will not store engines, it cannot handle 
query dispatching.


Every project that wants to use slave_connection will have to implement 
this logic (creation of the slave engine and query dispatching) on its own.
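A minimal sketch of the dispatching each project would now have to own —
plain strings stand in for SQLAlchemy engine objects here; this is not
oslo.db's removed implementation, just the shape of the logic:

```python
# Writes always go to the master engine; reads may be dispatched to a
# slave engine when one is configured (e.g. from slave_connection).
class EngineDispatcher(object):
    def __init__(self, master, slave=None):
        self._master = master
        self._slave = slave

    def engine_for(self, use_slave=False):
        """Pick the engine for a query; fall back to master if no slave."""
        if use_slave and self._slave is not None:
            return self._slave
        return self._master

dispatcher = EngineDispatcher(master='master-engine',
                              slave='slave-engine')
print(dispatcher.engine_for())                # master-engine
print(dispatcher.engine_for(use_slave=True))  # slave-engine
```

In a real project the strings would be engines built with SQLAlchemy's
create_engine(), one per connection string.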


Regards,

On 03/05/2014 05:18 PM, Darren Birkett wrote:

Hi,

I'm wondering why in this commit:

https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

...the slave_connection option was removed.  It seems like a useful 
option to have, even if a lot of projects weren't yet using it.


Darren


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-05 Thread Jay Lau
Hi Gokul,




2014-03-05 3:30 GMT+08:00 Gokul Kandiraju gokul4o...@gmail.com:



 Dear All,



 We are working on a framework where we want to monitor the system and take
 certain actions when specific events or situations occur. Here are two
 examples of 'different' situations:



Example 1: A VM's-Owner and N/W's-owner are different == this could
 mean a violation == we need to take some action

Example 2: A simple policy such as (VM-migrate of all VMs on possible
 node failure) OR (a more complex Energy Policy that may involve
 optimization).



 Both these examples need monitoring and actions to be taken when certain
 events happen (or through polling). However, the first one falls into the
 Compliance domain with Boolean conditions getting evaluated while the
 second one may require a richer set of expressions allowing for
 sequences or algorithms.
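Example 1 reduces to a Boolean check over inventory. A standalone sketch
with made-up record shapes (this is not Congress's actual data model, just
the condition stated as code):

```python
# Flag every VM whose owner differs from the owner of the network it
# is attached to -- the "violation" from Example 1.
def find_violations(vms, networks):
    net_owner = {n['id']: n['owner'] for n in networks}
    return [vm['id'] for vm in vms
            if net_owner.get(vm['network_id']) != vm['owner']]

vms = [
    {'id': 'vm1', 'owner': 'alice', 'network_id': 'net1'},
    {'id': 'vm2', 'owner': 'bob', 'network_id': 'net1'},
]
networks = [{'id': 'net1', 'owner': 'alice'}]
print(find_violations(vms, networks))  # ['vm2']
```

Example 2, by contrast, cannot be written as a single Boolean over a
snapshot — it needs a sequence of actions and possibly an optimization
loop, which is exactly the distinction drawn above.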

  So far, based on this discussion, it seems that these are *separate*
 initiatives in the community. I am understanding the Congress project to be
 in the domain of Boolean conditions (used for Compliance, etc.) where as
 the Run-time-policies (Jay's proposal) where policies can be expressed as
 rules, algorithms with higher-level goals. Is this understanding correct?

 Also, looking at all the mails, this is what I am reading:



  1. Congress -- Focused on Compliance [ is that correct? ] (Boolean
 constraints and logic)



  2. Runtime-Policies -- Jay's mail -- Focused on Runtime policies
 for Load Balancing, Availability, Energy, etc. (sequences of actions,
 rules, algorithms)

[Jay] Yes, exactly.



  3. SolverScheduler -- Focused on Placement [ static or runtime ] and
 will be invoked by the (above) policy engines



  4. Gantt - Focused on (Holistic) Scheduling

  [Jay] For 3 and 4, I was always thinking Gantt is doing something for
implementing SolverScheduler, not sure if run time policy can be included.



  5. Neat -- seems to be a special case of Runtime-Policies  (policies
 based on Load)



 Would this be correct understanding?  We need to understand this to
 contribute to the right project. :)



 Thanks!

 -Gokul



 On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau jay.lau@gmail.com wrote:

 Hi Yathiraj and Tim,

 Really appreciate your comments here ;-)

 I will prepare some detailed slides or documents before summit and we can
 have a review then. It would be great if OpenStack can provide DRS
 features.

 Thanks,

 Jay



 2014-03-01 6:00 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 I think the Solver Scheduler is a better fit for your needs than
 Congress because you know what kinds of constraints and enforcement you
 want.  I'm not sure this topic deserves an entire design session--maybe
 just talking a bit at the summit would suffice (I *think* I'll be
 attending).

 Tim

 - Original Message -
 | From: Jay Lau jay.lau@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, February 26, 2014 6:30:54 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 for OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 |
 |
 |
 | Hi Tim,
 |
 | I'm not sure if we can put resource monitor and adjust to
 | solver-scheduler (Gantt), but I have proposed this to Gantt design
 | [1], you can refer to [1] and search jay-lau-513.
 |
 | IMHO, Congress does monitoring and also take actions, but the actions
 | seems mainly for adjusting single VM network or storage. It did not
 | consider migrating VM according to hypervisor load.
 |
 | Not sure if this topic deserved to be a design session for the coming
 | summit, but I will try to propose.
 |
 |
 |
 |
 | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
 |
 |
 |
 | Thanks,
 |
 |
 | Jay
 |
 |
 |
 | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  :
 |
 |
 | Hi Jay and Sylvain,
 |
 | The solver-scheduler sounds like a good fit to me as well. It clearly
 | provisions resources in accordance with policy. Does it monitor
 | those resources and adjust them if the system falls out of
 | compliance with the policy?
 |
 | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
 | There was mention of compute, networking, and storage, and I
 | couldn't tell if the idea was for policy that spans OS components or
 | not. Congress was designed for policies spanning OS components.
 |
 |
 | Tim
 |
 | - Original Message -
 |
 | | From: Jay Lau  jay.lau@gmail.com 
 | | To: OpenStack Development Mailing List (not for usage questions)
 | |  openstack-dev@lists.openstack.org 
 |
 |
 | | Sent: Tuesday, February 25, 2014 10:13:14 PM
 | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 | | for OpenStack run time policy to manage
 | | compute/storage resource
 | |
 | |
 | |
 | |
 | |
 | | Thanks Sylvain and Tim for the great sharing.
 | |
 | | @Tim, I also go through with Congress and 

Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Joe Gordon
On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao chaoc...@gmail.com wrote:
 Hi Joe, my meaning is that cloud users may not hope to create new instances
 or new images, because those actions may require additional approval and
 additional charging. Or, due to instance/image quota limits, they can not do
 that. Anyway, from user's perspective, saving and reverting the existing
 instance will be preferred sometimes. Creating a new instance will be
 another story.


Are you saying some users may not be able to create an instance at
all? If so, why not just control that via quotas?

Assuming the user has the rights and quota to create one
instance and one snapshot, your proposed idea is only slightly
different from the current workflow.

Currently one would:
1) Create instance
2) Snapshot instance
3) Use instance / break instance
4) delete instance
5) boot new instance from snapshot
6) goto step 3

From what I gather you are saying that instead of 4/5 you want the
user to be able to just reboot the instance. I don't think such a
subtle change in behavior is worth a whole new API extension.
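The six steps can be made concrete with a toy in-memory model (none of this
is the real Nova API; it only fixes the ordering of operations):

```python
# Toy cloud: instances and snapshots are just dict entries.
class MiniCloud(object):
    def __init__(self):
        self.instances = {}
        self.snapshots = {}
        self._next_id = 0

    def boot(self, image):
        self._next_id += 1
        self.instances[self._next_id] = image
        return self._next_id

    def snapshot(self, instance_id, name):
        self.snapshots[name] = self.instances[instance_id]

    def delete(self, instance_id):
        del self.instances[instance_id]

cloud = MiniCloud()
vm = cloud.boot('base-image')          # 1) create instance
cloud.snapshot(vm, 'clean-state')      # 2) snapshot it
# 3) use / break the instance ...
cloud.delete(vm)                       # 4) delete instance
vm = cloud.boot(cloud.snapshots['clean-state'])  # 5) boot from snapshot
print(cloud.instances[vm])             # base-image
```

The proposal amounts to replacing steps 4-5 with a single reboot of the
same instance id, which is the behavioral delta under discussion.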


 On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com wrote:
  I think the current snapshot implementation can be a solution sometimes,
  but
  it is NOT exact same as user's expectation. For example, a new blueprint
  is
  created last week,
  https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
  which
  seems a little similar with this discussion. I feel the user is
  requesting
  Nova to create in-place snapshot (not a new image), in order to revert
  the
  instance to a certain state. This capability should be very useful when
  testing new software or system settings. It seems a short-term temporary
  snapshot associated with a running instance for Nova. Creating a new
  instance is not that convenient, and may be not feasible for the user,
  especially if he or she is using public cloud.
 

 Why isn't it easy to create a new instance from a snapshot?

 
  On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
  divakar.padiyar-nanda...@hp.com wrote:
 
   Why reboot an instance? What is wrong with deleting it and create a
   new one?
 
  You generally use non-persistent disk mode when you are testing new
  software or experimenting with settings.   If something goes wrong just
  reboot and you are back to clean state and start over again.I feel
  it's
  convenient to handle this with just a reboot rather than recreating the
  instance.
 
  Thanks,
  Divakar
 
  -Original Message-
  From: Joe Gordon [mailto:joe.gord...@gmail.com]
  Sent: Tuesday, March 04, 2014 10:41 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova][cinder] non-persistent
  storage(after
  stopping VM, data will be rollback automatically), do you think we
  shoud
  introduce this feature?
  Importance: High
 
  On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
  zhangleiqi...@huawei.com
  wrote:
  
   This sounds like ephemeral storage plus snapshots.  You build a base
   image, snapshot it then boot from the snapshot.
  
  
   Non-persistent storage/disk is useful for sandbox-like environment,
   and
   this feature has already exists in VMWare ESX from version 4.1. The
   implementation of ESX is the same as what you said, boot from
   snapshot of
   the disk/volume, but it will also *automatically* delete the
   transient
   snapshot after the instance reboots or shutdowns. I think the whole
   procedure may be controlled by OpenStack other than user's manual
   operations.
 
  Why reboot an instance? What is wrong with deleting it and create a new
  one?
 
  
   As far as I know, libvirt already defines the corresponding
   transient
   element in domain xml for non-persistent disk ( [1] ), but it cannot
   specify
   the location of the transient snapshot. Although qemu-kvm has
   provided
   support for this feature by the -snapshot command argument, which
   will
   create the transient snapshot under /tmp directory, the qemu driver
   of
   libvirt don't support transient element currently.
  
   I think the steps of creating and deleting transient snapshot may be
   better to done by Nova/Cinder other than waiting for the transient
   support
   added to libvirt, as the location of transient snapshot should
   specified by
   Nova.
  
  
   [1] http://libvirt.org/formatdomain.html#elementsDisks
   --
   zhangleiqiang
  
   Best Regards
  
  
   -Original Message-
   From: Joe Gordon [mailto:joe.gord...@gmail.com]
   Sent: Tuesday, March 04, 2014 11:26 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Cc: Luohao (brian)
   Subject: Re: [openstack-dev] [nova][cinder] non-persistent
   storage(after stopping VM, data will be rollback automatically), do
   you think we shoud introduce this feature?
  
   On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Robert Li (baoli)
Hi Irena,

The main reason for me to do it that way is how vif_details should be
set up in our case. Do you need vlan in vif_details? The behavior in the
existing base classes is that vif_details is set at driver init
time. In our case, it needs to be set up during bind_port().

thanks,
Robert


On 3/5/14 7:37 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, Sandhya,
I have pushed the reference implementation SriovAgentMechanismDriverBase
as part the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for
mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development
Mailing List (not for usage questions); Robert Kukura; Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

Hi Robert,
It seems to me that many code lines are duplicated under your proposal.
For agent-based MDs, I would prefer to inherit from
SimpleAgentMechanismDriverBase and add a verify method there for
supported_pci_vendor_info. Each specific MD will pass the list of supported
pci_vendor_info. The 'try_to_bind_segment_for_agent' method will
call 'supported_pci_vendor_info' and, if supported, continue with the binding
flow. 
Maybe instead of a decorator method, it should be just a utility method?
I think that the check for supported vnic_type and pci_vendor info
support, should be done in order to see if MD should bind the port. If
the answer is Yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how would
you deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper

@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running.
    """

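The guard above can be exercised standalone with stub objects. A sketch (FakePortContext and the driver below are hypothetical stand-ins; the portbindings constants are inlined as their string values):

```python
import functools

class FakePortContext:
    """Hypothetical stand-in for the ML2 PortContext."""
    def __init__(self, current):
        self.current = current

def check_vnic_type_and_vendor_info(f):
    # Same guard logic as the snippet above, minus the LOG/_ plumbing.
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get('binding:vnic_type', 'normal')
        if vnic_type not in self.supported_vnic_types:
            return  # skipped: unsupported vnic_type
        if self.supported_pci_vendor_info:
            profile = context.current.get('binding:profile', {})
            info = profile.get('pci_vendor_info')
            if not info or info not in self.supported_pci_vendor_info:
                return  # skipped: missing or unsupported pci vendor info
        return f(self, context)
    return wrapper

class FakeSriovDriver:
    supported_vnic_types = ['direct', 'macvtap']
    supported_pci_vendor_info = ['15b3:1004']  # hypothetical vendor:product

    @check_vnic_type_and_vendor_info
    def bind_port(self, context):
        return 'bound'

md = FakeSriovDriver()
ok = FakePortContext({'binding:vnic_type': 'direct',
                      'binding:profile': {'pci_vendor_info': '15b3:1004'}})
skip = FakePortContext({'binding:vnic_type': 'normal'})
print(md.bind_port(ok))    # bound
print(md.bind_port(skip))  # None
```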
Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 5:53 AM, Julien Danjou jul...@danjou.info wrote:
 On Tue, Mar 04 2014, Joe Gordon wrote:

 So since tools/config/check_uptodate.sh is oslo code, I assumed this
 issue falls into the domain of oslo-incubator.

 Until this gets resolved nova is considering
 https://review.openstack.org/#/c/78028/

 Removing tools/config/oslo.config.generator.rc would have a been a
 better trade-off I think.


Perhaps, although the previous consensus in this thread seemed to be that
we generally don't want to include auto-generated files like the sample
config in git. Removing the check altogether also solves the case
where a patch in trunk adds a new config option and causes all
subsequent patches in that repo to fail.

 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Ben Nemec
This has actually come up before, too: 
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html


-Ben

On 2014-03-05 08:42, Miguel Angel Ajo wrote:

Hello,

Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of CPU cycles, much more than the
processes it is wrapping.

On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


   That's the time since you reboot a network node, until all 
namespaces

and services are restored.


   If you see Appendix 1, this extra 14 min of overhead matches the
fact that rootwrap needs 0.3 s to start and launch a system
command (once filtered).

14minutes =  840 s.
(840s. / 192 resources)/0.3s ~= 15 operations /
resource(qdhcp+qrouter) (iptables, ovs port creation  tagging,
starting child processes, etc..)

   The overhead comes from python startup time + rootwrap loading.

   I suppose that rootwrap was designed for a lower number of system
calls (nova?).

   And, I understand what rootwrap provides, a level of filtering that
sudo cannot offer. But it raises some question:

1) Is anyone actually using rootwrap in production?

2) What alternatives can we think about to improve this situation.

   0) already being done: coalescing system calls. But I'm unsure
that's enough. (if we coalesce 15 calls to 3 on this system we get:
192*3*0.3/60 ~=3 minutes overhead on a 10min operation).

   a) Rewriting the rules into sudo (to the extent that it's possible),
and living with that.
   b) How well does neutron guard against command injection at that point?
How much user input is filtered in the API calls?
   c) Even if b is ok, I suppose that if the DB gets compromised,
that could lead to command injection.

   d) Re-writing rootwrap into C (it's 600 python LOCs now).

   e) Doing the command filtering on the neutron side, as a library, and
living with sudo doing simple filtering (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to set up 192 networks/basic
tenant structures. I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)
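For reference, the overhead arithmetic above can be checked directly (a sketch using only the numbers measured in this mail):

```python
# Numbers from the measurements above.
resources = 192                 # qdhcp + qrouter pairs rebuilt at boot
overhead_s = 24 * 60 - 10 * 60  # rootwrap vs sudo setup time: 840 s
rootwrap_start_s = 0.3          # measured rootwrap startup per call

calls_per_resource = (overhead_s / resources) / rootwrap_start_s
print(round(calls_per_resource))  # ~15 wrapped calls per resource

# If coalescing reduces 15 calls to 3 per resource:
residual_min = resources * 3 * rootwrap_start_s / 60
print(round(residual_min, 1))  # ~2.9 minutes of residual overhead
```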

Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo int main() { return 0; }  test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time test  # to time process invocation
on this machine

real0m0.000s
user0m0.000s
sys0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real0m0.032s
user0m0.010s
sys0m0.019s


[root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'

real0m0.057s
user0m0.016s
sys0m0.011s

[root@rhos4-neutron2 ~]# time neutron-rootwrap --help
/usr/bin/neutron-rootwrap: No command specified

real0m0.309s
user0m0.128s
sys0m0.037s

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

Sorry for your frustration. I actually provided the comments about the two
LLAs in the review (see patch set 1). If the intent for these changes is
to allow RAs from legitimate sources only, I'm afraid that that goal won't
be reached with them. I may be completely wrong, but so far I haven't been
convinced yet. 
 

thanks,
Robert



On 3/5/14 10:21 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

Hi Robert,

I'm reaching out to you off-list for this:

On Wed, Mar 05, 2014 at 09:48:46AM EST, Robert Li (baoli) wrote:
 As a result of this change, it will end up having two LLA addresses in
the
 router's qr interface. It would have made more sense if the LLA will be
 replacing the qr interface's automatically generated LLA address.

Was this not what you intended, when you -1'd the security group patch
because you were not able to create gateways for Neutron subnets with a
LLA address? I am a little frustrated because we scrambled to create a
patch so you would remove your -1, and now you're suggesting we abandon
it?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Irena Berezovsky
Hi Robert,
I think what you mentioned can be achieved by calling into a specific MD method
from
SriovAgentMechanismDriverBase.try_to_bind_segment_for_agent, maybe
something like 'get_vif_details', before it calls context.set_binding.
Would you mind continuing the discussion on the gerrit review
https://review.openstack.org/#/c/74464/ ?
I think it will be easier to follow up on the comments and decisions there.

Thanks,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Wednesday, March 05, 2014 6:10 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions); Sandhya Dasu (sadasu); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Irena,

The main reason for me to do it that way is how vif_details should be set up in
our case. Do you need the vlan in vif_details? The behavior in the existing base
classes is that vif_details is set at driver init time. In our case, it needs
to be set up during bind_port().

thanks,
Robert
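
A minimal sketch of that constraint: because vif_details depends on the segment chosen at bind time, it has to be built inside bind_port() rather than at driver init. Class, key, and vif-type names below are illustrative, not the final ML2 API:

```python
class FakePortContext:
    """Hypothetical stand-in exposing the segments offered for binding."""
    def __init__(self, segments):
        self.segments_to_bind = segments
        self.bound = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.bound = (segment_id, vif_type, vif_details)

def bind_port(context):
    for segment in context.segments_to_bind:
        if segment.get('network_type') == 'vlan':
            # The vlan id is only known once a segment is picked, so
            # vif_details cannot be a static attribute set in __init__.
            vif_details = {'vlan': segment['segmentation_id']}
            context.set_binding(segment['id'], 'hw_veb', vif_details)
            return True
    return False

ctx = FakePortContext([{'id': 'seg-1', 'network_type': 'vlan',
                        'segmentation_id': 100}])
print(bind_port(ctx))        # True
print(ctx.bound[2]['vlan'])  # 100
```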


On 3/5/14 7:37 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, Sandhya,
I have pushed the reference implementation 
SriovAgentMechanismDriverBase as part the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for 
mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development 
Mailing List (not for usage questions); Robert Kukura; Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
binding of ports

Hi Robert,
Seems to me that many code lines are duplicated following your proposal.
For agent based MDs, I would prefer to inherit from 
SimpleAgentMechanismDriverBase and add there verify method for 
supported_pci_vendor_info. Specific MD will pass the list of supported 
pci_vendor_info list. The  'try_to_bind_segment_for_agent' method will 
call 'supported_pci_vendor_info', and if supported continue with 
binding flow.
Maybe instead of a decorator method, it should be just an utility method?
I think that the check for supported vnic_type and pci_vendor info 
support, should be done in order to see if MD should bind the port. If 
the answer is Yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how 
would you deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for 
usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen 
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
binding of ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
pci_vendor_info = 

Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Doug Hellmann
On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:

  Hello Darren,

 This option was removed since oslo.db will no longer manage engine objects
 on its own. Since it will not store engines, it cannot handle query
 dispatching.

 Every project that wants to use slave_connection will have to implement
 this logic (creation of the slave engine and query dispatching) on its own.


If we are going to have multiple projects using that feature, we will have
to restore it to oslo.db. Just because the primary API won't manage global
objects doesn't mean we can't have a secondary API that does.

Doug
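
A project-side sketch of that logic, routing reads to the slave when asked (connection URLs are hypothetical placeholders; real code would build SQLAlchemy engines from them):

```python
class EngineFacade:
    """Toy master/slave dispatcher of the kind each project would carry."""
    def __init__(self, connection, slave_connection=None):
        self._master = connection
        # Fall back to the master when no slave is configured.
        self._slave = slave_connection or connection

    def get_connection(self, use_slave=False):
        # Writes (and reads needing read-after-write consistency) must
        # go to the master; only plain reads may be sent to the slave.
        return self._slave if use_slave else self._master

facade = EngineFacade('mysql://master/nova', 'mysql://slave/nova')
print(facade.get_connection(use_slave=True))  # mysql://slave/nova
print(facade.get_connection())                # mysql://master/nova
```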




 Regards,


 On 03/05/2014 05:18 PM, Darren Birkett wrote:

 Hi,

  I'm wondering why in this commit:


 https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

  ...the slave_connection option was removed.  It seems like a useful
 option to have, even if a lot of projects weren't yet using it.

  Darren


 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FFE Request: Freescale SDN ML2 Mechanism Driver

2014-03-05 Thread trinath.soman...@freescale.com
Hi Mark,


We have the codebase and the 3rd Party CI setup in place for review.


Freescale CI is currently in non-voting status.


Kindly please consider the Blueprint and the codebase for FFE (icehouse 
release).


Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/fsl-sdn-os-mech-driver and


Code base: https://review.openstack.org/#/c/78092/


Kindly please do the needful.


Thanking you


-

Trinath Somanchi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-05 Thread Tim Hinrichs
Hi Gokul, 

Thanks for working out how all these policy initiatives relate to each other. 
I'll be spending some time diving into the ones I hadn't heard about. 

I made some additional comments about Congress below. 

Tim 

- Original Message -

From: Jay Lau jay.lau@gmail.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Wednesday, March 5, 2014 7:31:55 AM 
Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage compute/storage resource 

Hi Gokul, 




2014-03-05 3:30 GMT+08:00 Gokul Kandiraju  gokul4o...@gmail.com  : 





Dear All, 

We are working on a framework where we want to monitor the system and take 
certain actions when specific events or situations occur. Here are two examples 
of 'different' situations: 

Example 1: A VM's owner and N/W's owner are different == this could mean a 
violation == we need to take some action 

Example 2: A simple policy such as (VM-migrate of all VMs on possible node 
failure) OR (a more complex Energy Policy that may involve optimization). 

Both these examples need monitoring and actions to be taken when certain events 
happen (or through polling). However, the first one falls into the Compliance 
domain, with Boolean conditions getting evaluated, while the second one may 
require a richer set of expressions allowing for sequences or algorithms. 
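
Example 1 is the Boolean-condition flavor; as a plain predicate over (hypothetical) resource records it might look like:

```python
def owner_violations(vms, networks, attachments):
    """Flag VM/network pairs whose owners differ (Example 1)."""
    net_owner = {net['id']: net['owner'] for net in networks}
    return [(vm['id'], net_id)
            for vm in vms
            for net_id in attachments.get(vm['id'], [])
            if vm['owner'] != net_owner[net_id]]

vms = [{'id': 'vm1', 'owner': 'alice'}, {'id': 'vm2', 'owner': 'bob'}]
networks = [{'id': 'net1', 'owner': 'bob'}]
attachments = {'vm1': ['net1'], 'vm2': ['net1']}
print(owner_violations(vms, networks, attachments))  # [('vm1', 'net1')]
```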





So far, based on this discussion, it seems that these are *separate* 
initiatives in the community. I understand the Congress project to be in 
the domain of Boolean conditions (used for Compliance, etc.), whereas 
Runtime-Policies (Jay's proposal) is where policies can be expressed as rules 
and algorithms with higher-level goals. Is this understanding correct? 

Also, looking at all the mails, this is what I am reading: 

1. Congress -- Focused on Compliance [ is that correct? ] (Boolean constraints 
and logic) 



[Tim] Your characterization of boolean constraints for Congress is probably a 
good one. Congress won't be solving optimization/numeric problems any time soon 
if ever. However, I could imagine that down the road we could tell Congress 
here's the policy (optimization or Boolean) that we want to enforce, and it 
would carve off say the Load-balancing part of the policy and send it to the 
Runtime-Policies component; or it would carve off the placement policy and send 
it to the SolverScheduler. Not saying I know how to do this today, but that's 
always been part of the goal for Congress: to have a central point for admins 
to control the global policy being enforced throughout the datacenter/cloud. 

The other delta here is that the Congress policy language is general-purpose, 
so there's not a list of policy types that it will handle (Load Balancing, 
Placement, Energy). That generality comes with a price: that Congress must rely 
on other enforcement points, such as the ones below, to handle complicated 
policy enforcement problems. 






2. Runtime-Policies -- Jay's mail -- Focused on Runtime policies for Load 
Balancing, Availability, Energy, etc. (sequences of actions, rules, algorithms) 


[Jay] Yes, exactly. 






3. SolverScheduler -- Focused on Placement [ static or runtime ] and will be 
invoked by the (above) policy engines 




4. Gantt -- Focused on (Holistic) Scheduling 


[Jay] For 3 and 4, I was always thinking Gantt does something toward 
implementing SolverScheduler; I'm not sure if runtime policy can be included. 






5. Neat -- seems to be a special case of Runtime-Policies (policies based on 
Load) 



Would this be correct understanding? We need to understand this to contribute 
to the right project. :) 



Thanks! 

-Gokul 


On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau  jay.lau@gmail.com  wrote: 


Hi Yathiraj and Tim, 

Really appreciate your comments here ;-) 

I will prepare some detailed slides or documents before summit and we can have 
a review then. It would be great if OpenStack can provide DRS features. 

Thanks, 

Jay 



2014-03-01 6:00 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  : 


Hi Jay, 

I think the Solver Scheduler is a better fit for your needs than Congress 
because you know what kinds of constraints and enforcement you want. I'm not 
sure this topic deserves an entire design session--maybe just talking a bit at 
the summit would suffice (I *think* I'll be attending). 

Tim 

- Original Message - 
| From: Jay Lau  jay.lau@gmail.com  
| To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org  
| Sent: Wednesday, February 26, 2014 6:30:54 PM 
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage 
| compute/storage resource 
| 
| 
| 
| 
| 
| 
| Hi Tim, 
| 
| I'm not sure if we can put resource monitor and adjust to 
| solver-scheduler 

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
 Miguel Angel Ajo wrote:
  [...]
 The overhead comes from python startup time + rootwrap loading.
  
 I suppose that rootwrap was designed for lower amount of system calls
  (nova?).
 
 Yes, it was not really designed to escalate rights on hundreds of
 separate shell commands in a row.
 
 And, I understand what rootwrap provides, a level of filtering that
  sudo cannot offer. But it raises some question:
  
  1) It's actually someone using rootwrap in production?
  
  2) What alternatives can we think about to improve this situation.
  
 0) already being done: coalescing system calls. But I'm unsure that's
  enough. (if we coalesce 15 calls to 3 on this system we get:
  192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
  
 a) Rewriting rules into sudo (to the extent that it's possible), and
  live with that.
 
 We used to use sudo and a sudoers file. The rules were poorly written,
 and there is just so much you can check in a sudoers file. But the main
 issue was that the sudoers file lived in packaging
 (distribution-dependent), and was not maintained in sync with the code.
 Rootwrap let us to maintain the rules (filters) in sync with the code
 calling them.

Yes, from a security & maintenance standpoint, it was a smart decision. I'm
thinking of automatically converting rootwrap rules to sudoers, but that's very 
limited, especially for the "ip netns exec ..." case.


 To work around perf issues, you still have the option of running with a
 wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
 running with a badly-written or badly-maintained sudo rules anyway.

That's what I used for my benchmark. I just wonder how possible
it is to get command injection into neutron, via the API or DB.

 
  [...]
 d) Re-writing rootwrap into C (it's 600 python LOCs now).
 
 (d2) would be to explore running rootwrap under Pypy. Testing that is on
 my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
 that option.

I tried it on my system right now; it takes more time to boot up. PyPy's JIT 
is awesome at runtime, but it seems that startup time is slower.

I also played a little with shedskin (py-c++ converter), but it 
doesn't support all the python libraries, dynamic typing, or parameter 
unpacking.

That could be another approach, writing a simplified rootwrap in python, and
have it automatically converted to C++.

f) haleyb on IRC pointed me to another approach Carl Baldwin is
pushing, https://review.openstack.org/#/c/67490/ , towards command execution 
coalescing.
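
One concrete form of coalescing is iproute2's batch mode: write the per-resource subcommands to one input and fork a single `ip -batch` process instead of one process (plus rootwrap) per command. A sketch, with hypothetical device names; the actual invocation needs root:

```python
def build_ip_batch(ports):
    """One ip subcommand per line (no leading 'ip'), for `ip -batch -`."""
    lines = []
    for port in ports:
        lines.append('link add %s type veth peer name %s-peer' % (port, port))
        lines.append('link set %s up' % port)
    return '\n'.join(lines) + '\n'

batch = build_ip_batch(['qr-1', 'qr-2', 'qr-3'])
print(batch.count('\n'))  # 6 commands, but only 1 process to fork

# To actually apply it (root required):
#   subprocess.run(['ip', '-batch', '-'], input=batch.encode(), check=True)
```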


 
 e) Doing the command filtering at neutron-side, as a library and live
  with sudo with simple filtering. (we kill the python/rootwrap startup
  overhead).
 
 That's as safe as running with a wildcard sudoers file (neutron user can
 escalate to root). Which may just be acceptable in /some/ scenarios.

I think it can be safer, (from the command injection point of view).

 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-05 Thread Paul Marshall
Hey, 

Sorry I missed this thread a couple of days ago. I am working on a first-pass 
of this and hope to have something soon. So far I've mostly focused on getting 
OpenVZ and the HP LH SAN driver working for online extend. I've had trouble 
with libvirt+kvm+lvm so I'd love some help there if you have ideas about how to 
get them working. For example, in a devstack VM the only way I can get the 
iSCSI target to show the new size (after an lvextend) is to delete and recreate 
the target, something jgriffiths said he doesn't want to support ;-). I also 
haven't dived into any of those other limits you mentioned (nfs_used_ratio, 
etc.). Feel free to ping me on IRC (pdmars).

Paul
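
For the libvirt+kvm+lvm path, the rough sequence is an lvextend on the backend, a refresh of the iSCSI target so it advertises the new size (the sticking point above), then a virsh blockresize against the running guest. A dry-run sketch: run() only echoes, and the volume/domain names and the target-refresh step are hypothetical:

```shell
#!/bin/sh
# Dry-run: print the commands instead of executing them.
run() { echo "+ $*"; }

VG=cinder-volumes
LV=volume-1234          # hypothetical cinder volume
DOMAIN=instance-0001    # hypothetical nova guest
DEV=vdb                 # the volume's target device in the guest
NEW_SIZE=2G

# 1) grow the backing LV on the storage node
run lvextend -L "$NEW_SIZE" "/dev/$VG/$LV"

# 2) make the iSCSI target advertise the new size without
#    deleting/recreating it (the hard part discussed above)
run tgt-admin --update ALL   # hypothetical refresh step

# 3) tell qemu, via libvirt, to grow the block device online
run virsh blockresize "$DOMAIN" "$DEV" "$NEW_SIZE"
```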


On Mar 3, 2014, at 8:50 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 @john.griffith. Thanks for your information.
  
 I have read the BP you mentioned ([1]) and have some rough thoughts about it.
  
 As far as I know, the corresponding online-extend command for libvirt is 
 "blockresize", and for Qemu, the implementation differs among disk formats.
  
 For the regular qcow2/raw disk file, qemu will take charge of the 
 drain_all_io and truncate_disk actions, but for raw block device, qemu will 
 only check if the *Actual* size of the device is larger than current size.
  
 I think the former needs more consideration, because the extend work is done 
 by libvirt; Nova may need to do this first and then notify Cinder. But if we 
 take the allocation limits of different cinder backend drivers (such as quota, 
 nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow will be 
 more complicated.
  
 This scenario is not covered by Item 3 of the BP ([1]), as it cannot simply 
 "just work" or be notified by the compute node/libvirt after the volume 
 is extended.
  
 These regular qcow2/raw disk files are normally stored in file-system-based 
 storage; maybe the Manila project is more appropriate for this scenario?
  
  
 Thanks.
  
  
 [1]: 
 https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
  
 --
 zhangleiqiang
  
 Best Regards
  
 From: John Griffith [mailto:john.griff...@solidfire.com] 
 Sent: Tuesday, March 04, 2014 1:05 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Luohao (brian)
 Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the 
 online-extend feature to cinder ?
  
  
  
 
 On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang zhangleiqi...@huawei.com 
 wrote:
 Hi, stackers:
 
 Libvirt/qemu have supported online-extend for multiple disk formats, 
 including qcow2, sparse, etc. But Cinder only support offline-extend volumes 
 currently.
 
 Offline-extend volume will force the instance to be shutoff or the volume 
 to be detached. I think it will be useful if we introduce the online-extend 
 feature to cinder, especially for the file system based driver, e.g. nfs, 
 glusterfs, etc.
 
 Is there any other suggestions?
 
 Thanks.
 
 
 --
 zhangleiqiang
 
 Best Regards
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 Hi Zhangleiqiang,
  
 So yes, there's a rough BP for this here: [1], and some of the folks from the 
 Trove team (pdmars on IRC) have actually started to dive into this.  Last I 
 checked with him there were some sticking points on the Nova side but we 
 should synch up with Paul, it's been a couple weeks since I've last caught up 
 with him.
  
 Thanks,
 John
 [1]: 
 https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Nikola Đipanov
Hi folks,

This did not make it in fully.  Outstanding patches are:

https://review.openstack.org/#/c/71064/
https://review.openstack.org/#/c/71065/
https://review.openstack.org/#/c/71067/
https://review.openstack.org/#/c/71479/
https://review.openstack.org/#/c/72341/
https://review.openstack.org/#/c/72346/

Why accept it?

* It's low-risk but needed refactoring of code that has been
a source of occasional bugs.
* It is very low risk internal refactoring that uses code that has been
in tree for some time now (BDM objects).
* It has seen its fair share of reviews

In addition I'd like to ask for the following patch that is based on the
above also be considered:

https://review.openstack.org/#/c/72797/

It is part of periodic-tasks-to-db-slave, a very useful BP, and was
blocked waiting for my work to land.

Thanks for consideration.

Regards,

Nikola

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Rick Jones

On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:


 Hello,

 Recently, I found a serious issue about network-nodes startup time,
neutron-rootwrap eats a lot of cpu cycles, much more than the processes
it's wrapping itself.

 On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


I've not been looking at rootwrap, but have been looking at sudo and ip. 
(Using some scripts which create fake routers so I could look without 
any of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 
versions of each at least will enumerate all the interfaces on the 
system, even though they don't need to.


There was already an upstream change to 'ip' that eliminates the 
unnecessary enumeration.  In the last few weeks an enhancement went into 
the upstream sudo that allows one to configure sudo to not do the same 
thing.   Down in the low(ish) three figures of interfaces it may not be 
a Big Deal (tm) but as one starts to go beyond that...


commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
Author: Stephen Hemminger step...@networkplumber.org
Date:   Thu Mar 28 15:17:47 2013 -0700

ip: remove unnecessary ll_init_map

Don't call ll_init_map on modify operations
Saves significant overhead with 1000's of devices.

http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html

Whether your environment already has the 'ip' change I don't know, but 
odds are pretty good it doesn't have the sudo enhancement.



That's the time since you reboot a network node, until all namespaces
and services are restored.


So, that includes the time for the system to go down and reboot, not 
just the time it takes to rebuild once rebuilding starts?



If you see appendix 1, this extra 14min overhead, matches with the
fact that rootwrap needs 0.3s to start, and launch a system command
(once filtered).

 14minutes =  840 s.
 (840s. / 192 resources)/0.3s ~= 15 operations /
resource(qdhcp+qrouter) (iptables, ovs port creation  tagging, starting
child processes, etc..)

The overhead comes from python startup time + rootwrap loading.


How much of the time is python startup time?  I assume that would be all 
the "find this lib, find that lib" stuff one sees in a system call 
trace?  I saw a boatload of that at one point but didn't quite feel like 
wading into that at the time.



I suppose that rootwrap was designed for lower amount of system
calls (nova?).


And/or a smaller environment perhaps.


And, I understand what rootwrap provides, a level of filtering that
sudo cannot offer. But it raises some question:

1) It's actually someone using rootwrap in production?

2) What alternatives can we think about to improve this situation.

0) already being done: coalescing system calls. But I'm unsure
that's enough. (if we coalesce 15 calls to 3 on this system we get:
192*3*0.3/60 ~=3 minutes overhead on a 10min operation).


It may not be sufficient, but it is (IMO) certainly necessary.  It will 
make any work that minimizes or eliminates the overhead of rootwrap look 
that much better.



a) Rewriting rules into sudo (to the extent that it's possible), and
live with that.
b) How secure is neutron about command injection to that point? How
much is user input filtered on the API calls?
c) Even if b is ok , I suppose that if the DB gets compromised,
that could lead to command injection.

d) Re-writing rootwrap into C (it's 600 python LOCs now).

e) Doing the command filtering at neutron-side, as a library and
live with sudo with simple filtering. (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to set up 192 networks/basic tenant
structures; I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)


Certainly going back and forth creating short-lived processes is at 
least anti-social and perhaps ever so slightly upsetting to the process 
scheduler.  Particularly at scale.  The problem, though, is that the 
Linux networking folks have been somewhat reticent about creating 
libraries (at least any that they would end up supporting) because they 
are concerned it would lock in interfaces and reduce their freedom of 
movement.


happy benchmarking,

rick jones
the fastest procedure call is the one you never make



Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation on
this machine

real    0m0.000s
user    0m0.000s
sys     0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real    0m0.032s
user    0m0.010s
sys     0m0.019s

Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Qin Zhao
Hi Joe,
If we assume the user is willing to create a new instance, the workflow you
describe is exactly correct. However, what I am assuming is that the user
is NOT willing to create a new instance. If Nova can revert the existing
instance, instead of creating a new one, that will become the alternative
path for those users who are not allowed to create a new instance.
Both paths lead to the target. I don't think we can assume all people
should walk through path one and none through path two. Maybe
creating a new instance or adjusting the quota is very easy from your point
of view. However, real use cases are often limited by business process. So I
think we may need to consider that some users cannot, or are not allowed to,
create a new instance under specific circumstances.


On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao chaoc...@gmail.com wrote:
  Hi Joe, my meaning is that cloud users may not hope to create new
 instances
  or new images, because those actions may require additional approval and
  additional charging. Or, due to instance/image quota limits, they can
 not do
  that. Anyway, from user's perspective, saving and reverting the existing
  instance will be preferred sometimes. Creating a new instance will be
  another story.
 

 Are you saying some users may not be able to create an instance at
 all? If so why not just control that via quotas.

 Assuming the user has the rights and quota to create one
 instance and one snapshot, your proposed idea is only slightly
 different than the current workflow.

 Currently one would:
 1) Create instance
 2) Snapshot instance
 3) Use instance / break instance
 4) delete instance
 5) boot new instance from snapshot
 6) goto step 3

 From what I gather you are saying that instead of 4/5 you want the
 user to be able to just reboot the instance. I don't think such a
 subtle change in behavior is worth a whole new API extension.
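The difference between the two workflows can be sketched with a toy model. `FakeCloud` and its methods are purely hypothetical stand-ins, not the Nova API; the point is only that the proposed revert collapses steps 4-5 into one in-place operation while keeping the same instance id:

```python
# Toy model of the two workflows discussed above (all names hypothetical).
class FakeCloud(object):
    def __init__(self):
        self.next_id = 0
        self.instances = {}   # instance id -> disk state
        self.snapshots = {}   # snapshot id -> captured disk state

    def boot(self, image_state):
        self.next_id += 1
        self.instances[self.next_id] = image_state
        return self.next_id

    def snapshot(self, instance_id):
        self.next_id += 1
        self.snapshots[self.next_id] = self.instances[instance_id]
        return self.next_id

    def delete(self, instance_id):
        del self.instances[instance_id]

    def revert(self, instance_id, snapshot_id):
        # proposed behavior: restore in place, same instance id survives
        self.instances[instance_id] = self.snapshots[snapshot_id]

cloud = FakeCloud()

# current workflow: snapshot, break, delete, boot a new instance
vm = cloud.boot('clean')
snap = cloud.snapshot(vm)
cloud.instances[vm] = 'broken'
cloud.delete(vm)
vm2 = cloud.boot(cloud.snapshots[snap])   # a NEW instance id

# proposed workflow: snapshot, break, revert in place
vm3 = cloud.boot('clean')
snap3 = cloud.snapshot(vm3)
cloud.instances[vm3] = 'broken'
cloud.revert(vm3, snap3)                  # SAME instance id
```

Both paths end with a clean disk; the only user-visible difference is whether the instance identity (and quota slot) is preserved.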

 
  On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com wrote:
   I think the current snapshot implementation can be a solution
 sometimes,
   but
   it is NOT exact same as user's expectation. For example, a new
 blueprint
   is
   created last week,
   https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
   which
   seems a little similar with this discussion. I feel the user is
   requesting
   Nova to create in-place snapshot (not a new image), in order to revert
   the
   instance to a certain state. This capability should be very useful
 when
   testing new software or system settings. It seems a short-term
 temporary
   snapshot associated with a running instance for Nova. Creating a new
   instance is not that convenient, and may be not feasible for the user,
   especially if he or she is using public cloud.
  
 
  Why isn't it easy to create a new instance from a snapshot?
 
  
   On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
   divakar.padiyar-nanda...@hp.com wrote:
  
Why reboot an instance? What is wrong with deleting it and
 create a
new one?
  
   You generally use non-persistent disk mode when you are testing new
   software or experimenting with settings.   If something goes wrong
 just
   reboot and you are back to clean state and start over again.I
 feel
   it's
   convenient to handle this with just a reboot rather than recreating
 the
   instance.
  
   Thanks,
   Divakar
  
   -Original Message-
   From: Joe Gordon [mailto:joe.gord...@gmail.com]
   Sent: Tuesday, March 04, 2014 10:41 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [nova][cinder] non-persistent
   storage(after
   stopping VM, data will be rollback automatically), do you think we
   shoud
   introduce this feature?
   Importance: High
  
   On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
   zhangleiqi...@huawei.com
   wrote:
   
This sounds like ephemeral storage plus snapshots.  You build a
 base
image, snapshot it then boot from the snapshot.
   
   
Non-persistent storage/disk is useful for sandbox-like environment,
and
this feature has already exists in VMWare ESX from version 4.1. The
implementation of ESX is the same as what you said, boot from
snapshot of
the disk/volume, but it will also *automatically* delete the
transient
snapshot after the instance reboots or shutdowns. I think the whole
procedure may be controlled by OpenStack other than user's manual
operations.
  
   Why reboot an instance? What is wrong with deleting it and create a
 new
   one?
  
   
As far as I know, libvirt already defines the corresponding
transient
element in domain xml for non-persistent disk ( [1] ), but it
 cannot
specify
the location of the transient snapshot. Although qemu-kvm has
provided
support 

Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Dan Smith
 Why accept it?
 
 * It's low-risk but needed refactoring that will improve code that has
 been a source of occasional bugs.
 * It is very low-risk internal refactoring that uses code that has been
 in tree for some time now (BDM objects).
 * It has seen its fair share of reviews

Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
think it'd be merged by now.

The bulk of this is done, the bits remaining have seen a *lot* of real
review. I'm happy to commit to reviewing this, since I've already done
so many times :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Andrew Laski

On 03/05/14 at 09:05am, Dan Smith wrote:

Why accept it?

* It's low-risk but needed refactoring that will improve code that has
been a source of occasional bugs.
* It is very low-risk internal refactoring that uses code that has been
in tree for some time now (BDM objects).
* It has seen its fair share of reviews


Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
think it'd be merged by now.

The bulk of this is done, the bits remaining have seen a *lot* of real
review. I'm happy to commit to reviewing this, since I've already done
so many times :)


I will also commit to reviewing this as I have reviewed much of it 
already.




--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Andrew Laski

On 03/05/14 at 07:37am, Tracy Jones wrote:

Hi - Please consider the image cache aging BP for FFE 
(https://review.openstack.org/#/c/56416/)

This is the last of several patches (already merged) that implement image cache 
cleanup for the vmware driver.  This patch solves a significant customer pain 
point as it removes unused images from their datastore.  Without this patch 
their datastore can become unnecessarily full.  In addition to the customer 
benefit from this patch it

1.  has a turn-off switch
2.  is fully contained within the vmware driver
3.  has gone through functional testing with our internal QA team

ndipanov has been good enough to say he will review the patch, so we would ask 
for one additional core sponsor for this FFE.


Looking over the blueprint and outstanding review it seems that this is 
a fairly low risk change, so I am willing to sponsor this bp as well.




Thanks

Tracy


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao chaoc...@gmail.com wrote:
 Hi Joe,
 If we assume the user is willing to create a new instance, the workflow you
 describe is exactly correct. However, what I am assuming is that the user
 is NOT willing to create a new instance. If Nova can revert the existing
 instance, instead of creating a new one, that will become the alternative
 path for those users who are not allowed to create a new instance.
 Both paths lead to the target. I don't think we can assume all people
 should walk through path one and none through path two. Maybe
 creating a new instance or adjusting the quota is very easy from your point
 of view. However, real use cases are often limited by business process. So I
 think we may need to consider that some users cannot, or are not allowed to,
 create a new instance under specific circumstances.


What sort of circumstances would prevent someone from deleting and
recreating an instance?


 On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao chaoc...@gmail.com wrote:
  Hi Joe, my meaning is that cloud users may not hope to create new
  instances
  or new images, because those actions may require additional approval and
  additional charging. Or, due to instance/image quota limits, they can
  not do
  that. Anyway, from user's perspective, saving and reverting the existing
  instance will be preferred sometimes. Creating a new instance will be
  another story.
 

 Are you saying some users may not be able to create an instance at
 all? If so why not just control that via quotas.

 Assuming the user has the rights and quota to create one
 instance and one snapshot, your proposed idea is only slightly
 different than the current workflow.

 Currently one would:
 1) Create instance
 2) Snapshot instance
 3) Use instance / break instance
 4) delete instance
 5) boot new instance from snapshot
 6) goto step 3

 From what I gather you are saying that instead of 4/5 you want the
 user to be able to just reboot the instance. I don't think such a
 subtle change in behavior is worth a whole new API extension.

 
  On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon joe.gord...@gmail.com
  wrote:
 
  On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com wrote:
   I think the current snapshot implementation can be a solution
   sometimes,
   but
   it is NOT exact same as user's expectation. For example, a new
   blueprint
   is
   created last week,
   https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
   which
   seems a little similar with this discussion. I feel the user is
   requesting
   Nova to create in-place snapshot (not a new image), in order to
   revert
   the
   instance to a certain state. This capability should be very useful
   when
   testing new software or system settings. It seems a short-term
   temporary
   snapshot associated with a running instance for Nova. Creating a new
   instance is not that convenient, and may be not feasible for the
   user,
   especially if he or she is using public cloud.
  
 
  Why isn't it easy to create a new instance from a snapshot?
 
  
   On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
   divakar.padiyar-nanda...@hp.com wrote:
  
Why reboot an instance? What is wrong with deleting it and
create a
new one?
  
   You generally use non-persistent disk mode when you are testing new
   software or experimenting with settings.   If something goes wrong
   just
   reboot and you are back to clean state and start over again.I
   feel
   it's
   convenient to handle this with just a reboot rather than recreating
   the
   instance.
  
   Thanks,
   Divakar
  
   -Original Message-
   From: Joe Gordon [mailto:joe.gord...@gmail.com]
   Sent: Tuesday, March 04, 2014 10:41 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [nova][cinder] non-persistent
   storage(after
   stopping VM, data will be rollback automatically), do you think we
   shoud
   introduce this feature?
   Importance: High
  
   On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
   zhangleiqi...@huawei.com
   wrote:
   
This sounds like ephemeral storage plus snapshots.  You build a
base
image, snapshot it then boot from the snapshot.
   
   
Non-persistent storage/disk is useful for sandbox-like
environment,
and
this feature has already exists in VMWare ESX from version 4.1.
The
implementation of ESX is the same as what you said, boot from
snapshot of
the disk/volume, but it will also *automatically* delete the
transient
snapshot after the instance reboots or shutdowns. I think the
whole
procedure may be controlled by OpenStack other than user's manual
operations.
  
   Why reboot an instance? What is wrong with deleting it and create a
   new
   one?
  
   
As far as I know, libvirt already 

Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Victor Sergeyev
Hello All.

We plan to have common database code in the oslo.db library, so we decided to
let end applications cope with engines, not oslo.db. For example, see the
work with the slave engine in Nova [1]. There is also a patch to oslo with
more details - [2].

Also, Darren, please tell us a bit about your use case.

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L95
[2] https://review.openstack.org/#/c/68684/


On Wed, Mar 5, 2014 at 6:35 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko 
 alexei.kornie...@gmail.com wrote:

  Hello Darren,

 This option is removed since oslo.db will no longer manage engine objects
 on its own. Since it will not store engines, it cannot handle query
 dispatching.

 Every project that wants to use slave_connection will have to implement
 this logic (creation of the slave engine and query dispatching) on its own.


 If we are going to have multiple projects using that feature, we will have
 to restore it to oslo.db. Just because the primary API won't manage global
 objects doesn't mean we can't have a secondary API that does.

 Doug




 Regards,


 On 03/05/2014 05:18 PM, Darren Birkett wrote:

 Hi,

  I'm wondering why in this commit:


 https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

  ...the slave_connection option was removed.  It seems like a useful
 option to have, even if a lot of projects weren't yet using it.

  Darren










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 8:51 AM, Miguel Angel Ajo Pelayo
mangel...@redhat.com wrote:


 - Original Message -
 Miguel Angel Ajo wrote:
  [...]
 The overhead comes from python startup time + rootwrap loading.
 
 I suppose that rootwrap was designed for lower amount of system calls
  (nova?).

 Yes, it was not really designed to escalate rights on hundreds of
 separate shell commands in a row.

 And, I understand what rootwrap provides, a level of filtering that
  sudo cannot offer. But it raises some question:
 
   1) Is anyone actually using rootwrap in production?
 
  2) What alternatives can we think about to improve this situation.
 
 0) already being done: coalescing system calls. But I'm unsure that's
  enough. (if we coalesce 15 calls to 3 on this system we get:
  192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
 
 a) Rewriting rules into sudo (to the extent that it's possible), and
  live with that.

 We used to use sudo and a sudoers file. The rules were poorly written,
 and there is just so much you can check in a sudoers file. But the main
 issue was that the sudoers file lived in packaging
 (distribution-dependent), and was not maintained in sync with the code.
 Rootwrap lets us maintain the rules (filters) in sync with the code
 calling them.

 Yes, from a security & maintenance standpoint, it was a smart decision. I'm
 thinking of automatically converting rootwrap rules to sudoers, but that's
 very limited, especially for the "ip netns exec ..." case.


 To work around perf issues, you still have the option of running with a
 wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
 running with badly-written or badly-maintained sudo rules anyway.

 That's what I used for my benchmark. I just wonder how feasible command
 injection into neutron is, via the API or the DB.


  [...]
 d) Re-writing rootwrap into C (it's 600 python LOCs now).

 (d2) would be to explore running rootwrap under Pypy. Testing that is on
 my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
 that option.

 I tried it on my system right now; it takes more time to boot up. Pypy's JIT
 is awesome at runtime, but it seems that start-up time is slower.

That is the wrong pypy! There are some pypy core devs lurking on this
ML, so they may correct some of these details, but:

It turns out python has a really big startup overhead:

jogo@lappy:~$ time echo true
true

real    0m0.000s
user    0m0.000s
sys     0m0.000s

jogo@lappy:~$ time python -c 'print True'
True

real    0m0.022s
user    0m0.013s
sys     0m0.009s

And I am not surprised pypy isn't much better here; pypy works better
with longer-running programs.

But pypy isn't just one thing; it's two parts:

In common parlance, PyPy has been used to mean two things. The first
is the RPython translation toolchain, which is a framework for
generating dynamic programming language implementations. And the
second is one particular implementation that is so generated - an
implementation of the Python programming language written in Python
itself. It is designed to be flexible and easy to experiment with.

So the idea is to rewrite rootwrap in RPython and use the RPython
translation toolchain to convert rootwrap into C. That way we keep the
source code in a language more friendly to OpenStack devs, and we
hopefully avoid the overhead associated with starting python up.
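As a rough illustration, an RPython-translatable rootwrap would be organized around the `target()` hook the translation toolchain looks for. Everything below is a sketch with hypothetical names; the real filter loading, matching, and exec logic would live inside `entry_point()`:

```python
# Minimal RPython-style sketch of a rootwrap-like entry point.
def entry_point(argv):
    if len(argv) < 2:
        print('no command specified')
        return 1
    # ... here: load filter rules, match argv[1:], os.execv() the command ...
    return 0

def target(driver, args):
    # Hook the RPython translation toolchain expects: it returns the
    # entry point to compile into a native binary (no interpreter start-up).
    return entry_point, None

if __name__ == '__main__':
    # The same file still runs under plain CPython for development/testing.
    import sys
    entry_point(sys.argv)
```

RPython is a restricted subset of Python (static typing inferred at translation time), so the existing rootwrap code would likely need some rework to translate, but the source would remain readable Python.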


 I also played a little with shedskin (a py-to-C++ converter), but it
 doesn't support all the python libraries, dynamic typing, or parameter
 unpacking.

 That could be another approach, writing a simplified rootwrap in python, and
 have it automatically converted to C++.

 f) haleyb on IRC is pointing me to another approach Carl Baldwin is
 pushing https://review.openstack.org/#/c/67490/ towards command execution
 coalescing.
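One concrete form of coalescing is iproute2's own batch mode: `ip -batch -` reads many operations from stdin, so a whole series of per-command escalations collapses into a single privileged process. A small sketch of the idea (the helper name is made up; the actual approach in the review above may differ):

```python
# Coalesce many "ip ..." invocations into one `ip -batch -` input stream.
def to_ip_batch(commands):
    """Turn [['ip','link','set','tap0','up'], ...] into `ip -batch` input."""
    lines = []
    for cmd in commands:
        if cmd[:1] != ['ip']:
            raise ValueError('only ip commands can be batched: %r' % (cmd,))
        lines.append(' '.join(cmd[1:]))   # `ip -batch` lines omit the "ip"
    return '\n'.join(lines) + '\n'

batch = to_ip_batch([
    ['ip', 'link', 'add', 'veth0', 'type', 'veth', 'peer', 'name', 'veth1'],
    ['ip', 'link', 'set', 'veth0', 'up'],
])
# A single escalated process would then run:  ip -batch -   (batch on stdin)
print(batch)
```

With N operations per router/namespace, this turns N rootwrap start-ups into one.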



 e) Doing the command filtering at neutron-side, as a library and live
  with sudo with simple filtering. (we kill the python/rootwrap startup
  overhead).

 That's as safe as running with a wildcard sudoers file (neutron user can
 escalate to root). Which may just be acceptable in /some/ scenarios.

 I think it can be safer, (from the command injection point of view).
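The in-process filtering idea (option "e" above) amounts to checking an allow-list before handing the command to plain sudo, so the filter runs inside the long-lived agent instead of a freshly started rootwrap. A minimal sketch with hypothetical rules, not neutron's actual filters:

```python
# Rootwrap-style prefix filtering done in-process, before calling sudo.
ALLOWED_PREFIXES = [
    ['ip', 'netns', 'exec'],
    ['ovs-vsctl'],
    ['iptables-save'],
]

def is_allowed(cmd):
    """True if cmd (an argv list) starts with an allowed command prefix."""
    return any(cmd[:len(prefix)] == prefix for prefix in ALLOWED_PREFIXES)

# e.g. guard the escalation point:
#   if is_allowed(cmd):
#       subprocess.call(['sudo'] + cmd)
```

The trade-off stands as stated above: the check protects against injected commands reaching sudo, but since the agent itself can call sudo unrestricted, a compromised agent process is still root-equivalent.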


 --
 Thierry Carrez (ttx)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-05 Thread Isaku Yamahata
Since I received some mails privately, I'd like to start a weekly IRC meeting.
The first meeting will be

  Tuesdays 23:00UTC from March 11, 2014
  #openstack-meeting
  https://wiki.openstack.org/wiki/Meetings/ServiceVM
  If you have topics to discuss, please add to the page.

Sorry if the time is inconvenient for you. The schedule will also be
discussed, and the meeting time may change starting from the 2nd meeting.

Thanks,

On Mon, Feb 10, 2014 at 03:11:43PM +0900,
Isaku Yamahata isaku.yamah...@gmail.com wrote:

 As the first patch for the service vm framework is ready for review[1][2],
 it would be a good idea to have an IRC meeting.
 Anyone interested in it? How about the schedule?
 
 Schedule candidates
 Monday  22:00UTC-, 23:00UTC-
 Tuesday 22:00UTC-, 23:00UTC-
 (Although the slot of the advanced services meeting[3] could be reused,
  it doesn't work for me because my timezone is UTC+9.)
 
 topics for 
 - discussion/review on the patch
 - next steps
 - other open issues?
 
 [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
 [2] https://review.openstack.org/#/c/56892/
 [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
 -- 
 Isaku Yamahata isaku.yamah...@gmail.com

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-03-05 Thread Samuel Bercovici
Hi,

In 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?usp=sharing
 referenced by the Wiki, I have added a section that addresses the items raised
on the last irc meeting.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Wednesday, February 26, 2014 7:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici; Eugene Nikanorov (enikano...@mirantis.com); Evgeny 
Fedoruk; Avishay Balderman
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be pure logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip <-> pool relationship also needs to 
become any-to-any.
Eugene has rightfully pointed out that the current state management will not 
handle such a relationship well.
To me this means that the state management is broken, not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of 
implementation details.  One of the concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Roman Podoliaka
Hi all,

So yeah, we could restore the option and put creation of a slave
engine instance into the EngineFacade class, but I don't think we want this.

The only reason why slave connections aren't implemented e.g. in
SQLAlchemy is that SQLAlchemy, as a library, can't decide for you how
those engines should be used: do you have an ACTIVE-ACTIVE setup or
ACTIVE-PASSIVE, to which database must reads/writes go, and so on. The
same is true for oslo.db.

Nova is the only project that uses the slave_connection option, and it was
kind of broken: the nova bare metal driver uses a separate database and
there was no way to use a slave db connection for it.

So due to the lack of consistency in the use of slave connections, IMO, this
should be left up to the application to decide. And we
provide the EngineFacade helper already. So I'd just say: create an
EngineFacade instance for a slave connection explicitly, if you want
it to be used like it is used in Nova right now.
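The application-side dispatching described above can be sketched in a few lines. The engine values here are plain strings standing in for real engines/EngineFacade instances; the class name and method are illustrative, not an oslo.db API:

```python
# Application-side read/write dispatching between a master and a slave engine.
class ConnectionDispatcher(object):
    def __init__(self, master, slave=None):
        self.master = master
        self.slave = slave or master   # no slave configured: fall back to master

    def engine_for(self, operation):
        """Writes must always go to the master; reads may use the slave."""
        return self.master if operation == 'write' else self.slave

d = ConnectionDispatcher(master='mysql://master/nova',
                         slave='mysql://slave/nova')
```

The policy (which reads are safe to serve from a possibly-lagging replica) is exactly the ACTIVE-ACTIVE vs ACTIVE-PASSIVE decision that only the application can make, which is the argument for keeping it out of oslo.db.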

Thanks,
Roman

On Wed, Mar 5, 2014 at 8:35 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:



 On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko
 alexei.kornie...@gmail.com wrote:

 Hello Darren,

 This option is removed since oslo.db will no longer manage engine objects
 on it's own. Since it will not store engines it cannot handle query
 dispatching.

 Every project that wan't to use slave_connection will have to implement
 this logic (creation of the slave engine and query dispatching) on it's own.


 If we are going to have multiple projects using that feature, we will have
 to restore it to oslo.db. Just because the primary API won't manage global
 objects doesn't mean we can't have a secondary API that does.

 Doug




 Regards,


 On 03/05/2014 05:18 PM, Darren Birkett wrote:

 Hi,

 I'm wondering why in this commit:


 https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

 ...the slave_connection option was removed.  It seems like a useful option
 to have, even if a lot of projects weren't yet using it.

 Darren










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Feature Freeze

2014-03-05 Thread Devananda van der Veen
All,

Feature freeze for Ironic is now in effect, and the icehouse-3
milestone-proposed branch has been created:

http://git.openstack.org/cgit/openstack/ironic/log/?h=milestone-proposed

I have bumped to the next cycle any blueprints which were targeted to
Icehouse but not yet completed, and temporarily blocked code reviews
related to new features. I will unblock those reviews when Juno opens. The
following blueprints were affected:

https://blueprints.launchpad.net/ironic/+spec/serial-console-access
https://blueprints.launchpad.net/ironic/+spec/migration-from-nova
https://blueprints.launchpad.net/ironic/+spec/windows-disk-image-support
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-power-driver
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-virtualmedia-driver

Icehouse release candidates will be tagged near the end of March [*]. Until
then, I would like everyone to focus on CI by means of integration with
TripleO and devstack, and fixing bugs and improving stability. We should
not change either the REST or Driver APIs unless absolutely necessary. I am
targeting bugs which I believe are necessary for the Icehouse release to
the RC1 milestone; that list can be seen here:

https://launchpad.net/ironic/+milestone/icehouse-rc1

If you believe a bug should be targeted to icehouse, please raise it with a
member of the core team in #openstack-ironic on irc.freenode.net. Code
reviews for non-RC-targeted bugs may be blocked, or the bug should be
targeted to the RC so we can track ongoing work.


Thanks!
Devananda

[*] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Mar 6 1800 UTC

2014-03-05 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_March.2C_6

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140306T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Vishvananda Ishaya

On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo majop...@redhat.com wrote:

 
Hello,
 
Recently, I found a serious issue about network-nodes startup time,
 neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
 wrapping itself.
 
On a database with 1 public network, 192 private networks, 192 routers, 
 and 192 nano VMs, with OVS plugin:
 
 
 Network node setup time (rootwrap): 24 minutes
 Network node setup time (sudo): 10 minutes
 
 
   That's the time since you reboot a network node, until all namespaces
 and services are restored.
 
 
   If you see appendix 1, this extra 14min overhead, matches with the fact 
 that rootwrap needs 0.3s to start, and launch a system command (once 
 filtered).
 
 14 minutes = 840 s.
 (840 s / 192 resources) / 0.3 s ~= 15 operations per resource (qdhcp+qrouter) 
 (iptables, ovs port creation & tagging, starting child processes, etc.)
 
   The overhead comes from python startup time + rootwrap loading.
 
   I suppose that rootwrap was designed for lower amount of system calls 
 (nova?).
 
   And, I understand what rootwrap provides, a level of filtering that sudo 
 cannot offer. But it raises some question:
 
  1) Is anyone actually using rootwrap in production?
 
 2) What alternatives can we think about to improve this situation.
 
   0) already being done: coalescing system calls. But I'm unsure that's 
 enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
 minutes overhead on a 10min operation).
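
The arithmetic above can be sanity-checked with a short script (a sketch; the 0.3 s per-invocation cost, the 192-resource count, and the 24- vs. 10-minute setup times are the figures measured in this thread):

```python
# Rough sanity check of the startup-overhead estimates quoted above.
ROOTWRAP_STARTUP_S = 0.3      # measured: `time neutron-rootwrap --help`
RESOURCES = 192               # qdhcp + qrouter namespaces in the test DB
OVERHEAD_S = (24 - 10) * 60   # rootwrap vs. plain-sudo setup time

calls_per_resource = OVERHEAD_S / RESOURCES / ROOTWRAP_STARTUP_S
print(round(calls_per_resource))  # ~15 rootwrap calls per resource

# If coalescing cut that to 3 calls per resource:
coalesced_overhead_min = RESOURCES * 3 * ROOTWRAP_STARTUP_S / 60
print(round(coalesced_overhead_min))  # ~3 minutes of residual overhead
```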
 
   a) Rewriting rules into sudo (to the extent that it's possible), and live 
 with that.
   b) How secure is neutron about command injection to that point? How much is 
 user input filtered on the API calls?
   c) Even if b is ok , I suppose that if the DB gets compromised, that 
 could lead to command injection.
 
   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

 
   e) Doing the command filtering at neutron-side, as a library and live with 
 sudo with simple filtering. (we kill the python/rootwrap startup overhead).
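
Option (e) — filtering on the neutron side as a library — could be sketched roughly like this; the rule patterns and function names here are hypothetical illustrations, not rootwrap's actual filter syntax:

```python
import re

# Minimal in-process command filter in the spirit of option (e): the
# allowlist lives in the calling service, so matching a command costs a
# function call rather than a fresh Python interpreter per invocation.
ALLOWED = [
    re.compile(r"^ip netns exec qdhcp-[\w-]+ .+$"),
    re.compile(r"^ovs-vsctl (add|del)-port \S+ \S+$"),
    re.compile(r"^iptables -[ADIL] .+$"),
]

def is_allowed(cmd_line: str) -> bool:
    """Return True if the command matches an allowlisted pattern."""
    return any(rule.match(cmd_line) for rule in ALLOWED)

print(is_allowed("ovs-vsctl add-port br-int tap123"))  # True
print(is_allowed("rm -rf /"))                          # False
```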
 
 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
 structures, I wonder if that time could be reduced by conversion
 of system process calls into system library calls (I know we don't have
 libraries for iproute, iptables?, and many other things... but it's a
 problem that's probably worth looking at.)
 
 Best,
 Miguel Ángel Ajo.
 
 
 Appendix:
 
 [1] Analyzing overhead:
 
 [root@rhos4-neutron2 ~]# echo 'int main() { return 0; }' > test.c
 [root@rhos4-neutron2 ~]# gcc test.c -o test
 [root@rhos4-neutron2 ~]# time ./test  # to time process invocation on this 
 machine
 
 real0m0.000s
 user0m0.000s
 sys0m0.000s
 
 
 [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
 
 real0m0.032s
 user0m0.010s
 sys0m0.019s
 
 
 [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
 
 real0m0.057s
 user0m0.016s
 sys0m0.011s
 
 [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
 /usr/bin/neutron-rootwrap: No command specified
 
 real0m0.309s
 user0m0.128s
 sys0m0.037s
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - community interest for Juno and beyond - meeting notes and next steps

2014-03-05 Thread Ronen Kat
Thank you to the participants who joined the kick-off meeting for work 
in the community toward Disaster Recovery for OpenStack.
We captured the meeting notes on the Etherpad - see 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 


Per the consensus in the meeting, we will schedule meetings toward the next 
summit.
Next meeting: March 19 12pm - 1pm ET (phone call-in)
Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Everyone is invited!

Ronen,

- Forwarded by Ronen Kat/Haifa/IBM on 05/03/2014 08:05 PM -

From:   Ronen Kat/Haifa/IBM
To: openstack-dev@lists.openstack.org, 
Date:   04/03/2014 01:16 PM
Subject:Disaster Recovery for OpenStack - call for stakeholders


Hello,

In the Hong-Kong summit, there was a lot of interest around OpenStack 
support for Disaster Recovery including a design summit session, an 
un-conference session and a break-out session.
In addition we set up a Wiki for OpenStack disaster recovery - see 
https://wiki.openstack.org/wiki/DisasterRecovery 
The first step was enabling volume replication in Cinder, which has 
started in the Icehouse development cycle and will continue into Juno.

Toward the Juno summit and development cycle we would like to send out a 
call for disaster recovery stakeholders, looking to:
* Create a list of use-cases and scenarios for disaster recovery with 
OpenStack
* Find interested parties who wish to contribute features and code to 
advance disaster recovery in OpenStack
* Plan needed for discussions at the Juno summit

To coordinate such efforts, I  would like to invite you to a conference 
call on Wednesday March 5 at 12pm ET and work together coordinating 
actions for the Juno summit (an invitation is attached).
We will record minutes of the call at - 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 
(link also available from the disaster recovery wiki page).
If you are unable to join but interested, please register yourself and 
share your thoughts.



Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [OpenStack GSoC] Chenchong Ask for Mentoring on Implement a cross-services scheduler Project

2014-03-05 Thread Chenchong Qin
Hi

Sorry for not cc'ing openstack-dev at first (I hadn't gotten familiar with
OpenStack's GSoC
customs... but it's quite a different flavor compared with my last mentoring
org). I just
sent it to the possible mentors. But it turns out that openstack-dev gives
lots of
benefit. :)

I noticed that Fang is also interested in this idea. It has strengthened
my belief
that it's a great idea/project.

Russell and dims expressed concern that the project as described is far
too large
to implement in one GSoC term. In fact, I hold the same concern,
so I
asked the possible mentors about it at the end of my last mail.

This project appears to have a big name. But when we dig into the details of
the project
description, it seems that the project is about implementing a nova
scheduler that
can take information from storage and network components into consideration
and
can make decisions based on global information. Besides, Sylvain also
mentioned
that it's now in FeatureFreeze period. So, I think maybe we can move this
project
from Gantt section to Nova section (with the consent of original project
proposers),
and further specify the contents of the project to make it an enhancement or
a new
feature/option to nova's current scheduler.

Thanks all your help and Sylvain's reminder on #openstack-meeting!

Regards!

Chenchong


-- Forwarded message --
From: Chenchong Qin qinchench...@gmail.com
Date: Wed, Mar 5, 2014 at 10:28 PM
Subject: [OpenStack GSoC] Chenchong Ask for Mentoring on Implement a
cross-services scheduler Project
To: yud...@cisco.com, dedu...@cisco.com


Hi, Yathi and Debo

I'm a master's student from China with a great interest in the Implement
a cross-services scheduler
project you put in the Gantt section of OpenStack's GSoC 2014 idea list.
I'm taking the liberty of asking
you to be my mentor for this project application.

My name is Chenchong Qin. I'm now in my second year as a master's student of
Computer Science at
University of Chinese Academy of Sciences. My research interests mainly
focus on Computer Network
and Cloud Computing. I participated in GSoC 2013 to develop a rate control
API that is 802.11n features
aware for FreeBSD (project
homepage: https://wiki.freebsd.org/SummerOfCode2013/80211RateControl80211nExtensions).
I've been following closely with OpenStack since last year and
have done some work related to network policy migration. I'm familiar with
C/C++ and Python, and have
also written some small tools and simulation programs in Python.

When I first saw your idea of implementing a cross-services scheduler, I
determined that it's a necessary
and meaningful proposal. I participated in a research project on channel
scheduling in a distributed MIMO
system last year. From that project, I learned that without global
information, any scheduling mechanisms
seemed feeble. I've read the blueprints you wrote and I highly agree with
you that the scheduler should be
able to leverage global information from multiple components like Nova,
Cinder, and Neutron to make the
placement decisions. I'm willing to help with the SolverScheduler blueprint
both during this GSoC project
and after.

And, I also got a question here. According to the project description,
This project will help to build a
cross-services scheduler that can interact with storage and network
services to make decisions. So, our
cross-services scheduler is now just a nova scheduler that can interact
with storage and network components
to make decisions, but not a universal scheduler that can be used by other
components. Did I get that right?

Looking forward to hearing from you.

Thanks and regards!

Chenchong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack GSoC] Chenchong Ask for Mentoring on Implement a cross-services scheduler Project

2014-03-05 Thread Yathiraj Udupi (yudupi)
Hi Chenchong, Fang,

I am glad that you have expressed interest in this project for GSoC.  It is a 
big project, I agree, in terms of its scope. But it is good to start with smaller 
goals.
It will be interesting to see what incremental things can be added to the 
current Nova scheduler to achieve cross-services scheduling.
Solver Scheduler (https://blueprints.launchpad.net/nova/+spec/solver-scheduler) 
 has been pushed to Juno, and that BP has a goal of providing a generic 
framework for expressing scheduling problem as a constraint optimization 
problem, and hence can take different forms of constraints and cost metrics 
including the cross-services aspects.
So it is good to not limit your ideas with respect to Solver Scheduler BP, but 
in general also think of what additional stuff can be added to the current 
Filter Scheduler as well.

For GSoC, I don’t think you should worry about the feature freeze for now.  You 
can propose ideas in this theme for GSoC, and we can eventually get it upstream 
to be merged with Nova/ Gantt.

The cross service scheduling BP for Filter Scheduler enhancements  is here - 
https://blueprints.launchpad.net/nova/+spec/cross-service-filter-scheduling  We 
can probably use this for additional filter scheduler enhancements.

Thanks,
Yathi.




On 3/5/14, 10:33 AM, Chenchong Qin 
qinchench...@gmail.commailto:qinchench...@gmail.com wrote:

Hi

Sorry for not cc openstack-dev at first (haven't got familiar with OpenStack's 
GSoC
custom... but it's quite a different flavor compared with my last mentoring 
org). I just
sent it to the possible mentors. But it turns out that openstack-dev gives lots 
of
benefit. :)

I noticed that Fang also has interests towards this idea. It's strengthened my 
thought
that it's a great idea/project.

Russell and dims showed their concerns that the project described it is far too 
large
to be able to implement in one GSoC term. In fact, I hold the same concern, so I
asked the possible mentors about it at the end of my last mail.

This project appears to have a big name. But when we dig into the details of the 
project
description, it seems that the project is about implementing a nova scheduler 
that
can take information from storage and network components into consideration and
can make decisions based on global information. Besides, Sylvain also mentioned
that it's now in FeatureFreeze period. So, I think maybe we can move this 
project
from Gantt section to Nova section (with the consent of original project 
proposers),
and further specify the contents of the project to make it an enhancement or a 
new
feature/option to nova's current scheduler.

Thanks all your help and Sylvain's reminder on #openstack-meeting!

Regards!

Chenchong


-- Forwarded message --
From: Chenchong Qin qinchench...@gmail.commailto:qinchench...@gmail.com
Date: Wed, Mar 5, 2014 at 10:28 PM
Subject: [OpenStack GSoC] Chenchong Ask for Mentoring on Implement a 
cross-services scheduler Project
To: yud...@cisco.commailto:yud...@cisco.com, 
dedu...@cisco.commailto:dedu...@cisco.com


Hi, Yathi and Debo

I'm a master student from China who got a great interest in the Implement a 
cross-services scheduler
project you put in the Gantt section of OpenStack's GSoC 2014 idea list. I'm 
taking the liberty of asking
you as my mentor for applying this project.

My name is Chenchong Qin. I'm now in my second year as a master student of 
Computer Science at
University of Chinese Academy of Sciences. My research interests mainly focus 
on Computer Network
and Cloud Computing. I participated in GSoC 2013 to develop a rate control API 
that is 802.11n features
aware for FreeBSD (project 
homepage: https://wiki.freebsd.org/SummerOfCode2013/80211RateControl80211nExtensions).
 I've been following closely with OpenStack since last year and
have done some work related to network policy migration. I'm familiar with 
C/C++ and Python, and have
also written some small tools and simulation programs in Python.

When I first saw your idea of implementing a cross-services scheduler, I 
determined that it's a necessary
and meaningful proposal. I participated in a research project on channel 
scheduling in a distributed MIMO
system last year. From that project, I learned that without global information, 
any scheduling mechanisms
seemed feeble. I've read the blueprints you wrote and I highly agree with you 
that the scheduler should be
able to leverage global information from multiple components like Nova, Cinder, 
and Neutron to make the
placement decisions. I'm willing to help with the SolverScheduler blueprint 
both during this GSoC project
and after.

And, I also got a question here. According to the project description, This 
project will help to build a
cross-services scheduler that can interact with storage and network services to 
make decisions. So, our
cross-services scheduler is now just a nova scheduler that can interact with 
storage and network component
to make decisions, but not a 

Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Russell Bryant
On 03/05/2014 12:18 PM, Andrew Laski wrote:
 On 03/05/14 at 09:05am, Dan Smith wrote:
 Why accept it?

 * It's low-risk but needed refactoring that will improve code that has
 been a source of occasional bugs.
 * It is very low risk internal refactoring that uses code that has been
 in tree for some time now (BDM objects).
 * It has seen its fair share of reviews

 Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
 think it'd be merged by now.

 The bulk of this is done, the bits remaining have seen a *lot* of real
 review. I'm happy to commit to reviewing this, since I've already done
 so many times :)
 
 I will also commit to reviewing this as I have reviewed much of it already.

Ok great, consider it approved.  It really needs to get in this week
though, with an absolute hard deadline of this coming Tuesday.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Russell Bryant
On 03/05/2014 12:27 PM, Andrew Laski wrote:
 On 03/05/14 at 07:37am, Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE
 (https://review.openstack.org/#/c/56416/)

 This is the last of several patches (already merged) that implement
 image cache cleanup for the vmware driver.  This patch solves a
 significant customer pain point as it removes unused images from their
 datastore.  Without this patch their datastore can become
 unnecessarily full.  In addition to the customer benefit from this
 patch it

 1.  has a turn off switch
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team

 ndipanov has been good enough to say he will review the patch, so we
 would ask for one additional core sponsor for this FFE.
 
 Looking over the blueprint and outstanding review it seems that this is
 a fairly low risk change, so I am willing to sponsor this bp as well.

Nikola, can you confirm if you're willing to sponsor (review) this?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-05 Thread Russell Bryant
On 03/05/2014 10:34 AM, Gary Kotton wrote:
 Hi,
 Unfortunately we did not get the ISO support approved by the deadline.
 If possible can we please get the FFE.
 
 The feature is completed and has been tested extensively internally. The
 feature is very low risk and has huge value for users. In short, a user
 is able to upload an ISO to glance and then boot from that ISO.
 
 BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
 Code: https://review.openstack.org/#/c/63084/ and 
 https://review.openstack.org/#/c/77965/
 Sponsors: John Garbutt and Nikola Dipanov
 
 One of the things that we are planning on improving in Juno is the way
 that the Vmops code is arranged and organized. We will soon be posting a
 wiki for ideas to be discussed. That will enable us to make additions
 like this a lot simpler in the future. But sadly that is not part of the
 scope at the moment.

John and Nikola, can you confirm your sponsorship of this one?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Tracy Jones
Russell - I also believe that danbp and alaski said they would sponsor.


On Mar 5, 2014, at 10:59 AM, Russell Bryant rbry...@redhat.com wrote:

 On 03/05/2014 12:27 PM, Andrew Laski wrote:
 On 03/05/14 at 07:37am, Tracy Jones wrote:
 Hi - Please consider the image cache aging BP for FFE
 (https://urldefense.proofpoint.com/v1/url?u=https://review.openstack.org/%23/c/56416/k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=fysbO8%2FBLtC%2B0WXqPRtZjP%2BFTxUY74FYnj8tkYiMlD4%3D%0Am=qBzHh8rJVCAgDuyV9OOsUqK5joMcb%2BWA5nBRCaM5mzU%3D%0As=0decb0928a178cc2b07ed80a75ef39bb3501417ff110d005886c77ffae8db98b)
 
 This is the last of several patches (already merged) that implement
 image cache cleanup for the vmware driver.  This patch solves a
 significant customer pain point as it removes unused images from their
 datastore.  Without this patch their datastore can become
 unnecessarily full.  In addition to the customer benefit from this
 patch it
 
 1.  has a turn off switch
 2.  is fully contained within the vmware driver
 3.  has gone through functional testing with our internal QA team
 
 ndipanov has been good enough to say he will review the patch, so we
 would ask for one additional core sponsor for this FFE.
 
 Looking over the blueprint and outstanding review it seems that this is
 a fairly low risk change, so I am willing to sponsor this bp as well.
 
 Nikola, can you confirm if you're willing to sponsor (review) this?
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-devk=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=fysbO8%2FBLtC%2B0WXqPRtZjP%2BFTxUY74FYnj8tkYiMlD4%3D%0Am=qBzHh8rJVCAgDuyV9OOsUqK5joMcb%2BWA5nBRCaM5mzU%3D%0As=a73a22c3a013d36d1b517efc9d754819b2339667ae10ff9afc10f7738ae5c3bd

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Status of multi-attach-volume work

2014-03-05 Thread Mike

On 03/05/2014 05:37 AM, Zhi Yan Liu wrote:

Hi,

We decided the multi-attach feature must be implemented as an extension to
core functionality in Cinder, but currently we do not have clear
extension support in Cinder; IMO that is the biggest blocker now. And the
other issues have been listed at
https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume#Comments_and_Discussion
as well. Probably we could get more inputs from Cinder cores.

thanks,
zhiyan

On Wed, Mar 5, 2014 at 8:19 PM, Niklas Widell
niklas.wid...@ericsson.com wrote:

Hi
What is the current status of the work around multi-attach-volume [1]? We
have some cluster related use cases that would benefit from being able to
attach a volume to several instances.

[1] https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

Best regards
Niklas Widell
Ericsson AB


As discussed in previous IRC meetings, this is not blocked by the new 
extension ideas. We've decided there's enough involved in those changes 
that it didn't make sense to block the progress of others. Zhi, I've spoken 
to you personally about how you can continue your work as normal. Feel 
free to reach out to me on IRC (user thingee) if you need help.


-Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Solly Ross
Has anyone tried compiling rootwrap under Cython?  Even with non-optimized 
libraries,
Cython sometimes sees speedups.

Best Regards,
Solly Ross
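
One way to measure where the per-invocation cost goes — and to compare a Cython- or C-compiled variant against stock CPython — is a small timing harness (a sketch; the `-S` flag is a standard CPython option that skips `site` initialization, not anything rootwrap-specific):

```python
import subprocess
import sys
import time

def cold_start_seconds(argv, runs=10):
    """Average wall-clock time to spawn a process and wait for it."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(argv, check=True)
    return (time.perf_counter() - start) / runs

# Interpreter startup with and without site/module initialization;
# any compiled rootwrap candidate could be timed the same way.
full = cold_start_seconds([sys.executable, "-c", "pass"])
bare = cold_start_seconds([sys.executable, "-S", "-c", "pass"])
print(f"{full:.4f}s vs {bare:.4f}s per invocation")
```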

- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 5, 2014 1:13:33 PM
Subject: Re: [openstack-dev] [neutron][rootwrap] Performance considerations,
sudo?


On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo majop...@redhat.com wrote:

 
Hello,
 
Recently, I found a serious issue about network-nodes startup time,
 neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
 wrapping itself.
 
On a database with 1 public network, 192 private networks, 192 routers, 
 and 192 nano VMs, with OVS plugin:
 
 
 Network node setup time (rootwrap): 24 minutes
 Network node setup time (sudo): 10 minutes
 
 
   That's the time since you reboot a network node, until all namespaces
 and services are restored.
 
 
   If you see appendix 1, this extra 14min overhead, matches with the fact 
 that rootwrap needs 0.3s to start, and launch a system command (once 
 filtered).
 
 14 minutes = 840 s.
(840s. / 192 resources)/0.3s ~= 15 operations / resource(qdhcp+qrouter) 
 (iptables, ovs port creation & tagging, starting child processes, etc.)
 
   The overhead comes from python startup time + rootwrap loading.
 
   I suppose that rootwrap was designed for a lower number of system calls 
 (nova?).
 
   And, I understand what rootwrap provides, a level of filtering that sudo 
 cannot offer. But it raises some question:
 
 1) Is anyone actually using rootwrap in production?
 
 2) What alternatives can we think about to improve this situation.
 
   0) already being done: coalescing system calls. But I'm unsure that's 
 enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
 minutes overhead on a 10min operation).
 
   a) Rewriting rules into sudo (to the extent that it's possible), and live 
 with that.
   b) How secure is neutron about command injection to that point? How much is 
 user input filtered on the API calls?
   c) Even if b is ok , I suppose that if the DB gets compromised, that 
 could lead to command injection.
 
   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

 
   e) Doing the command filtering at neutron-side, as a library and live with 
 sudo with simple filtering. (we kill the python/rootwrap startup overhead).
 
 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
 structures, I wonder if that time could be reduced by conversion
 of system process calls into system library calls (I know we don't have
 libraries for iproute, iptables?, and many other things... but it's a
 problem that's probably worth looking at.)
 
 Best,
 Miguel Ángel Ajo.
 
 
 Appendix:
 
 [1] Analyzing overhead:
 
 [root@rhos4-neutron2 ~]# echo 'int main() { return 0; }' > test.c
 [root@rhos4-neutron2 ~]# gcc test.c -o test
 [root@rhos4-neutron2 ~]# time ./test  # to time process invocation on this 
 machine
 
 real0m0.000s
 user0m0.000s
 sys0m0.000s
 
 
 [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
 
 real0m0.032s
 user0m0.010s
 sys0m0.019s
 
 
 [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
 
 real0m0.057s
 user0m0.016s
 sys0m0.011s
 
 [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
 /usr/bin/neutron-rootwrap: No command specified
 
 real0m0.309s
 user0m0.128s
 sys0m0.037s
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Qin Zhao
Hi Joe,
For example, I used to use a private cloud system, which calculates
charges bi-weekly, and its charging formula looks like Total_charge =
Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
Volume_number*C4.  Those Instance/Image/Volume numbers are the counts of
those objects that the user created within these two weeks. And it also has
quota to limit total image size and total volume size. That formula is not
very exact, but you can see that it regards each of my 'create' operation
as a 'ticket', and will charge all those tickets, plus the instance
duration fee. In order to reduce the expense of my department, I am asked
not to create instance very frequently, and not to create too many images
and volumes. The image quota is not very big. And I would never be permitted
to exceed the quota, since that requires additional dollars.
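
For illustration, the formula above as a function (the coefficient values here are made-up placeholders, not the provider's real rates):

```python
def total_charge(instances, duration_hours, images, volumes,
                 c1=10.0, c2=0.05, c3=2.0, c4=1.0):
    """Bi-weekly charge per the formula above; every 'create' operation
    adds a fixed "ticket" cost on top of instance runtime.  The default
    coefficients are placeholders, not real billing rates."""
    return (instances * c1 + duration_hours * c2
            + images * c3 + volumes * c4)

charge = total_charge(instances=2, duration_hours=336, images=1, volumes=3)
# 2*10 + 336*0.05 + 1*2 + 3*1 -> roughly 41.8
print(charge)
```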


On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao chaoc...@gmail.com wrote:
  Hi Joe,
  If we assume the user is willing to create a new instance, the workflow
 you
  are saying is exactly correct. However, what I am assuming is that the
 user
  is NOT willing to create a new instance. If Nova can revert the existing
  instance, instead of creating a new one, it will become the alternative
 way
  utilized by those users who are not allowed to create a new instance.
  Both paths lead to the target. I think we can not assume all the people
  should walk through path one and should not walk through path two. Maybe
  creating new instance or adjusting the quota is very easy in your point
 of
  view. However, the real use case is often limited by business process.
 So I
  think we may need to consider that some users can not or are not allowed
 to
  creating the new instance under specific circumstances.
 

 What sort of circumstances would prevent someone from deleting and
 recreating an instance?

 
  On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao chaoc...@gmail.com wrote:
   Hi Joe, my meaning is that cloud users may not hope to create new
   instances
   or new images, because those actions may require additional approval
 and
   additional charging. Or, due to instance/image quota limits, they can
   not do
   that. Anyway, from user's perspective, saving and reverting the
 existing
   instance will be preferred sometimes. Creating a new instance will be
   another story.
  
 
  Are you saying some users may not be able to create an instance at
  all? If so why not just control that via quotas.
 
  Assuming the user has the power to rights and quota to create one
  instance and one snapshot, your proposed idea is only slightly
  different then the current workflow.
 
  Currently one would:
  1) Create instance
  2) Snapshot instance
  3) Use instance / break instance
  4) delete instance
  5) boot new instance from snapshot
  6) goto step 3
 
  From what I gather you are saying that instead of 4/5 you want the
  user to be able to just reboot the instance. I don't think such a
  subtle change in behavior is worth a whole new API extension.
 
  
   On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon joe.gord...@gmail.com
   wrote:
  
   On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com wrote:
I think the current snapshot implementation can be a solution
sometimes,
but
it is NOT exact same as user's expectation. For example, a new
blueprint
is
created last week,
   
 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
which
seems a little similar with this discussion. I feel the user is
requesting
Nova to create in-place snapshot (not a new image), in order to
revert
the
instance to a certain state. This capability should be very useful
when
testing new software or system settings. It seems a short-term
temporary
snapshot associated with a running instance for Nova. Creating a
 new
instance is not that convenient, and may be not feasible for the
user,
especially if he or she is using public cloud.
   
  
   Why isn't it easy to create a new instance from a snapshot?
  
   
On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
divakar.padiyar-nanda...@hp.com wrote:
   
 Why reboot an instance? What is wrong with deleting it and
 create a
 new one?
   
You generally use non-persistent disk mode when you are testing
 new
software or experimenting with settings.   If something goes wrong
just
reboot and you are back to clean state and start over again.I
feel
it's
convenient to handle this with just a reboot rather than
 recreating
the
instance.
   
Thanks,
Divakar
   
-Original Message-
From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Tuesday, March 04, 2014 10:41 AM
To: OpenStack Development Mailing List (not for usage questions)
  

[openstack-dev] [QA] Meeting Thursday March 6th at 22:00UTC

2014-03-05 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 6th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones, tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly meeting object model discussion IMPORTANT

2014-03-05 Thread Eugene Nikanorov
Hi neutron  lbaas folks,

Let's meet tomorrow, Thursday the 6th, at 14:00 on #openstack-meeting to
continue discussing the object model.

We had discussed with Samuel Bercovici proposals at hand and currently
there are two main proposals that we are evaluating.
Both of them allow adding the two major features that initially made us do
that whole object model redesign:
1) neutron port (ip address) reuse by multiple vips pointing to the same
pool.
Use case: http and https protocols for the same pool
2) multiple pools per vip via L7 rules.

Approach #1 (which I'm advocating) is #3 here:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

Approach #2 (Sam's proposal):
https://docs.google.com/a/mirantis.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r

In short, the difference between two is in how neutron port reuse is
achieved:
- Proposal #1 uses VIP object to keep neutron port (ip address) and
Listener objects
to represent different tcp ports and protocols.
- Proposal #2 uses the VIP object only; neutron port reuse is achieved by
creating another VIP with the vip_id of the VIP whose port is going to be
shared.
Both proposals suggest making VIP a root object (i.e. the object to which
different bindings are applied).

Those two proposals have the following advantages and disadvantages:
Proposal #1:
 - logical instance has one root object (VIP), which gives API clarity and
an implementation advantage.
The following operations will have clear semantics: changing the SLA for the
logical balancer, plugging into a different network, changing operational
status, etc.
E.g. many kinds of update operations applied to the root object (VIP)
affect whole child configuration.
 - Introducing another resource (listener) is a disadvantage (although
backward compatibility could be preserved)

Proposal #2:
 - Keeping existing set of resources, which might be an advantage for some
consumers.
 - As a disadvantage I see several root objects that are implicitly bound to
the same logical configuration.
That creates small, subtle inconsistencies in the API that are better avoided
(IMO): updating certain VIP parameters, like the IP address or subnet, leads
to changed parameters of another VIP that shares the neutron port.
That is a direct consequence of having several 'root objects' within one
logical (non-hierarchical) configuration.
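To make the structural difference concrete, the two layouts can be sketched as plain data. Field names below are purely illustrative, not the actual Neutron LBaaS attribute names:

```python
# Proposal #1: one VIP (the root, holding the neutron port) with child
# Listeners for the different protocols/ports.
proposal_1 = {
    "vip": {"id": "vip-1", "port_id": "neutron-port-1"},
    "listeners": [
        {"id": "l-1", "vip_id": "vip-1", "protocol": "HTTP", "port": 80},
        {"id": "l-2", "vip_id": "vip-1", "protocol": "HTTPS", "port": 443},
    ],
}

# Proposal #2: a second VIP reuses the first VIP's neutron port by
# referencing its vip_id; both VIPs are root objects.
proposal_2 = {
    "vips": [
        {"id": "vip-1", "port_id": "neutron-port-1", "protocol": "HTTP"},
        {"id": "vip-2", "vip_id": "vip-1", "protocol": "HTTPS"},
    ],
}

# #1 has exactly one root per logical configuration; #2 has several.
roots_1 = [proposal_1["vip"]["id"]]
roots_2 = [v["id"] for v in proposal_2["vips"]]
```

The sketch shows the trade-off discussed above: #1 pays with an extra resource type (Listener), #2 pays with multiple implicitly-coupled roots.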

Technically, both proposals are fine by me.
Practically I prefer #1 over #2 because IMO it leads to a clearer API.

Please look at those proposals, think about the differences, your
preference and any concern you have about these two. We're going to
dedicate the meeting to that.

Thanks,
Eugene.


[openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Yathiraj Udupi (yudupi)
Hi,

We would like to make a request for FFE for the Solver Scheduler work.  A lot 
of work has gone into it since Sep’13, and the first patch has gone through 
several iterations of review.   The first patch - 
https://review.openstack.org/#/c/46588/ introduces the main solver scheduler 
driver, and a reference solver implementation, and the subsequent patches that 
are already added provide the pluggable solver, and individual support for 
adding constraints, costs, etc.

First Patch: https://review.openstack.org/#/c/46588/
Second patch with enhanced support for pluggable constraints and costs: -  
https://review.openstack.org/#/c/70654/
Subsequent patches add the constraints and the costs.
BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
Core sponsor:  Joe Gordon

John Garbutt expressed concerns in the Blueprint whiteboard regarding the 
configuration values, existing filters, etc., and I noticed that you have 
un-approved this BP.
John, I will discuss with you in detail over IRC.
But briefly, the plan is that not many new configuration values will be added, just 
the ones to specify the solver to use, and the pluggable constraints, and costs 
to use, with the weights for the costs. (these are mainly part of the second 
patch -
 https://review.openstack.org/#/c/70654/ )

The plan is to gradually support the concepts behind the existing filters as 
constraints accepted by our Solver Scheduler.   Depending on the 
constraints and the costs chosen, the final scheduling will be done by 
solving it as an optimization problem. 
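As a toy sketch of what "solving placement as an optimization problem" means (this is not the Solver Scheduler code; the host fields and costs are made up), one filters hosts by a constraint and minimizes a cost over the feasible set:

```python
# Toy constraint-based placement: apply a capacity constraint, then
# minimize a cost function. The real Solver Scheduler expresses this with
# pluggable constraints and weighted costs, solved as an LP.
def place(request_ram, hosts):
    feasible = [h for h in hosts if h["free_ram"] >= request_ram]  # constraint
    if not feasible:
        return None
    return min(feasible, key=lambda h: h["cost"])  # objective

hosts = [
    {"name": "h1", "free_ram": 2048, "cost": 3.0},
    {"name": "h2", "free_ram": 8192, "cost": 1.5},
    {"name": "h3", "free_ram": 1024, "cost": 0.5},
]
best = place(4096, hosts)  # h3 is cheapest but infeasible, so h2 wins
```

This also illustrates why existing filters map naturally to constraints: a filter is just a feasibility predicate over hosts.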

Please reconsider this blueprint, and allow a FFE.

Thanks,
Yathi.



[openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread Eugene Nikanorov
Hi community,

Some interesting questions were raised during the object model discussion
about how pool statistics and health monitoring should be used in the case of
multiple VIPs sharing one pool.

Right now we can query statistics for the pool, and some data like in/out
bytes and request count will be returned.
If we had several vips sharing the pool, what kind of statistics would make
sense for the user?
The options are:

1) aggregated statistics for the pool, e.g. statistics of all requests that
have hit the pool through any VIP
2) per-vip statistics for the pool.

Depending on the answer, the statistics workflow will be different.

One good option for getting the statistics and health status could be to
query them through the VIP and get them for the whole logical instance, e.g. a
call like:
 lb-vip-statistics-get --vip-id vip_id
that would result in JSON that returns statistics for every pool associated
with the VIP, plus the operational status of all members of the pools
associated with that VIP.
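A hypothetical response for such a call might look like the following. The schema here is invented for illustration only; the real shape was still under discussion at this point:

```python
# Hypothetical JSON body returned by `lb-vip-statistics-get --vip-id vip-1`;
# all field names are illustrative, not an agreed API.
import json

response = {
    "vip_id": "vip-1",
    "pools": [
        {
            "pool_id": "pool-1",
            "stats": {"bytes_in": 1200, "bytes_out": 3400,
                      "total_connections": 56},
            "members": [
                {"member_id": "m-1", "operating_status": "ONLINE"},
                {"member_id": "m-2", "operating_status": "OFFLINE"},
            ],
        },
    ],
}
encoded = json.dumps(response)
```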

Looking forward to your feedback.

Thanks,
Eugene.


[openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-05 Thread Thierry Carrez
Hi everyone,

We just hit feature freeze, so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze, so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Milestone-proposed branches were created for Horizon, Keystone, Glance,
Nova, Neutron, Cinder, Heat and Trove in preparation for the
icehouse-3 milestone publication tomorrow.

Ceilometer should follow in an hour.

You can find candidate tarballs at:
http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/horizon/tree/milestone-proposed
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/glance/tree/milestone-proposed
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed
https://github.com/openstack/cinder/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed
https://github.com/openstack/trove/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] What's Up Doc? Mar 5 2014

2014-03-05 Thread Anne Gentle
Here in Austin we are gearing up for the onslaught of visitors for the SXSW
festival next week. My first visit there was in 2003, and yes I had to look
it up in my blog,
http://justwriteclick.com/2006/03/14/a-trip-report-from-sxsw-interactive-2003/
for those who are into retro tech. :) I'll be there early next week so if
you're in town say hi!

1. In review and merged this past week:
Operations Guide dropped to production! More details and schedule below.

Just about a month until a release candidate is cut; March 27th starts the
release candidate period. See
https://wiki.openstack.org/wiki/Icehouse_Release_Schedule for all the
glorious release dates. This week is Feature Freeze.

Great work going on to keep up with configuration options and good work
making the Configuration Reference more polished and leaner -- for true
reference. Summer Long has been going great gangbusters on ensuring ALL
configuration files have examples in the documentation, great work Summer.

We also revised the configuring live migration instructions to be more
careful with access.

Lots of cleanup in the training manuals as well.

2. High priority doc work:

The subteam on install doc is still working hard, diagrams are being drawn,
architectures are being made, servers are being booted. It's all good. I
should probably have one of their reps give the report! Come to one of the
weekly docs team meetings for details and to find ways to help.

I'd love to find a documentation mentor for an intern with the Outreach
Program for Women. Please add Documentation ideas to the wiki page at:
https://wiki.openstack.org/wiki/OutreachProgramForWomen/Ideas.

3. Doc work going on that I know of:

The Operations Guide went to production yesterday! Here's their schedule
(which shouldn't affect us, just letting you all know how these things
work). I'll be entering edits that are markup cleanup most likely. Our
single window of time to add icehouse info in the Working with Roadmaps
appendix is 3/7-3/18. If you're curious, the book is in the latest O'Reilly
collaborative authoring system, Atlas2, backed by github and they convert
the DocBook to asciidoc on the fly. You can still update the master branch
at any time, and I'll be hand-picking and hand-editing as needed.

-Intake (3): 3/4-3/6
-Copyedit (5): 3/7-3/13
-AU review (3): 3/14-3/18 (Window closes)
-Enter edits (5): 3/19-3/25
-QC1/index (6): 3/26-4/2
-Enter edits (5): 4/3-4/9
-QC2 (2): 4/10-4/11
-Enter edits: 4/14-4/15
-final O'Reilly check: 4/16
-to print: 4/17

4. New incoming doc requests:

We are definitely behind in triaging doc bugs -- 48 new bugs, some of which
may be from DocImpact flags that we won't need to address until the code
merges. Even so, with over 400 open bugs we know we need help keeping up
with DocImpact flags. Please if you have some time, take a look through doc
bugs you could pick up.

Also the install doc group is logging bugs for improving the install guide,
so that accounts for about 10 doc bugs.

5. Doc tools updates:
The 1.14.1 version of the clouddocs-maven-plugin went out this week with
multiple improvements, many to the API Reference listing page to help with
unique anchor URLs, Google Analytics, and an automatically generated TOC
(rather than hand-coding the list of APIs to display). Read more at
https://github.com/stackforge/clouddocs-maven-plugin.

6. Other doc news:

Not to bury some very exciting news, but we've adjusted our list of
doc-core members. We hope to keep reviews moving through. We managed to get
over 100 doc patches in the last week and I'm hopeful the additional
reviewers will help push our numbers even higher! I'm very happy with the
strength of the doc core team and appreciate all the participants we've had
in this release period.

Welcome to:
Gauvain Pocentek
Lana Brindley
Summer Long
Shilla Saebi
Matt Kassawara

Also, the Design Summit proposals for blueprint discussions in the
documentation track should open up tomorrow or Friday, 3/7. Please let me
know what you'd like to discuss at the Summit in Atlanta. The deadline for
the Travel Support program was Monday and we'll be making our decisions
soon with notification by 3/24.


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Everett Toews
On Mar 5, 2014, at 8:16 AM, Russell Bryant rbry...@redhat.com wrote:

 I think SDK support is critical for the success of v3 long term.  I
 expect most people are using the APIs through one of the major SDKs, so
 v3 won't take off until that happens.  I think our top priority in Nova
 to help ensure this happens is to provide top notch documentation on the
 v3 API, as well as all of the differences between v2 and v3.

Yes. Thank you.

And the earlier we can see the first parts of this documentation, both the 
differences between v2 and v3 and the final version, the better. If we can give 
you feedback on early versions of the docs, the whole thing will go much more 
smoothly.

You can find us in #openstack-sdks on IRC.

Cheers,
Everett


[openstack-dev] [Neutron][FWaaS] FFE request: fwaas-service-insertion

2014-03-05 Thread Rajesh Mohan
Hi All,

I would like to request FFE for the following patch

https://review.openstack.org/#/c/62599/

The design and the patch has gone through many reviews. We have reached out
to folks working on other advanced services as well.

This will be a good first step towards true service integration with
Neutron. Would also allow for innovative service integration.

Nachi and Sumit looked at this patch closely and are happy. Akihiro also
gave useful comments and I have addressed all his comments.

Please consider this patch for merge in I3.

Thanks,
-Rajesh Mohan


Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Sean Dague
On 03/05/2014 03:40 PM, Yathiraj Udupi (yudupi) wrote:
 Hi, 
 
 We would like to make a request for FFE for the Solver Scheduler work.
  A lot of work has gone into it since Sep’13, and the first patch has
 gone through several iteration after some reviews.   The first patch
 - https://review.openstack.org/#/c/46588/ introduces the main solver
 scheduler driver, and a reference solver implementation, and the
 subsequent patches that are already added provide the pluggable solver,
 and individual support for adding constraints, costs, etc. 
 
 First Patch: https://review.openstack.org/#/c/46588/ 
 Second patch with enhanced support for pluggable constraints and costs:
 -  https://review.openstack.org/#/c/70654/
 https://review.openstack.org/#/c/70654/
 Subsequent patches add the constraints and the costs. 
 BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler 
 Core sponsor:  Joe Gordon
 
 John Garbutt expressed concerns in Blueprint whiteboard regarding the
 configuration values, existing filters,etc and I noticed that you have
 un-approved this BP. 
 John, I will discuss with you in detail over IRC. 
 But briefly,  the plan is not many new configuration values will be
 added, just the ones to specify the solver to use, and the pluggable
 constraints, and costs to use, with the weights for the costs. (these
 are mainly part of the second patch -
  https://review.openstack.org/#/c/70654/
 https://review.openstack.org/#/c/70654/ )
 
 The plan is to gradually support the concepts for the existing filters
 as the constraints that are accepted by our Solver Scheduler.  
 Depending on the constraints and the costs chosen, the final scheduling
 will be done by solving the problem as an optimization problem. 
 
 Please reconsider this blueprint, and allow a FFE. 
 
 Thanks,
 Yathi. 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

This does not seem small or low risk in any way. And the blueprint is
not currently approved.

Also, as far as I can tell you never actually talked with Joe Gordon
about him supporting the FFE.

-2

This needs to wait for Juno.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Yathiraj Udupi (yudupi)
Sorry, this is a request for FFE.
I meant that the approver of the BP was Joe Gordon below, not the sponsor;
probably the wrong word.

Thanks
Yathi.



On 3/5/14, 12:40 PM, Yathiraj Udupi (yudupi) 
yud...@cisco.commailto:yud...@cisco.com wrote:

Hi,

We would like to make a request for FFE for the Solver Scheduler work.  A lot 
of work has gone into it since Sep’13, and the first patch has gone through 
several iterations of review.   The first patch - 
https://review.openstack.org/#/c/46588/ introduces the main solver scheduler 
driver, and a reference solver implementation, and the subsequent patches that 
are already added provide the pluggable solver, and individual support for 
adding constraints, costs, etc.

First Patch: https://review.openstack.org/#/c/46588/
Second patch with enhanced support for pluggable constraints and costs: -  
https://review.openstack.org/#/c/70654/
Subsequent patches add the constraints and the costs.
BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
Core sponsor:  Joe Gordon

John Garbutt expressed concerns in the Blueprint whiteboard regarding the 
configuration values, existing filters, etc., and I noticed that you have 
un-approved this BP.
John, I will discuss with you in detail over IRC.
But briefly, the plan is that not many new configuration values will be added, just 
the ones to specify the solver to use, and the pluggable constraints, and costs 
to use, with the weights for the costs. (these are mainly part of the second 
patch -
 https://review.openstack.org/#/c/70654/ )

The plan is to gradually support the concepts behind the existing filters as 
constraints accepted by our Solver Scheduler.   Depending on the 
constraints and the costs chosen, the final scheduling will be done by 
solving it as an optimization problem.

Please reconsider this blueprint, and allow a FFE.

Thanks,
Yathi.



Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-05 Thread Martinx - ジェームズ
AWESOME!!

If I may ask, did the IPv6 bits get included in this milestone?! I'm very
anxious to start testing them with Ubuntu 14.04 plus the official devel
packages from Canonical.

Thanks a lot!!

Best,
Thiago

On 5 March 2014 17:46, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,

 We just hit feature freeze, so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.

 This is also string freeze, so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.

 Milestone-proposed branches were created for Horizon, Keystone, Glance,
 Nova, Neutron, Cinder, Heat and and Trove in preparation for the
 icehouse-3 milestone publication tomorrow.

 Ceilometer should follow in an hour.

 You can find candidate tarballs at:
 http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
 http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
 http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
 http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
 http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
 http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
 http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
 http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

 You can also access the milestone-proposed branches directly at:
 https://github.com/openstack/horizon/tree/milestone-proposed
 https://github.com/openstack/keystone/tree/milestone-proposed
 https://github.com/openstack/glance/tree/milestone-proposed
 https://github.com/openstack/nova/tree/milestone-proposed
 https://github.com/openstack/neutron/tree/milestone-proposed
 https://github.com/openstack/cinder/tree/milestone-proposed
 https://github.com/openstack/heat/tree/milestone-proposed
 https://github.com/openstack/trove/tree/milestone-proposed

 Regards,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Lyle, David
I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
very insightful and, more importantly, have come to rely on their quality. He has 
contributed to several areas in Horizon and he understands the code base well.  
Radomir is also very active in tuskar-ui both contributing and reviewing.

David



[openstack-dev] [State-Management] Agenda for meeting (tomorrow) at 2000 UTC

2014-03-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is tomorrow,
2014-03-06!!! 

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow
Docs: http://docs.openstack.org/developer/taskflow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Any open reviews/questions/discussion needed for 0.2
- Integration progress, help, furthering integration efforts.
- Possibly discuss about worker capability discovery.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com




Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread Youcef Laribi
Hi Eugene,

Having an aggregate call to get all of the stats and statuses is good, but we 
should also keep the ability to retrieve statistics or the status of individual 
resources IMHO.

Thanks
Youcef

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, March 05, 2014 12:42 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for 
complex LB configurations.

Hi community,

Another interesting questions were raised during object model discussion about 
how pool statistics and health monitoring should be used in case of multiple 
vips sharing one pool.

Right now we can query statistics for the pool, and some data like in/out bytes 
and request count will be returned.
If we had several vips sharing the pool, what kind of statistics would make 
sense for the user?
The options are:

1) aggregated statistics for the pool, e.g. statistics of all requests that has 
hit the pool through any VIP
2) per-vip statistics for the pool.

Depending on the answer, the statistics workflow will be different.

The good option of getting the statistics and health status could be to query 
it through the vip and get it for the whole logical instance, e.g. a call like:
 lb-vip-statistics-get --vip-id vip_id
the would result in json that returns statistics for every pool associated with 
the vip, plus operational status of all members for the pools associated with 
that VIP.

Looking forward to your feedback.

Thanks,
Eugene.



Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread John Dewey
On Wednesday, March 5, 2014 at 12:41 PM, Eugene Nikanorov wrote:
 Hi community,
 
 Another interesting questions were raised during object model discussion 
 about how pool statistics and health monitoring should be used in case of 
 multiple vips sharing one pool. 
 
 Right now we can query statistics for the pool, and some data like in/out 
 bytes and request count will be returned.
 If we had several vips sharing the pool, what kind of statistics would make 
 sense for the user?
 The options are:
 
 1) aggregated statistics for the pool, e.g. statistics of all requests that 
 has hit the pool through any VIP
 2) per-vip statistics for the pool.
 
 
 

Would it be crazy to offer both?  We can return stats for each pool associated 
with the VIP as you described below.  However, we could also offer an aggregated 
section for those interested.

IMO, having stats broken out per-pool seems more helpful than only aggregated, 
while both would be ideal.

John
 
 Depending on the answer, the statistics workflow will be different.
 
 The good option of getting the statistics and health status could be to query 
 it through the vip and get it for the whole logical instance, e.g. a call 
 like: 
  lb-vip-statistics-get --vip-id vip_id
 the would result in json that returns statistics for every pool associated 
 with the vip, plus operational status of all members for the pools associated 
 with that VIP.
 
 Looking forward to your feedback.
 
 Thanks,
 Eugene.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




[openstack-dev] [MagnetoDB] a key-value storage service for OpenStack. The pilot implementation is available now

2014-03-05 Thread Ilya Sviridov
Hello openstackers,

I'm excited to share that we have finished work on a pilot implementation of
MagnetoDB, a key-value storage service for OpenStack.

During pilot development we have reached the following goals:

   - evaluated python cassandra clients' maturity
   - evaluated python web stack maturity to handle high availability and
     high load scenarios
   - found a number of performance bottlenecks and analyzed approaches to
     address them
   - drafted and checked the service architecture
   - drafted and checked deployment procedures


The API implementation is compatible with the AWS DynamoDB API, and the
pilot version already supports the basic operations on tables and items. We
tested with the boto library and the Java AWS SDK, and it works seamlessly
with both.
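Since the wire format is DynamoDB's, a PutItem request body for the pilot would follow the standard DynamoDB attribute-value typing. The table and attribute names below are made up for illustration:

```python
# DynamoDB-style PutItem request body; a DynamoDB-compatible endpoint such
# as the MagnetoDB pilot accepts this shape. Names are made up.
import json

put_item = {
    "TableName": "users",
    "Item": {
        "id": {"S": "42"},      # S = string attribute
        "name": {"S": "alice"},
        "age": {"N": "30"},     # N = number, transported as a string
    },
}
body = json.dumps(put_item)
```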

Currently we are working on a RESTful API that will follow OpenStack tenets
in addition to the current AWS DynamoDB API.

We have chosen Cassandra for the pilot as the most suitable storage for the
service functionality. However, the cost of owning and administering an
additional type of software can be a determinative factor in choosing a
solution. That is why backend database pluggability is important.

Currently we are evaluating HBase as one of the alternatives, since
Hadoop-powered analytics often co-exist with OpenStack installations or run
on top of them, as Savanna does.

You can find more details on MagnetoDB, along with a screencast, on the
Mirantis blog [1].

We will be publishing more details on each area of the findings during the
course of the next few weeks.

Any questions and ideas are very welcome. For those who are interested in
contributing, you can always find us on #magnetodb.

Links

[1]
http://www.mirantis.com/blog/introducing-magnetodb-nosql-database-service-openstack/

[2] https://github.com/stackforge/magnetodb

[3] https://wiki.openstack.org/wiki/MagnetoDB

[4] https://launchpad.net/magnetodb

With best regards,
Ilya Sviridov


[openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)

2014-03-05 Thread Tian, Shuangtai

Hi,

I would like to make a request for FFE for one patch in novaclient for the PCI
V3 API: https://review.openstack.org/#/c/75324/
Though the V3 API will not be released in Icehouse, all the PCI patches for the
V3 API have been merged, and this is the last one for V3.
I think some people may use V3 and PCI passthrough, so I hope all of this
functionality can be used with V3 in Icehouse.
This patch got one +2 from Kevin L. Mitchell, but I updated it because of
review comments.

The PCI patches in V3(merged):
Addressed by: https://review.openstack.org/51135
Extends V3 servers api for pci support
Addressed by: https://review.openstack.org/52376
Extends V3 os-hypervisor api for pci support
Addressed by: https://review.openstack.org/52377
Add V3 api for pci support

BTW, the PCI patches for V2 will be deferred to Juno.
The Blueprint https://blueprints.launchpad.net/nova/+spec/pci-api-support

Best regards,
Tian, Shuangtai



[openstack-dev] [Cinder] FFE for vmdk-storage-policy-volume-type

2014-03-05 Thread Subramanian
Hi,

https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type

This is a blueprint that I have been working on since Dec 2013, and as far as
I remember it was targeted to icehouse-3. Just today I noticed that it was
moved to future, so it must have fallen through the cracks for core
reviewers. Is there a chance that this can still make it into icehouse?
Given that the change is fairly isolated in the vmdk driver, and that the code
across the 4 patches [1] that implement this blueprint has been fairly well
reviewed, can I request an FFE for this one?

Thanks,
Subbu

[1]
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/vmdk-storage-policy-volume-type,n,z


[openstack-dev] [QA][Tempest] Coverage improvement for Nova API tests

2014-03-05 Thread Kenichi Oomichi

Hi,

I am working on Nova API tests in Tempest, and I'd like to ask for
opinions about the approach.

Tempest generates a request and sends it to each RESTful API.
Then Tempest receives a response from the API and verifies the
response.
From the viewpoint of backward compatibility, it is important
to keep the formats of the request and response stable.
Now Tempest already verifies the request format because Tempest
generates it. However, it does not verify the response format in
many test cases. So I'd like to improve that by adding verification
code.
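A minimal, hand-rolled sketch of that kind of response verification (not Tempest's actual helpers) just checks that the expected attributes are present with the expected types:

```python
# Sketch of response-format verification: report attributes that are
# missing from the body or that have an unexpected type.
def verify_response(body, expected):
    """expected maps attribute name -> required Python type."""
    missing = [k for k in expected if k not in body]
    wrong = [k for k, t in expected.items()
             if k in body and not isinstance(body[k], t)]
    return missing, wrong

server = {"id": "abc", "name": "vm1", "OS-EXT-SRV-ATTR:host": "node-1"}
schema = {"id": str, "name": str, "OS-EXT-SRV-ATTR:host": str, "status": str}
missing, wrong = verify_response(server, schema)  # 'status' is missing
```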

but now I am facing a problem, and I'd like to propose a workaround
for it.

Problem:
  The deserialized bodies of XML responses seem broken in some cases.
  In one case, some API attributes disappear from the deserialized body.
  In the other case, some API attribute names differ from the JSON
  response. For example, the JSON one is 'OS-EXT-SRV-ATTR:host' but the
  XML one is
  '{http://docs.openstack.org/compute/ext/extended_status/api/v1.1}host'.
  I guess these are deserializer bugs in Tempest, but I'm not sure yet.
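The namespace-qualified name is standard ElementTree behaviour: prefixed XML tags come back in '{namespace-uri}localname' (Clark) notation, as this small reproduction shows:

```python
# Why the XML deserializer yields '{...}host' instead of
# 'OS-EXT-SRV-ATTR:host': ElementTree expands namespace prefixes.
import xml.etree.ElementTree as ET

xml_doc = (
    '<server xmlns:ext="http://docs.openstack.org/compute/ext/'
    'extended_status/api/v1.1">'
    '<ext:host>node-1</ext:host></server>'
)
root = ET.fromstring(xml_doc)
child = root[0]
# child.tag is now the Clark-notation name, not 'ext:host'
```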

Possible solution:
  The best way would be to fix all of them, but that needs a lot of
  effort. So I'd like to propose skipping the additional verification
  code in the case of XML tests. A sample is line 151 of
  https://review.openstack.org/#/c/77517/16/tempest/api/compute/admin/test_servers.py

  The XML format has already been marked as deprecated in the Nova v2
  API[1], and the XML client would be removed from Tempest in the Juno
  cycle. In addition, I suspect there are many problems of this kind,
  because I hit the above problems while adding verification code for
  only 2 APIs. So now I feel the best way is overkill.
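
  As a rough sketch of the proposed workaround (hypothetical helper and
  parameter names, not the actual Tempest patch), the extra checks would
  simply be bypassed for the deprecated XML client:

```python
# Hypothetical sketch of the workaround (not the actual Tempest change):
# run the additional response-format verification only for the JSON
# client, since the XML client is deprecated and its deserializers
# appear unreliable.
def verify_response_attrs(body, expected_attrs, interface):
    if interface == 'xml':
        # Skip additional verification for the deprecated XML client.
        return
    missing = [attr for attr in expected_attrs if attr not in body]
    assert not missing, "missing attributes: %s" % missing

# JSON responses are fully checked; XML responses are not.
verify_response_attrs({'OS-EXT-SRV-ATTR:host': 'host1'},
                      ['OS-EXT-SRV-ATTR:host'], interface='json')
verify_response_attrs({}, ['OS-EXT-SRV-ATTR:host'], interface='xml')
```

  Once the XML client is removed in Juno, the interface check (and the
  skip) can be dropped along with it.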


Thanks
Ken'ichi Ohmichi

---
[1]: https://review.openstack.org/#/c/75439/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Kenichi Oomichi

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Wednesday, March 05, 2014 10:52 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
 
 On Wed, 2014-03-05 at 05:43 +, Kenichi Oomichi wrote:
   -Original Message-
   From: Dan Smith [mailto:d...@danplanet.com]
   Sent: Wednesday, March 05, 2014 9:09 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
  
What I'd like to do next is work through a new proposal that includes
keeping both v2 and v3, but with a new added focus of minimizing the
cost.  This should include a path away from the dual code bases and to
something like the v2.1 proposal.
  
   I think that the most we can hope for is consensus on _something_. So,
   the thing that I'm hoping would mostly satisfy the largest number of
   people is:
  
   - Leaving v2 and v3 as they are today in the tree, and with v3 still
 marked experimental for the moment
   - We start on a v2 proxy to v3, with the first goal of fully
 implementing the v2 API on top of v3, as judged by tempest
   - We define the criteria for removing the current v2 code and marking
 the v3 code supported as:
- The v2 proxy passes tempest
- The v2 proxy has sign-off from some major deployers as something
  they would be comfortable using in place of the existing v2 code
- The v2 proxy seems to us to be lower maintenance and otherwise
  preferable to either keeping both, breaking all our users, deleting
  v3 entirely, etc
 
  Thanks, Dan.
  The above criteria is reasonable to me.
 
  Now Tempest does not check API responses in many cases.
  For example, Tempest does not check what API attributes(flavor, image,
  etc.) should be included in the response body of create a server API.
  So we need to improve Tempest coverage from this viewpoint for verifying
  any backward incompatibility does not happen on v2.1 API.
  We started this improvement for Tempest and have proposed some patches
  for it now.
 
 Kenichi-san, you may also want to check out this ML post from David
 Kranz:
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/028920.html

Hi Jay-san,

Thank you for pointing it out. That is a good point :-)
I will join in David's idea.

Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] who builds other part of test environment

2014-03-05 Thread Gareth
Hi

Here is a test result:
http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.html
and its result differs from the one in my local environment. So I want to
check some details of the official test environment, for example
/home/jenkins/workspace/gate-rally-pep8/tox.ini.

I guessed it would be in the tempest repo, but it isn't. I didn't find any
such tox.ini file in the tempest repo. So it should be hosted in another
repo. Which one is that?

thanks

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue

2014-03-05 Thread ZhiQiang Fan
I already checked the stable/havana and master branches via devstack; the
problem is still present in havana, but the master branch is not affected.

I think it is important to fix it for havana too, since some high-level
applications may depend on the returned faultstring. Currently, I'm not
sure whether the master branch fixed it in the pecan or wsme module, or in
ceilometer itself.

Can anyone help with this problem?

thanks


On Tue, Feb 18, 2014 at 9:09 AM, ZhiQiang Fan aji.zq...@gmail.com wrote:

 Hi,

 When I tried to figure out the root cause of bug[1], I found that once
 wsme.exc.ClientSideError is triggered when creating an alarm, say with
 faultstring x, then the next HTTP request that triggers an
 EntityNotFound(Exception) gets an HTTP response with faultstring equal
 to x.

 I traced the calling stack with a lot of logging, and the last log I got
 is in wsmeext.pecan.wsexpose, which shows that the dict containing the
 faultstring is correct. It seems the problem occurs while formatting the
 HTTP response.
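
 The symptom reads like request-scoped error state being kept at process
 scope. A generic sketch of that failure pattern (purely illustrative,
 NOT the actual wsme or pecan code) is:

```python
# Purely illustrative of the symptom (NOT the actual wsme/pecan code):
# if the fault dict is cached per-process instead of built per-request,
# a later, different exception reuses the earlier faultstring.
_cached_fault = {}

def format_fault(exc):
    # Buggy pattern: only one exception type refreshes the cached dict
    # (ValueError stands in for ClientSideError); other exceptions fall
    # through to the stale value.
    if isinstance(exc, ValueError):
        _cached_fault['faultstring'] = str(exc)
    return dict(_cached_fault)

assert format_fault(ValueError('x')) == {'faultstring': 'x'}
# An unrelated exception (stands in for EntityNotFound) still reports 'x':
assert format_fault(KeyError('other')) == {'faultstring': 'x'}
```

 If the real code follows this pattern, the fix would be to build the
 fault dict per-request rather than reuse shared state.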

 env info:
 os: sles 11 sp3, ubuntu 12.04.3
 ceilometer: 2013.2.2, 2013.2.1
 wsme: unknown; checked the egg-info and __init__ but got nothing
 pecan: can't remember...

 Please help, any information will be appreciated.

 Thanks!

 [1]: https://bugs.launchpad.net/ceilometer/+bug/1280036




-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Crack at a Real life workflow

2014-03-05 Thread Dmitri Zimine
Folks, 

I took a crack at using our DSL to build a real-world workflow, just to
see how it feels to write, and how it compares with alternative tools.

This one automates a page from OpenStack operation guide: 
http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
 

Here it is https://gist.github.com/dzimine/9380941
or here http://paste.openstack.org/show/72741/

I have a bunch of comments, implicit assumptions, and questions which came
to mind while writing it. I want your and other people's opinions on it.

But gist and paste don't let you annotate lines! :(

Maybe we can put it on the review board, even with no intention to check it
in, and use it for discussion?

Any interest?

DZ
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [LBaaS] API spec for SSL Support

2014-03-05 Thread Youcef Laribi
Hi Anand,

I don't think it's fully documented in the API spec yet, but there is a
patchset being reviewed in gerrit that shows what the API would look like
(the LbaasSSLDBMixin class):

https://review.openstack.org/#/c/74031/5/neutron/db/loadbalancer/lbaas_ssl_db.py

Thanks,
Youcef

From: Palanisamy, Anand [mailto:apalanis...@paypal.com]
Sent: Wednesday, March 05, 2014 5:26 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [LBaaS] API spec for SSL Support

Hi All,

Please let us know if we have the blueprint or the proposal for the LBaaS SSL 
API specification. We see only the workflow documented here 
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL.

Thanks
Anand

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

