Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-31 Thread Thomas Herve

 I favor the second option for the same reasons as Zane described, but also
 don't think we need a LaunchConfiguration resource. How about just adding an
 attribute to the resource so that the engine knows it is not meant to be
 handled in the usual way, and instead is really a template (sorry for
 the overloaded term) used in a scaling group. For example:
 
 group:
   type: OS::Heat::ScalingGroup
   properties:
     scaled_resource: server_for_scaling
 
 server_for_scaling:
   use_for_scaling: true  (the name of this attribute is clearly up for discussion ;-) )
   type: OS::Nova::Server
   properties:
     image: my_image
     flavor: m1.large
 
 When the engine sees the use_for_scaling set to true, then it does not call
 things like handle_create. Anyway, that's the general idea. I'm sure there
 are many other ways to achieve a similar effect.


I'm strongly opposed to that. It reduces the readability of the template a lot 
for no obvious benefit. Any time you radically change behavior by putting a 
flag somewhere, something's wrong.

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Problem with Allow Address Pairs in Murano Workflows

2014-01-31 Thread Dmitry Teselkin
Hi,

The feature to auto-generate IPs for the cluster (Cluster Management IP and
Availability Group IP) looks like a good idea. In general there is no need to set
these IP addresses to some special values, and this feature will simplify the
end-user interface. However, we must provide the chosen IP addresses to the
end user via the web interface. Without this it would be a quite strange
improvement - creating a cluster without knowing its endpoints.



On Fri, Jan 31, 2014 at 7:31 AM, Alexander Tivelkov
ativel...@mirantis.com wrote:

 Hi folks,

 It seems like the Murano networking workflow has a problem with the way it
 uses the allowed_address_pairs property of Neutron ports.
 This problem causes an intermittent bug preventing the Microsoft SQL Server
 Cluster application from being deployed in Murano. We have a bug reported about
 that - [1]. It seems this issue became a problem when we
 introduced the advanced networking, i.e. the algorithm which attempts to
 automatically pick the proper networking settings for the newly created
 environment.

 The deployment workflow of MS SQL Server Cluster (and, more broadly, any Murano
 application relying on a virtual IP address) uses Neutron's Allowed
 Address Pairs feature ([2]) to specify its virtual IP address, so that Neutron
 allows calls to this address through the ports of the application's
 machines.
 However, there is a limitation: Neutron does not allow the address to be
 equal to the fixed IP address of the port (see the first note
 at [3]). Murano does not assign the IP addresses of any ports explicitly
 and relies on the automatic IP allocation provided by Neutron.
 In the situation where the fixed IP address is not defined, but an
 allowed_address_pair is set, I would expect Neutron to do a little analysis
 and not allocate the address provided in the allowed pair as the fixed
 IP. But Neutron does not do that - and an exception is thrown.

 However, an even worse situation may happen if such an address conflict
 appears not on a single port, but on two different ones: when the
 allowed_address_pair IP of port A is equal to the static IP of port B. This
 situation is perfectly fine from Neutron's point of view, and no exception
 is thrown. However, later, during the configuration of the cluster, the
 virtual IP will point to one of the real ports - which may cause two
 machines sharing the same actual IP at the same time. You may guess the
 consequences.

 So, we need to find some working solution for this situation.
 The obvious one would be to exclude the virtual IP address from the
 allocation pools of the subnet that is generated for the environment. This
 may be tricky (as the CIDRs for subnets are picked automatically), but
 still definitely doable. The problem I see here is that we will have
 to do it at the time the environment is created - i.e. run Neutron API
 calls from the Dashboard (but we have to do that anyway now, as we check the
 user's input for the cluster IP to match the target subnet).
 Another solution is to remove the option for the user to specify the virtual
 IP at all, and allocate this IP at the runtime of the workflow, when all
 the ports of the cluster's instances are already created and their IPs are
 known. I don't know if there is a real use case requiring the user to know
 the virtual IP in advance and be able to control its value. If there is no
 such scenario, then we may just hide it, and it will simplify things a lot.
 Maybe something else is possible as well. Any ideas are welcome.
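 To illustrate the first option, here is a minimal sketch (using Python's
 ipaddress module; the function and dict keys are just illustrative, not Murano
 or Neutron code) of carving the subnet's allocation pools so the chosen
 virtual IP can never be handed out as a fixed IP:

     import ipaddress

     def pools_excluding_vip(cidr, virtual_ip):
         # Split the usable host range into pools that skip the virtual IP.
         net = ipaddress.ip_network(cidr)
         vip = ipaddress.ip_address(virtual_ip)
         hosts = list(net.hosts())
         start, end = hosts[0], hosts[-1]
         pools = []
         if vip > start:
             pools.append({'start': str(start), 'end': str(vip - 1)})
         if vip < end:
             pools.append({'start': str(vip + 1), 'end': str(end)})
         return pools

     # pools_excluding_vip('10.0.0.0/24', '10.0.0.42')
     # -> [{'start': '10.0.0.1', 'end': '10.0.0.41'},
     #     {'start': '10.0.0.43', 'end': '10.0.0.254'}]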



 [1] https://bugs.launchpad.net/murano/+bug/1274636
 [2] https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs
 [3]
 http://docs.openstack.org/admin-guide-cloud/content/section_allowed_address_pairs_workflow.html
 --
 Regards,
 Alexander Tivelkov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-31 Thread Thomas Herve


 On 30/01/14 12:20, Randall Burt wrote:
  On Jan 30, 2014, at 12:09 PM, Clint Byrumcl...@fewbar.com
wrote:
 
  I would hope we would solve that at a deeper level, rather than making
  resources for the things we think will need re-use. I think nested stacks
  allow this level of re-use already anyway. Software config just allows
  sub-resource composition.
  Agreed. Codifying re-use inside specific resource types is a game of
  catch-up I don't think we can win in the end.
 
 Thomas started this discussion talking about resources, but IMHO it's
 mostly really about the scaling API. It just happens that we're
 implementing the resources first and they need to match what we
 eventually decide the API should look like.

Not talking about the API was completely intentional. I believe that we should 
offer the best resource interface possible, regardless of the underlying API.

 So we're creating a new API. That doesn't happen very often. It's
 absolutely appropriate to consider whether we want two classes of
 resources in that API to have a 1:1 or 1:Many relationship, regardless
 of how many fancy transclusion things we have in the templates. For a
 start because one of the main reasons to create an API is so people can
 access it without templates.

Fair enough.

To give my opinion, I think we're overstating how useful it'd be to reuse 
launch configurations. The real reusability is at the software config level, not so 
much in what flavor you want to use when starting an instance. Also, we 
already have provider templates to be able to embed resources and their 
properties.

Finally, if we go back to the API, it's one less thing to manage and it simplifies 
the model. The group becomes the bag containing the scaling properties and the resource 
definition.
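As a purely illustrative sketch (the key names here are my assumptions, not an
agreed-upon schema), a group-creation request to such an API might carry the
scaling properties and the resource definition together:

    # Hypothetical request body for the scaling API -- nothing here is settled.
    group_request = {
        "group": {
            "name": "web_group",
            "min_size": 1,
            "max_size": 10,
            "cooldown": 60,
            "resource_definition": {
                "type": "OS::Nova::Server",
                "properties": {"image": "my_image", "flavor": "m1.large"},
            },
        },
    }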

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] PXE driver deploy issues

2014-01-31 Thread Rohan Kanade
 The deploy ramdisk should ping back the Ironic API and call the
 vendor_passthru/pass_deploy_info with the iscsi informations etc... So,
 make sure you've built your deploy ramdisk after this patch landed on

Any strategies on how to verify if the ramdisk has been deployed on the
server?

Also, I am using different Power and VendorPassthru interfaces (unreleased
SeaMicro), and I am using PXE only for the Deploy interface. How can
pass_deploy_info be called by the ramdisk since it is not implemented by
the SeaMicro VendorPassthru?

Regards,
Rohan Kanade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-31 Thread Sean Dague
On 01/30/2014 07:15 PM, John Dickinson wrote:
 I've been keeping an eye on this thread, and it seems I actually have a few 
 minutes to spend on a response today.
 
 To first answer the specific question, while there are some minor technical 
 concerns about oslo logging, the bigger concerns are non-technical. Some 
 things I'm concerned about from a technical perspective are that it's not a 
 separate module or package that can be imported, so it would probably 
 currently require copy/paste code into the Swift codebase. My second concern 
 is that there are log line elements that just don't seem to make sense like 
 instance. I'd be happy to be wrong on both of these items, and I want to 
 make clear that these are not long-term issues. They are both solvable.
 
 My bigger concern with using oslo logging in Swift is simply changing the 
 request log format is something that cannot be done lightly. Request logs are 
 a very real interface into the system, and changing the log format in a 
 breaking way can cause major headaches for people relying on those logs for 
 system health, billing, and other operational concerns.
 
 One possible solution to this is to keep requests logged the same way, but 
 add configuration options for all of the other things that are logged. Having 
 two different logging systems (or multiple configurable log handlers) to do 
 this seems to add a fair bit of complexity to me, especially when I'm not 
 quite sure of the actual problem that's being solved. That said, adding in a 
 different log format into Swift isn't a terrible idea by itself, but 
 migration is a big concern of any implementation (and I know you'll find very 
 strong feelings on this in gerrit if/when something is proposed).
 
 
 
 Now back to the original topic of actual logging formats.
 
 Here's (something like) what I'd like to see for a common log standard (ie 
 Sean, what I think you were asking for comments on):
 
 log_line = prefix message
 prefix = timestamp project log_level
 message = bytestream
 timestamp = `eg the output of time.time()`
 project = `one of {nova,swift,neutron,cinder,glance,etc}`
 
 Now, there's plenty of opportunity to bikeshed what the actual log line would 
 look like, but the general idea of what I want to see has 2 major parts:
 
 1) Every log message is one line (ends with \n) and the log fields are 
 space-delineated. eg (`log_line = ' '.join(urllib.quote(x) for x in 
 log_fields_list)`)
 
 2) The only definition of a log format is the prefix and the message is a set 
 of fields defined by the service actually doing the logging.

So, actually, most of my concern at this point wasn't the line format.
It was the concern about when projects were calling the loggers, and
what kind of information should be logged at each level.

Given that most projects are using the oslo defaults today, much of the
line format is handled. I think that if you have concerns on that front,
it's probably a different conversation with the oslo team.

I do agree we should standardize a little more on project (i.e. logger
name), because in most projects this is just defaulting to module,
which is fine for debug level, but not very user friendly at ops levels.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-31 Thread Vipin Balachandran
The current API is stable since this is used by nova and cinder for the
last two releases. Yes, I can act as the maintainer.

 

Here is the list of reviewers:

Arnaud Legendre arnaud...@gmail.com

Davanum Srinivas (dims) dava...@gmail.com

garyk gkot...@vmware.com

Kartik Bommepally kbommepa...@vmware.com

Sabari Murugesan smuruge...@vmware.com

Shawn Hartsock harts...@acm.org

Subbu subramanian.neelakan...@gmail.com

Vui Lam v...@vmware.com

 

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com] 
Sent: Friday, January 31, 2014 4:22 AM
To: Vipin Balachandran
Cc: Donald Stufft; OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
straight to oslo.vmware

 

 

On Thu, Jan 30, 2014 at 12:38 PM, Vipin Balachandran
vbalachand...@vmware.com wrote:

This library is highly specific to the VMware drivers in OpenStack and not a
generic VMware API client. As Doug mentioned, this library won't be useful
outside OpenStack. Also, it has some dependencies on openstack.common code
as well. Therefore it makes sense to make this code part of Oslo.

 

I think we have consensus that, assuming you are committing to API
stability, this set of code does not need to go through the incubator
before becoming a library. How stable is the current API?

 

If it is stable and is not going to be useful to anyone outside of OpenStack,
we can create an oslo.vmware library for it. I can start working with
-infra next week to set up the repository.

 

We will need someone on your team to be designated as the lead maintainer,
to coordinate with the Oslo PTL for release management issues and bug
triage. Is that you, Vipin?

 

We will also need to have a set of reviewers for the new repository. I'll
add oslo-core, but it will be necessary for a few people familiar with the
code to also be included. If you have anyone from nova or cinder who
should be a reviewer, we can add them, too. Please send me a list of names
and the email addresses used in gerrit so I can add them to the reviewer
list when the repository is created.

 

Doug

 

 

 

By the way, a work-in-progress review has been posted for the VMware
cinder driver integration with the Oslo common code
(https://review.openstack.org/#/c/70108/). The nova integration is
currently in progress.

 

Thanks,

Vipin

 

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com] 
Sent: Wednesday, January 29, 2014 4:06 AM
To: Donald Stufft
Cc: OpenStack Development Mailing List (not for usage questions); Vipin
Balachandran
Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
straight to oslo.vmware

 

 

 

On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jan 28 2014, Doug Hellmann wrote:

 There are several reviews related to adding VMware interface code to
the
 oslo-incubator so it can be shared among projects (start at
 https://review.openstack.org/#/c/65075/7 if you want to look at the
code).

 I expect this code to be fairly stand-alone, so I wonder if we would be
 better off creating an oslo.vmware library from the beginning, instead
of
 bringing it through the incubator.

 Thoughts?

 This sounds like a good idea, but it doesn't look OpenStack specific, so
 maybe building a non-oslo library would be better.

 Let's not zope it! :)

+1 on not making it an oslo library.

 

Given the number of issues we've seen with stackforge libs in the gate,
I've changed my default stance on this point.

 

It's not clear from the code whether Vipin et al expect this library to be
useful for anyone not working with both OpenStack and VMware. Either way,
I anticipate that having the library under the symmetric gating rules and
managed by one of the OpenStack teams (oslo, nova, cinder?) and VMware
contributors will make life easier in the long run.

 

As far as the actual name goes, I'm not set on oslo.vmware; it was just a
convenient name for the conversation.

 

Doug

 

 



 --
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
DCFA

 

 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-31 Thread Hugh Brock
 On Jan 31, 2014, at 1:30 AM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Zane Bitter's message of 2014-01-30 19:30:40 -0800:
 On 30/01/14 16:54, Clint Byrum wrote:
 I'm pretty sure it is useful to model images in Heat.
 
 Consider this scenario:
 
 
 resources:
   build_done_handle:
 type: AWS::CloudFormation::WaitConditionHandle
   build_done:
 type: AWS::CloudFormation::WaitCondition
 properties:
   handle: {Ref: build_done_handle}
   build_server:
 type: OS::Nova::Server
 properties:
   image: build-server-image
   userdata:
  join [ "",
    - "#!/bin/bash\n"
    - "build_an_image\n"
    - "cfn-signal -s SUCCESS "
    - {Ref: build_done_handle}
    - "\n"]
   built_image:
 type: OS::Glance::Image
 depends_on: build_done
 properties:
    fetch_url: join [ "", ["http://", {get_attribute: [ build_server, 
  fixed_ip ]}, "/image_path"]]
   actual_server:
 type: OS::Nova::Server
 properties:
   image: {Ref: built_image}
 
 
 Anyway, seems rather useful. Maybe I'm reaching.
 
 Well, consider that when this build is complete you'll still have the 
 server you used to build the image still sitting around. Of course you 
 can delete the stack to remove it - and along with it will go the image 
 in Glance. Still seem useful?
 
 No, not as such. However I have also discussed with other users having
 an OS::Heat::TemporaryServer which is deleted after a wait condition is
 signaled (resurrected on each update). This would be useful for hosting
 workflow code as the workflow doesn't actually need to be running all
 the time. It would also be useful for heat resources that want to run
 code that needs to be contained into their own VM/network such as the
 port probe thing that came up a few weeks ago.
 
 Good idea? I don't know. But it is the next logical step my brain keeps
 jumping to for things like this.
 
 (I'm conveniently ignoring the fact that you could have set 
 DeletionPolicy: Retain on the image to hack your way around this.)
 
 What you're looking for is a workflow service (I think it's called 
 Mistral this week?). A workflow service would be awesome, and Heat is 
 pretty awesome, but Heat is not a workflow service.
 
 Totally agree. I think workflow and orchestration have an unusual
 relationship though, because orchestration has its own workflow that
 users will sometimes need to defer to. This is why we use wait
 conditions, right?
 
 So yeah, Glance images in Heat might be kinda useful, but at best as a 
 temporary hack to fill in a gap because the Right Place to implement it 
 doesn't exist yet. That's why I feel ambivalent about it.
 
 I think you've nudged me away from optimistic at least closer to
 ambivalent as well.

We (RH tripleo folks) were having a similar conversation around Heat and stack 
upgrades the other day. There is unquestionably a workflow involving stack 
updates when a user goes to upgrade their overcloud, and it's awkward trying to 
shoehorn it into Heat (Steve Dake agreed). Our first thought was "Tuskar should 
do that", but our second thought was "Whatever the workflow service is should 
do that", and Tuskar should maybe provide a shorthand API for it.

I feel like we (tripleo) need to take a harder look at getting a working 
workflow thing available for our needs, soon...

--Hugh

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-31 Thread Macdonald-Wallace, Matthew
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 31 January 2014 12:29
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards
 
 On 01/30/2014 07:15 PM, John Dickinson wrote:
  1) Every log message is one line (ends with \n) and the log fields are
  space-delineated. eg (`log_line = ' '.join(urllib.quote(x) for x in
  log_fields_list)`)

+1 for this - multiple lines (even in DEBUG mode!) are a PITA to handle with 
most log analyser software.
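As a minimal sketch of such a single-line, space-delimited record (assuming
Python 2's urllib.quote; the prefix and field names are just illustrative,
following John's grammar above):

    import time
    import urllib  # urllib.parse.quote on Python 3

    def make_log_line(project, level, fields):
        # prefix = timestamp project log_level, then the service-defined fields
        prefix = [str(time.time()), project, level]
        return ' '.join(urllib.quote(str(x)) for x in prefix + list(fields)) + '\n'

    # make_log_line('swift', 'INFO', ['GET', '/v1/AUTH_test/c/o', '200'])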


  2) The only definition of a log format is the prefix and the message is a 
  set of
 fields defined by the service actually doing the logging.
 
 So, actually, most of my concern at this point wasn't the line format.
 It was the concern about when projects were calling the loggers, and what kind
 of information should be logged at each level.
 
 Given that most projects are using the oslo defaults today, much of the line
 format is handled. I think that if you have concerns on that front, it's 
 probably a
 different conversation with the oslo team.
 
  I do agree we should standardize a little more on project (i.e. logger name),
  because in most projects this is just defaulting to module,
  which is fine for debug level, but not very user friendly at ops levels.

I'd just love to see the ability in the python logger to include the 
application name, not just the class/module that created the log message (it's 
in 3.something but I don't think we can justify a switch to Python 3 just 
based on logging!):

datetime LEVEL PID PROGRAM-NAME (i.e. nova-compute) module stuff 
more_stuff even_more_stuff

At the moment, all of the above is possible except for the PROGRAM_NAME part. 
 Is there anything we can do to add this to the context or similar?
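For what it's worth, here is a stdlib-only sketch of one way to inject a
program name today (the filter class and format string are just an
illustration, not oslo code):

    import logging

    class ProgramNameFilter(logging.Filter):
        # Attach the service name to every record passing through the handler.
        def __init__(self, program_name):
            logging.Filter.__init__(self)
            self.program_name = program_name

        def filter(self, record):
            record.program_name = self.program_name
            return True

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s %(process)d %(program_name)s %(name)s %(message)s'))
    handler.addFilter(ProgramNameFilter('nova-compute'))
    logging.getLogger().addHandler(handler)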

Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-31 Thread Doug Hellmann
Thanks, Vipin,

Dims is going to work on setting up your repository and will help you configure the
review team. Give us a few days to get the details sorted out.

Doug


On Fri, Jan 31, 2014 at 7:39 AM, Vipin Balachandran 
vbalachand...@vmware.com wrote:

 The current API is stable since this is used by nova and cinder for the
 last two releases. Yes, I can act as the maintainer.



 Here is the list of reviewers:

 Arnaud Legendre arnaud...@gmail.com

 Davanum Srinivas (dims) dava...@gmail.com

 garyk gkot...@vmware.com

 Kartik Bommepally kbommepa...@vmware.com

 Sabari Murugesan smuruge...@vmware.com

 Shawn Hartsock harts...@acm.org

 Subbu subramanian.neelakan...@gmail.com

 Vui Lam v...@vmware.com



 *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 *Sent:* Friday, January 31, 2014 4:22 AM
 *To:* Vipin Balachandran
 *Cc:* Donald Stufft; OpenStack Development Mailing List (not for usage
 questions)

 *Subject:* Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
 straight to oslo.vmware





 On Thu, Jan 30, 2014 at 12:38 PM, Vipin Balachandran 
 vbalachand...@vmware.com wrote:

 This library is highly specific to the VMware drivers in OpenStack and not a
 generic VMware API client. As Doug mentioned, this library won't be useful
 outside OpenStack. Also, it has some dependencies on openstack.common code
 as well. Therefore it makes sense to make this code part of Oslo.



 I think we have consensus that, assuming you are committing to API
 stability, this set of code does not need to go through the incubator
 before becoming a library. How stable is the current API?



 If it is stable and is not going to be useful to anyone outside of OpenStack,
 we can create an oslo.vmware library for it. I can start working with
 -infra next week to set up the repository.



 We will need someone on your team to be designated as the lead maintainer,
 to coordinate with the Oslo PTL for release management issues and bug
 triage. Is that you, Vipin?



 We will also need to have a set of reviewers for the new repository. I'll
 add oslo-core, but it will be necessary for a few people familiar with the
 code to also be included. If you have anyone from nova or cinder who should
 be a reviewer, we can add them, too. Please send me a list of names and the
 email addresses used in gerrit so I can add them to the reviewer list when
 the repository is created.



 Doug







 By the way, a work-in-progress review has been posted for the VMware
 cinder driver integration with the Oslo common code (
 https://review.openstack.org/#/c/70108/). The nova integration is
 currently in progress.



 Thanks,

 Vipin



 *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 *Sent:* Wednesday, January 29, 2014 4:06 AM
 *To:* Donald Stufft
 *Cc:* OpenStack Development Mailing List (not for usage questions); Vipin
 Balachandran
 *Subject:* Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
 straight to oslo.vmware







 On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


 On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

  On Tue, Jan 28 2014, Doug Hellmann wrote:
 
  There are several reviews related to adding VMware interface code to the
  oslo-incubator so it can be shared among projects (start at
  https://review.openstack.org/#/c/65075/7 if you want to look at the
 code).
 
  I expect this code to be fairly stand-alone, so I wonder if we would be
  better off creating an oslo.vmware library from the beginning, instead
 of
  bringing it through the incubator.
 
  Thoughts?
 
  This sounds like a good idea, but it doesn't look OpenStack specific, so
  maybe building a non-oslo library would be better.
 
  Let's not zope it! :)

 +1 on not making it an oslo library.



 Given the number of issues we've seen with stackforge libs in the gate,
 I've changed my default stance on this point.



 It's not clear from the code whether Vipin et al expect this library to be
 useful for anyone not working with both OpenStack and VMware. Either way, I
 anticipate that having the library under the symmetric gating rules and managed
 by one of the OpenStack teams (oslo, nova, cinder?) and VMware
 contributors will make life easier in the long run.



 As far as the actual name goes, I'm not set on oslo.vmware; it was just a
 convenient name for the conversation.



 Doug






 
  --
  Julien Danjou
  # Free Software hacker # independent consultant
  # http://julien.danjou.info

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 -
 Donald Stufft
 PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
 DCFA





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Deployment units and plans

2014-01-31 Thread devdatta kulkarni
Hi,

In this week's language pack working group meeting we briefly touched
upon questions regarding creation of deployment units, their handling
with respect to plans, etc.

I have created an etherpad page where I have put these and other questions.

https://etherpad.openstack.org/p/RegardingLangPacks

Please feel free to add more questions and/or answers.

Thanks,
Devdatta



-Original Message-
From: Krishna Raman kra...@gmail.com
Sent: Wednesday, January 29, 2014 1:00am
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum][Git] Meeting reminder & agenda

Hi,

We have a git working group meeting scheduled for tomorrow morning at 9am PST.
Please follow the link for additional timezones:
http://www.worldtimebuddy.com/?qm=1&lid=100,8,524901,2158177&h=100&date=2014-01-29&sln=17-18

I currently have only 3 things on the agenda:
- solum_hacks branch work:
- test cases
- code organization
- patch submission process
- integration point with Solum API
- integration point with Solum Plan file
- integration with LP

Please let me know if you would like additional items added to the agenda.

Thanks
—Krishna


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Li (baoli)
Hi Irena,

Thanks for the reply. See inline…

If possible, can we put details on what exactly would be covered by each BP?

--Robert

On 1/30/14 4:13 PM, Irena Berezovsky 
ire...@mellanox.com wrote:

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatement in above.
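For concreteness, a rough sketch of what such a port-create could look like
through python-neutronclient (the credentials and network ID are placeholders,
the vnic_type values follow the list above, and none of this is final):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
    net_id = 'NETWORK-UUID'  # placeholder
    port = neutron.create_port({
        'port': {
            'network_id': net_id,
            'binding:vnic_type': 'direct',   # one of virtio/direct/macvtap
            # 'binding:profile' would carry the SRIOV input info, per the BPs above
        }
    })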

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV 
port related attributes

[Robert] This makes sense to me. Would this live in the extension area, or 
would it be in the ML2 area? I thought one of the BPs listed above would cover the 
persistence of SRIOV attributes. But it sounds like we need this BP.

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to binding 
extension. For persistency it should be added in PortBindingMixin for non ML2. 
I didn’t think to cover it as part of ML2 vnic_type bp.
For the rest attributes, need to see what Bob plans.

[Robert] Sounds good to me. But again, which BP would cover this?

 -- is a neutron agent making decision based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.
[IrenaB] vnic_type is input parameter that will eventually cause certain 
vif_type to be sent to GenericVIFDriver and create network interface. Neutron 
agents periodically scan for attached interfaces. For example, OVS agent will 
look only for OVS interfaces, so if SRIOV interface is created, it won’t be 
discovered by OVS agent.
[Robert] I get the idea. It relies on what is plugged onto the integration 
bridge by nova to determine whether it needs to take action.


Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-31 Thread James Slagle
On Thu, Jan 30, 2014 at 2:39 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from James Slagle's message of 2014-01-30 07:28:01 -0800:
 However, due to these changes,  I think that devtest no longer works
 great as a tripleo developer setup. You haven't been able to complete
 a setup following our docs for 1 week now. The patches are in review
 to fix that, and they need to be properly reviewed and I'm not saying
 they should be rushed. Just that it's another aspect of the problem of
 trying to use devtest for CI/CD and a dev setup.


 I wonder, if we have a gate which runs through devtest entirely, would
 that reduce the instances where we've broken everybody? Seems like it
 would, but the gate isn't going to read the docs, it is going to run the
 script, so maybe it will still break sometimes.

That would certainly help. Though, it could be hard to tell if a
failure is due to the devtest process *itself* (e.g., someone forgot
to document a step), or a change in one of the upstream OpenStack
projects.  Whereas if the process itself is less complex, I think it's
less likely to break.


 What if we just focus on breaking devtest less often? Seems like that is
 achievable and then we don't diverge from CI.

I'm sure it's achievable, but I'm not sure it's worth the cost. It's
difficult to anticipate how hard it's going to be in the future to
continue to bend devtest to do all of the things really well (CI,
CD, dev manual/scripted setup, doc generation).

That being said, there's also cost associated with maintaining a
separate dev setup. I hope that whatever we came up with though would
keep that cost fairly minimal.

 In irc earlier this week (sorry if I'm misquoting the intent here), I
 saw mention of getting set up more easily by just using a seed to deploy an
 overcloud.  I think that's a great idea.  We are all already probably
 doing it :). Why not document that in some sort of fashion?


 +1. I think a note at the end of devtest_seed which basically says "If
 you are not interested in testing HA baremetal, set these variables like
 so and skip to devtest_overcloud". Great idea actually, as that's what I
 do often when I know I'll be tearing down my setup later.

Agreed, I think this an easy short term win. I'll probably look at
getting that update submitted soon.

 There would be some initial trade offs, around folks not necessarily
 understanding the full devtest process. But, you don't necessarily
 need to understand all of that to hack on the upgrade story, or
 tuskar, or ironic.


 Agreed totally. The processes are similar enough that when the time
 comes that a user needs to think about working on things which impact
 the undercloud they can back up to seed and then do that.

Thanks for the feedback.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-31 Thread Ben Nemec

On 2014-01-31 07:00, Macdonald-Wallace, Matthew wrote:

-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 31 January 2014 12:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposed Logging Standards

On 01/30/2014 07:15 PM, John Dickinson wrote:
 1) Every log message is one line (ends with \n) and the log fields are
 space-delineated. eg (`log_line = ' '.join(urllib.quote(x) for x in
 log_fields_list)`)


+1 for this - multiple lines (even in DEBUG mode!) are a PITA to
handle with most log analyser software.



 2) The only definition of a log format is the prefix and the message is a set 
of
fields defined by the service actually doing the logging.

So, actually, most of my concern at this point wasn't the line format.
It was the concern about when projects were calling the loggers, and 
what kind

of information should be logged at each level.

Given that most projects are using the oslo defaults today, much of 
the line
format is handled. I think that if you have concerns on that front, 
it's probably a

different conversation with the oslo team.

I do agree we should standardize a little more on project (i.e. logger 
name),

because in most projects this is just defaulting to module.
Which is fine for debug level, but not very user friendly at ops 
levels.


I'd just love to see the ability in the python logger to include the
application name, not just the class/module that created the log
message (it's in 3.something but I don't think we can justify a
switch to Python 3 just based on logging!):

datetime LEVEL PID PROGRAM-NAME (i.e. nova-compute) module
stuff more_stuff even_more_stuff

At the moment, all of the above is possible except for the
PROGRAM_NAME part.  Is there anything we can do to add this to the
context or similar?

Matt


Wish granted. :-)

https://github.com/openstack/oslo-incubator/commit/8c3046b78dca8eae1d911e3421b5938c19f20c37

The plan is to turn that on by default as soon as we can get through the 
deprecation process for the existing format.


Related to John's comments on the line format, I know that has come up 
before.  I gather the default log format in Oslo started life as a 
direct copy of the Nova log code, so there seem to be some nova-isms 
left there.  I'm open to suggestions on how to make the default better, 
and probably how to allow each project to specify its own default format 
since I doubt we're going to find one that satisfies everyone.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-31 Thread Georgy Okrokvertskhov
Hi,

There is a stackforge project, Mistral, which aims to provide a generic
workflow service. I believe Zane mentioned it in his previous e-mail.
Currently, this project is at a pilot stage. Mistral has a working pilot with
all core components implemented, and right now we are finalizing the DSL syntax
for task definitions. Mistral can call any API endpoint which is defined in
a task, and Mistral exposes hooks to trigger workflow execution on some
external event.

There will be a meetup where Renat Akhmerov (Mistral lead) will present
Mistral, its use cases and current status of the project followed by a
demo. Here is a link: http://www.meetup.com/openstack/events/163020092/

We plan to finish Mistral core development during the Icehouse release and
apply for incubation. I think in the J release Mistral can be used by other
OpenStack projects, as all bits and pieces will be available by that time.

Thanks
Georgy


On Fri, Jan 31, 2014 at 4:53 AM, Hugh Brock hbr...@redhat.com wrote:

  On Jan 31, 2014, at 1:30 AM, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Zane Bitter's message of 2014-01-30 19:30:40 -0800:
  On 30/01/14 16:54, Clint Byrum wrote:
  I'm pretty sure it is useful to model images in Heat.
 
  Consider this scenario:
 
 
  resources:
build_done_handle:
  type: AWS::CloudFormation::WaitConditionHandle
build_done:
  type: AWS::CloudFormation::WaitCondition
  properties:
handle: {Ref: build_done_handle}
build_server:
  type: OS::Nova::Server
  properties:
image: build-server-image
userdata:
  join [ "",
- "#!/bin/bash\n"
- "build_an_image\n"
- "cfn-signal -s SUCCESS "
- {Ref: build_done_handle}
- "\n"]
built_image:
  type: OS::Glance::Image
  depends_on: build_done
  properties:
fetch_url: join [ "", ["http://", {get_attribute: [
 build_server, fixed_ip ]}, "/image_path"]]
actual_server:
  type: OS::Nova::Server
  properties:
image: {Ref: built_image}
 
 
  Anyway, seems rather useful. Maybe I'm reaching.
 
  Well, consider that when this build is complete you'll still have the
  server you used to build the image still sitting around. Of course you
  can delete the stack to remove it - and along with it will go the image
  in Glance. Still seem useful?
 
  No, not as such. However I have also discussed with other users having
  an OS::Heat::TemporaryServer which is deleted after a wait condition is
  signaled (resurrected on each update). This would be useful for hosting
  workflow code as the workflow doesn't actually need to be running all
  the time. It would also be useful for heat resources that want to run
  code that needs to be contained into their own VM/network such as the
  port probe thing that came up a few weeks ago.
 
  Good idea? I don't know. But it is the next logical step my brain keeps
  jumping to for things like this.
 
  (I'm conveniently ignoring the fact that you could have set
  DeletionPolicy: Retain on the image to hack your way around this.)
 
  What you're looking for is a workflow service (I think it's called
  Mistral this week?). A workflow service would be awesome, and Heat is
  pretty awesome, but Heat is not a workflow service.
 
  Totally agree. I think workflow and orchestration have an unusual
  relationship though, because orchestration has its own workflow that
  users will sometimes need to defer to. This is why we use wait
  conditions, right?
 
  So yeah, Glance images in Heat might be kinda useful, but at best as a
  temporary hack to fill in a gap because the Right Place to implement it
  doesn't exist yet. That's why I feel ambivalent about it.
 
  I think you've nudged me away from optimistic at least closer to
  ambivalent as well.

 We (RH tripleo folks) were having a similar conversation around Heat and
 stack upgrades the other day. There is unquestionably a workflow involving
 stack updates when a user goes to upgrade their overcloud, and it's awkward
 trying to shoehorn it into Heat (Steve Dake agreed). Our first thought was
 "Tuskar should do that", but our second thought was "Whatever the workflow
 service is should do that", and Tuskar should maybe provide a shorthand API
 for it.

 I feel like we (tripleo) need to take a harder look at getting a working
 workflow thing available for our needs, soon...

 --Hugh

 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list

Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-31 Thread Clint Byrum
Excerpts from Hugh Brock's message of 2014-01-31 04:53:11 -0800:
  On Jan 31, 2014, at 1:30 AM, Clint Byrum cl...@fewbar.com wrote:
  
  Excerpts from Zane Bitter's message of 2014-01-30 19:30:40 -0800:
  On 30/01/14 16:54, Clint Byrum wrote:
  I'm pretty sure it is useful to model images in Heat.
  
  Consider this scenario:
  
  
  resources:
build_done_handle:
  type: AWS::CloudFormation::WaitConditionHandle
build_done:
  type: AWS::CloudFormation::WaitCondition
  properties:
handle: {Ref: build_done_handle}
build_server:
  type: OS::Nova::Server
  properties:
image: build-server-image
userdata:
  join [ "",
- "#!/bin/bash\n"
- "build_an_image\n"
- "cfn-signal -s SUCCESS "
- {Ref: build_done_handle}
- "\n"]
built_image:
  type: OS::Glance::Image
  depends_on: build_done
  properties:
fetch_url: join [ "", ["http://", {get_attribute: [ build_server, 
  fixed_ip ]}, "/image_path"]]
actual_server:
  type: OS::Nova::Server
  properties:
image: {Ref: built_image}
  
  
  Anyway, seems rather useful. Maybe I'm reaching.
  
  Well, consider that when this build is complete you'll still have the 
  server you used to build the image still sitting around. Of course you 
  can delete the stack to remove it - and along with it will go the image 
  in Glance. Still seem useful?
  
  No, not as such. However I have also discussed with other users having
  an OS::Heat::TemporaryServer which is deleted after a wait condition is
  signaled (resurrected on each update). This would be useful for hosting
  workflow code as the workflow doesn't actually need to be running all
  the time. It would also be useful for heat resources that want to run
  code that needs to be contained into their own VM/network such as the
  port probe thing that came up a few weeks ago.
  
  Good idea? I don't know. But it is the next logical step my brain keeps
  jumping to for things like this.
  
  (I'm conveniently ignoring the fact that you could have set 
  DeletionPolicy: Retain on the image to hack your way around this.)
  
  What you're looking for is a workflow service (I think it's called 
  Mistral this week?). A workflow service would be awesome, and Heat is 
  pretty awesome, but Heat is not a workflow service.
  
  Totally agree. I think workflow and orchestration have an unusual
  relationship though, because orchestration has its own workflow that
  users will sometimes need to defer to. This is why we use wait
  conditions, right?
  
  So yeah, Glance images in Heat might be kinda useful, but at best as a 
  temporary hack to fill in a gap because the Right Place to implement it 
  doesn't exist yet. That's why I feel ambivalent about it.
  
  I think you've nudged me away from optimistic at least closer to
  ambivalent as well.
 
 We (RH tripleo folks) were having a similar conversation around Heat and 
 stack upgrades the other day. There is unquestionably a workflow involving 
 stack updates when a user goes to upgrade their overcloud, and it's awkward 
 trying to shoehorn it into Heat (Steve Dake agreed). Our first thought was 
 "Tuskar should do that", but our second thought was "Whatever the workflow 
 service is should do that", and Tuskar should maybe provide a shorthand API 
 for it.
 
 I feel like we (tripleo) need to take a harder look at getting a working 
 workflow thing available for our needs, soon...
 

I agree that is a thought that enters my mind as well.

However, I don't know if we're so much shoe-horning the upgrade workflow
into Heat as we are making sure Heat's internal workflow is useful so
that users don't have to write their own workflow to make the expressed
stack a reality.

I'll be starting work on rolling updates for Heat soon. That will provide
users with a way to express that they'd like parallelizable updates to
a certain group to be controlled by a certain waitcondition state. That
feels like workflow because we know the implementation is definitely a
workflow. However, to the user it is just declaring the rules; they're
not actually deciding how to do it.
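To make that concrete, the kind of declarative rules I have in mind might look
roughly like this (a hypothetical sketch only; none of these key names are
settled, and the actual interface will be whatever the blueprint lands on):

    # Hypothetical rolling-update rules attached to a scaled group.
    rolling_update_policy = {
        'min_in_service': 2,       # keep at least this many members up
        'max_batch_size': 3,       # update at most this many members in parallel
        'pause_time': 30,          # seconds to wait between batches
        'wait_condition': 'member_ready_handle',  # signal gating each batch
    }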

I would hope that having generic workflows inside heat would free users
from drudgery, and they can write workflows when the flow is sufficiently
complex.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-31 Thread Macdonald-Wallace, Matthew
 -Original Message-
 From: Ben Nemec [mailto:openst...@nemebean.com]
 Sent: 31 January 2014 16:01
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Proposed Logging Standards
  I'd just love to see the ability in the python logger to include the
  application name, not just the class/module that created the log
  message (it's in 3.something but I don't think we can justify a
  switch to Python 3 just based on logging!):
 
  datetime LEVEL PID PROGRAM-NAME (i.e. nova-compute) module
  stuff more_stuff even_more_stuff
 
  At the moment, all of the above is possible except for the
  PROGRAM_NAME part.  Is there anything we can do to add this to the
  context or similar?
 
  Matt
 
 Wish granted. :-)

YAY! \o/
 
 https://github.com/openstack/oslo-
 incubator/commit/8c3046b78dca8eae1d911e3421b5938c19f20c37
 
 The plan is to turn that on by default as soon as we can get through the
 deprecation process for the existing format.

Awesome, good to hear.

Thanks,

Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Sandhya Dasu (sadasu)
Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing SRIOVPortProfileMixin would create yet another way to take 
care of extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.com, Robert 
Kukura rkuk...@redhat.com, Sandhya Dasu 
sad...@cisco.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV 
port related attributes

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to binding 
extension. For persistency it should be added in PortBindingMixin for non ML2. 
I didn’t think to cover it as part of ML2 vnic_type bp.
For the rest attributes, need to see what Bob plans.

 -- is a neutron agent making decision based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.
[IrenaB] vnic_type is input parameter that will eventually cause certain 
vif_type to be sent to GenericVIFDriver and create network interface. Neutron 
agents periodically scan for attached interfaces. For example, OVS agent will 
look only for OVS interfaces, so if SRIOV interface is created, it won’t be 
discovered by OVS agent.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about Nova BP.

2014-01-31 Thread Joe Gordon
Including openstack-dev ML in response.


On Fri, Jan 31, 2014 at 8:14 AM, wingwj win...@gmail.com wrote:
 Hi, Mr Gordon,

 Firstly, sorry for my lately reply for this BP..
 https://blueprints.launchpad.net/nova/+spec/driver-for-huawei-fusioncompute

 Honestly speaking, we wrote the first FusionCompute Nova driver for the Folsom 
 release, and it has now been updated for Havana. We maintain it ourselves.

 Now I have a question about your suggestion in the whiteboard of this BP:
 Is the CI environment a required item for this BP? Huawei is now preparing 
 the CI environment for Nova & Neutron.
 But due to the company's policy, it's not an easy thing to realize rapidly. 
 We'll try our best for it.

Yes, CI is a requirement for adding  a new driver, please see:

https://wiki.openstack.org/wiki/HypervisorSupportMatrix
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011260.html
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan


 So can we commit the codes, and prepare the CI at the same time?

That's a good question. I don't think that is feasible for Icehouse,
but as far as I know we haven't fully discussed how to introduce new
drivers now that we have the third-party testing requirement.


An alternate option is to add FusionCompute support to libvirt, and
since nova already supports libvirt you will get nova support
automatically.



 P.S. We're also preparing some materials introducing FusionCompute, 
 as Dan suggested in the whiteboard of this BP. But this week is the Chinese 
 Spring Festival holiday, so this work may be finished a bit later; I hope you 
 can understand.


 Thanks very much.

 WingWJ

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Kukura
On 01/30/2014 03:44 PM, Robert Li (baoli) wrote:
 Hi,
 
 We made a lot of progress today. We agreed that:
 -- vnic_type will be a top level attribute as binding:vnic_type
 -- BPs:
  * Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for
 binding:vnic_type
  * Bob to submit a BP for binding:profile in ML2. SRIOV input info
 will be encapsulated in binding:profile

This is https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile.

  * Bob to submit a BP for binding:vif_details in ML2. SRIOV output
 info will be encapsulated in binding:vif_details, which may include
 other information like security parameters. For SRIOV, vlan_id and
 profileid are candidates.

This is https://blueprints.launchpad.net/neutron/+spec/vif-details.

 -- new arguments for port-create will be implicit arguments. Future
 release may make them explicit. New argument: --binding:vnic_type
 {virtio, direct, macvtap}. 
 I think that currently we can make do without the profileid as an input
 parameter from the user. The mechanism driver will return a profileid in
 the vif output.

By vif output here, do you mean binding:vif_details? If so, do we know
how the MD gets the value to return?

 
 Please correct any misstatement in above.

Sounds right to me.

 
 Issues: 
   -- do we need a common utils/driver for SRIOV generic parts to be used
 by individual Mechanism drivers that support SRIOV? More details on what
 would be included in this sriov utils/driver? I'm thinking that a
 candidate would be the helper functions to interpret the pci_slot, which
 is proposed as a string. Anything else in your mind? 

I'd suggest looking at the
neutron.plugins.ml2.drivers.mech_agent.AgentMechanismDriverBase class
that is inherited by the various MDs that use L2 agents. This handles
most of what the MDs need to do, and the derived classes only deal with
details specific to that L2 agent. Maybe a similar
SriovMechanismDriverBase class would make sense.
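
To make that a bit more concrete, here is a very rough, untested sketch of
what such a base class might look like (class, attribute, and helper names
are hypothetical, and it assumes the vnic_type and vif_details work discussed
in this thread has landed):

    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api


    class SriovMechanismDriverBase(api.MechanismDriver):
        """Hypothetical common logic for MDs that bind SR-IOV ports."""

        def __init__(self, vif_type, supported_vnic_types):
            self.vif_type = vif_type
            self.supported_vnic_types = supported_vnic_types

        def initialize(self):
            pass

        def bind_port(self, context):
            # Only attempt to bind ports requesting an SR-IOV vnic_type.
            vnic_type = context.current.get(portbindings.VNIC_TYPE)
            if vnic_type not in self.supported_vnic_types:
                return
            for segment in context.network.network_segments:
                if self.check_segment(segment):
                    context.set_binding(segment[api.ID], self.vif_type,
                                        self.get_vif_details(context, segment))
                    return

        @staticmethod
        def parse_pci_slot(pci_slot):
            # Helper to interpret the pci_slot string, e.g. '0000:06:10.1'.
            domain_bus, _, slot_function = pci_slot.rpartition(':')
            domain, _, bus = domain_bus.partition(':')
            slot, _, function = slot_function.partition('.')
            return domain, bus, slot, function

        def check_segment(self, segment):
            """Subclasses decide which segment types they can bind."""
            raise NotImplementedError

        def get_vif_details(self, context, segment):
            """Subclasses supply e.g. vlan_id/profileid for the VIF driver."""
            raise NotImplementedError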

 
   -- what should mechanism drivers put in binding:vif_details and how
 nova would use this information? as far as I see it from the code, a VIF
 object is created and populated based on information provided by neutron
 (from get network and get port)

I think nova should include the entire binding:vif_details attribute in
its VIF object so that the GenericVIFDriver can interpret whatever
key/value pairs are needed (based on the binding:vif_type). We are going
to need to work closely with the nova team to make this so.
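
Purely for illustration (the key names below are made up, and the real set
would be defined by the VIF security and SR-IOV blueprints), the port data
nova receives might end up looking something like this:

    # Hypothetical port as seen by nova; binding:vif_details carries whatever
    # the bound MechanismDriver chose to expose for this vif_type.
    port = {
        'id': 'PORT_UUID',
        'binding:vif_type': 'hw_veb',
        'binding:vnic_type': 'direct',
        'binding:vif_details': {
            'port_filter': False,   # VIF security key
            'vlan_id': 100,         # SR-IOV-specific key
            'profileid': 'PROFILE_ID',
        },
    }

    # The GenericVIFDriver would branch on vif_type and read only the
    # vif_details keys that type defines:
    details = port['binding:vif_details']
    if port['binding:vif_type'] == 'hw_veb':
        vlan = details.get('vlan_id')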

 
 Questions:
   -- nova needs to work with both ML2 and non-ML2 plugins. For regular
 plugins, binding:vnic_type will not be set, I guess. Then would it be
 treated as a virtio type? And if a non-ML2 plugin wants to support
 SRIOV, would it need to  implement vnic-type, binding:profile,
 binding:vif-details for SRIOV itself?

Makes sense to me.

 
  -- is a neutron agent making decisions based on the binding:vif_type?
  In that case, it makes sense for binding:vnic_type not to be exposed to
 agents.

I'm not sure I understand what an L2 agent would do with this. As I've
mentioned, I think ML2 will eventually allow the bound MD to add
whatever info it needs to the response returned for the
get_device_details RPC. If the vnic_type is needed in an SRIOV-specific
L2 agent, that should allow the associated driver to supply it.

 
 Thanks,
 Robert

-Bob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] PXE driver deploy issues

2014-01-31 Thread Devananda van der Veen
I think your driver should implement a wrapper around both VendorPassthru
interfaces and call each appropriately, depending on the request. This
keeps each VendorPassthru driver separate, and encapsulates the logic about
when to call each of them in the driver layer.
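
Something along these lines is what I have in mind -- just a sketch, with
made-up method names for the SeaMicro side, not the actual Ironic interfaces:

    class MultiplexedVendorPassthru(object):
        """Dispatch vendor calls to the PXE or SeaMicro implementation."""

        def __init__(self, pxe_vendor, seamicro_vendor):
            self.routes = {
                # Deploy-related calls go to the PXE vendor interface.
                'pass_deploy_info': pxe_vendor,
                # Hypothetical SeaMicro-specific vendor methods.
                'attach_volume': seamicro_vendor,
                'set_node_vlan_id': seamicro_vendor,
            }

        def _route(self, method):
            try:
                return self.routes[method]
            except KeyError:
                raise ValueError("Unsupported vendor method: %s" % method)

        def validate(self, task, node, **kwargs):
            self._route(kwargs.get('method')).validate(task, node, **kwargs)

        def vendor_passthru(self, task, node, **kwargs):
            return self._route(kwargs.get('method')).vendor_passthru(
                task, node, **kwargs)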

As an aside, this is a code path (multiplexed VendorPassthru interfaces)
that we haven't exercised yet, but has come up in other discussions
recently too, so if you run into something else that looks awkward, please
jump into IRC and we'll help hash it out.

Cheers,
Devananda


On Fri, Jan 31, 2014 at 1:34 AM, Rohan Kanade openst...@rohankanade.comwrote:

  The deploy ramdisk should ping back the Ironic API and call the
  vendor_passthru/pass_deploy_info with the iSCSI information, etc. So,
  make sure you've built your deploy ramdisk after this patch landed on

 Any strategies on how to verify if the ramdisk has been deployed on the
 server?

 Also, I am using different Power and VendorPassthru interfaces (unreleased
 SeaMicro), and I am using PXE only for the Deploy interface. How can
 pass_deploy_info be called by the ramdisk since it is not implemented by
 the SeaMicro VendorPassthru?

 Regards,
 Rohan Kanade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

2014-01-31 Thread Robert Kukura
On 01/29/2014 10:26 AM, Robert Kukura wrote:
 The neutron patch [1] and nova patch [2], proposed to resolve the
 get_firewall_required should use VIF parameter from neutron bug [3],
 replace the binding:capabilities attribute in the neutron portbindings
 extension with a new binding:vif_security attribute that is a dictionary
 with several keys defined to control VIF security. When using the ML2
 plugin, this binding:vif_security attribute flows from the bound
 MechanismDriver to nova's GenericVIFDriver.
 
 Separately, work on PCI-passthru/SR-IOV for ML2 also requires
 binding-specific information to flow from the bound MechanismDriver to
 nova's GenericVIFDriver. See [4] for links to various documents and BPs
 on this.
 
 A while back, in reviewing [1], I suggested a general mechanism to allow
 ML2 MechanismDrivers to supply arbitrary port attributes in order to
 meet both the above requirements. That approach was incorporated into
 [1] and has been cleaned up and generalized a bit in [5].
 
 I'm now becoming convinced that proliferating new port attributes for
 various data passed from the neutron plugin (the bound MechanismDriver
 in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
 One issue is that adding attributes keeps changing the API, but this
 isn't really a user-facing API. Another is that all ports should have
 the same set of attributes, so the plugin still has to be able to supply
 those attributes when a bound MechanismDriver does not supply them. See [5].
 
 Instead, I'm proposing here that the binding:vif_security attribute
 proposed in [1] and [2] be renamed binding:vif_details, and used to
 transport whatever data needs to flow from the neutron plugin (i.e.
 ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
 dictionary attribute would be able to carry the VIF security key/value
 pairs defined in [1], those needed for [4], as well as any needed for
 future GenericVIFDriver features. The set of key/value pairs in
 binding:vif_details that apply would depend on the value of
 binding:vif_type.

I've filed a blueprint for this:

 https://blueprints.launchpad.net/neutron/+spec/vif-details

Also, for a similar flow of binding-related information into the
plugin/MechanismDriver, I've filed a blueprint to implement the existing
binding:profile attribute in ML2:

 https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile

Both of these are admin-only dictionary attributes on port. One is
read-only for output data, the other read-write for input data. Together
they enable optional features like SR-IOV PCI passthrough to be
implemented in ML2 MechanismDrivers without requiring feature-specific
changes to the plugin itself.
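
As a purely illustrative sketch of how the two attributes complement each
other (the binding:profile contents below are hypothetical examples, not a
settled format):

    # Input flows in via binding:profile at port create/update time;
    # output flows back via binding:vif_type / binding:vif_details.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    port = neutron.create_port({'port': {
        'network_id': 'NET_UUID',
        'binding:vnic_type': 'direct',
        'binding:profile': {'pci_vendor_info': '8086:10ca',
                            'pci_slot': '0000:06:10.1'},
    }})['port']

    print(port.get('binding:vif_type'))
    print(port.get('binding:vif_details'))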

-Bob

 
 If this proposal is agreed to, I can quickly write a neutron BP covering
 this and provide a generic implementation for ML2. Then [1] and [2]
 could be updated to use binding:vif_details for the VIF security data
 and eliminate the existing binding:capabilities attribute.
 
 If we take this proposed approach of using binding:vif_details, the
 internal ML2 handling of binding:vif_type and binding:vif_details could
 either take the approach used for binding:vif_type and
 binding:capabilities in the current code, where the values are stored in
 the port binding DB table. Or they could take the approach in [5] where
 they are obtained from bound MechanismDriver when needed. Comments on
 these options are welcome.
 
 Please provide feedback on this proposal and the various options in this
 email thread and/or at today's ML2 sub-team meeting.
 
 Thanks,
 
 -Bob
 
 [1] https://review.openstack.org/#/c/21946/
 [2] https://review.openstack.org/#/c/44596/
 [3] https://bugs.launchpad.net/nova/+bug/1112912
 [4] https://wiki.openstack.org/wiki/Meetings/Passthrough
 [5] https://review.openstack.org/#/c/69783/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-31 Thread Tzu-Mainn Chen
So after reading the replies on this thread, it seems like I (and others
advocating a custom scheduler) may have overthought things a bit. The reason
this route was suggested was the conflicting goals for Icehouse:

a) homogeneous nodes (to simplify requirements)
b) support diverse hardware sets (to allow as many users as possible to try 
Tuskar)

Option b) requires either a custom scheduler or forcing nodes to have the same
attributes, and the answer to that question is where much of the debate lies.

However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document. . .
   - **unsupported** means of demoing Tuskar (set node attributes to match
     flavors, hack the scheduler, etc.)
   - our goals of supporting heterogeneous nodes for the J-release.

Does this seem reasonable to everyone?


Mainn

- Original Message -
 On 30 January 2014 23:26, Tomas Sedovic tsedo...@redhat.com wrote:
  Hi all,
 
  I've seen some confusion regarding the homogenous hardware support as the
  first step for the tripleo UI. I think it's time to make sure we're all on
  the same page.
 
  Here's what I think is not controversial:
 
  1. Build the UI and everything underneath to work with homogenous hardware
  in the Icehouse timeframe
  2. Figure out how to support heterogenous hardware and do that (may or may
  not happen within Icehouse)
 
  The first option implies having a single nova flavour that will match all
  the boxes we want to work with. It may or may not be surfaced in the UI (I
  think that depends on our undercloud installation story).
 
 I don't agree that (1) implies a single nova flavour. In the context
 of the discussion it implied avoiding doing our own scheduling, and
 due to the many moving parts we never got beyond that.
 
 My expectation is that (argh naming of things) a service definition[1]
 will specify a nova flavour, right from the get go. That gives you
 homogeneous hardware for any service
 [control/network/block-storage/object-storage].
 
 Jaromir's wireframes include the ability to define multiple such
 definitions, so two definitions for compute, for instance (e.g. one
 might be KVM, one Xen, or one w/GPUs and the other without, with a
 different host aggregate configured).
 
 As long as each definition has a nova flavour, users with multiple
 hardware configurations can just create multiple definitions, done.
 
 That is not entirely policy driven, so for the longer term you want to be
 able to say 'flavour X *or* Y can be used for this', but as an early
 iteration it seems very straightforward to me.
 
  Now, someone (I don't honestly know who or when) proposed a slight step up
  from point #1 that would allow people to try the UI even if their hardware
  varies slightly:
 
  1.1 Treat similar hardware configuration as equal
 
 I think this is a problematic idea, because of the points raised
 elsewhere in the thread.
 
 But more importantly, it's totally unnecessary. If one wants to handle
 minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
 them as being identical, with the lowest common denominator - Nova
 will then treat them as equal.
 
 -Rob
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Kukura
On 01/31/2014 11:45 AM, Sandhya Dasu (sadasu) wrote:
 Hi Irena,
   I was initially looking at
 https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info
 to take care of the extra information required to set up the SR-IOV port.
 When the scope of the BP was being decided, we had very little info
 about our own design so I didn't give any feedback about SR-IOV ports.
 But, I feel that this is the direction we should be going. Maybe we
 should target this in Juno.

That BP covers including additional information from the bound network
segment's TypeDriver in the response to the get_device_details RPC. I
believe the bound MechanismDriver should also have the opportunity to
include additional information in that response. Possibly the bound
MechanismDriver is what would decide what information from the bound
segment's TypeDriver is needed by the L2 agent it supports. Anyway, I'm
still hopeful we can get this sorted out and implemented in Icehouse,
but I agree it's best not to depend on it until Juno.
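
For illustration only -- key names here are hypothetical -- the idea is
roughly that today's get_device_details response could be extended by the
bound TypeDriver and/or MechanismDriver with whatever extra keys its L2
agent needs:

    # Roughly what the agent gets back today.
    base_details = {
        'device': 'DEVICE_ID',
        'port_id': 'PORT_UUID',
        'network_id': 'NET_UUID',
        'admin_state_up': True,
        'network_type': 'vlan',
        'segmentation_id': 100,
        'physical_network': 'physnet1',
    }

    # The same response with hypothetical SR-IOV-specific additions.
    extended_details = dict(base_details,
                            vnic_type='direct',
                            profileid='PROFILE_ID')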

 
 Introducing SRIOVPortProfileMixin would be creating yet another way to
 take care of extra port config. Let me know what you think.

This SRIOVPortProfileMixin has been mentioned a few times now. I'm not
clear on what this class is intended to be mixed into. Is this something
that would be mixed into any plugin that supports SRIOV?

If so, I'd prefer not to use such a mixin class in ML2, where we've so
far been avoiding the need to add any specific support for SRIOV to the
plugin itself. Instead we've been trying to define generic features in
ML2 that allow SRIOV to be packaged as an optional feature enabled by
configuring a MechanismDriver that supports it. This approach is a prime
example of the modular goal of Modular Layer 2.

-Bob

 
 Thanks,
 Sandhya
 
 From: Irena Berezovsky ire...@mellanox.com
 Date: Thursday, January 30, 2014 4:13 PM
 To: Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com,
 Sandhya Dasu sad...@cisco.com, OpenStack Development Mailing List
 (not for usage questions) openstack-dev@lists.openstack.org,
 Brian Bowen (brbowen) brbo...@cisco.com
 Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
 Jan. 30th
 
 Robert,
 
 Thank you very much for the summary.
 
 Please, see inline
 
  
 
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Thursday, January 30, 2014 10:45 PM
 To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack
 Development Mailing List (not for usage questions); Brian Bowen (brbowen)
 Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
 Jan. 30th
 
  
 
 Hi,
 
  
 
 We made a lot of progress today. We agreed that:
 
 -- vnic_type will be a top level attribute as binding:vnic_type
 
 -- BPs:
 
  * Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for
 binding:vnic_type
 
  * Bob to submit a BP for binding:profile in ML2. SRIOV input info
 will be encapsulated in binding:profile
 
  * Bob to submit a BP for binding:vif_details in ML2. SRIOV output
 info will be encapsulated in binding:vif_details, which may include
 other information like security parameters. For SRIOV, vlan_id and
 profileid are candidates.
 
 -- new arguments for port-create will be implicit arguments. Future
 release may make them explicit. New argument: --binding:vnic_type
 {virtio, direct, macvtap}. 
 
 I think that currently we can make do without the profileid as an input
 parameter from the user. The mechanism driver will return a profileid in
 the vif output.
 
  
 
 Please correct any misstatement in above.
 
  
 
 Issues: 
 
   -- do we need a common utils/driver for SRIOV generic parts to be used
 by individual Mechanism drivers that support SRIOV? More details on what
 would be included in this sriov utils/driver? I'm thinking that a
 candidate would be the helper functions to interpret the pci_slot, which
 is proposed as a string. Anything else in your mind? 
 
 [IrenaB] I thought of some SRIOVPortProfileMixin to handle and persist
 SRIOV port-related attributes.
 
  
 
   -- what should mechanism drivers put in binding:vif_details and how
 nova would use this information? as far as I see it from the code, a VIF
 object is created and populated based on information provided by neutron
 (from get network and get port)
 
  
 
 Questions:
 
   -- nova needs to work with both ML2 and non-ML2 plugins. For regular
 plugins, binding:vnic_type will not be set, I guess. Then would it be
 treated as a virtio type? And if a non-ML2 plugin wants to support
 SRIOV, would it need to  implement vnic-type, binding:profile,
 binding:vif-details for SRIOV itself?
 
 [IrenaB] vnic_type will be added as an additional attribute to the binding
 extension. For persistency it should be added in 

[openstack-dev] [diskimage-builder]

2014-01-31 Thread Robert Nettleton
HI All,

I have a question regarding the supported platforms for the diskimage-builder 
tool.

I’ve been using Ubuntu for image generation lately, and that seems to work 
fine.  

At some point, I had hoped to run DIB on CentOS (6.4), but ran into problems 
when I tried this.  In particular, the default Python version included in 
CentOS 6.4 is 2.6.6, and it looks like DIB requires at least 2.7.  

Yesterday, I saw the following bug posting: 

https://bugs.launchpad.net/diskimage-builder/+bug/1274785

in which it looks like DIB may be running on CentOS 6.5.  

Has anyone had any luck in getting diskimage-builder to run on CentOS?  If so, 
is there any online documentation that explains the configuration/patches 
required to make this work?  

thanks,
Bob
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-01-31 Thread Boris Pavlovic
Jay,

Yep, we shouldn't use migrations for sqlite at all.

The major issue that we have now is that we are not able to ensure that the DB
schema created by migrations and the one created from the models are the same
(actually they are not the same).

So before dropping support for migrations on sqlite and switching to
model-based schema creation, we should add tests that check that models and
migrations are in sync.
(we are working on this)
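
A rough sketch of the kind of test I mean (module and config names below are
placeholders; the real implementation will differ):

    import sqlalchemy as sa
    from alembic import command
    from alembic.config import Config


    def get_migrated_metadata(db_url):
        # Run all alembic migrations against a throwaway database and
        # reflect the resulting schema.
        cfg = Config('alembic.ini')  # placeholder path
        cfg.set_main_option('sqlalchemy.url', db_url)
        command.upgrade(cfg, 'head')
        metadata = sa.MetaData()
        metadata.reflect(bind=sa.create_engine(db_url))
        return metadata


    def check_models_synced(models_metadata, db_url):
        # Compare table and column names between the migrated schema and
        # the schema declared by the SQLAlchemy models.
        migrated = get_migrated_metadata(db_url)
        assert sorted(migrated.tables) == sorted(models_metadata.tables)
        for name, table in models_metadata.tables.items():
            model_cols = set(table.columns.keys())
            migrated_cols = set(migrated.tables[name].columns.keys())
            assert model_cols == migrated_cols, (
                'Table %s differs: %s vs %s'
                % (name, model_cols, migrated_cols))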



Best regards,
Boris Pavlovic


On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev alaza...@mirantis.comwrote:

 Trevor,

 Such a check could be useful on the alembic side too. Good opportunity for
 a contribution.

 Andrew.


 On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay tmc...@redhat.com wrote:

 Okay,  I can accept that migrations shouldn't be supported on sqlite.

 However, if that's the case then we need to fix up savanna-db-manage so
 that it checks the db connection info and throws a polite error to the
 user for attempted migrations on unsupported platforms. For example:

 Database migrations are not supported for sqlite

 Because, as a developer, when I see a sql error trace as the result of
 an operation I assume it's broken :)

 Best,

 Trevor

 On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
  On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
   I was playing with alembic migration and discovered that
   op.drop_column() doesn't work with sqlite.  This is because sqlite
   doesn't support dropping a column (broken imho, but that's another
   discussion).  Sqlite throws a syntax error.
  
    To make this work with sqlite, you have to copy the table to a temporary
    one excluding the column(s) you don't want, delete the old one, and then
    rename the new table.
  
   The existing 002 migration uses op.drop_column(), so I'm assuming it's
   broken, too (I need to check what the migration test is doing).  I was
   working on an 003.
  
   How do we want to handle this?  Three good options I can think of:
  
   1) don't support migrations for sqlite (I think no, but maybe)
  
   2) Extend alembic so that op.drop_column() does the right thing (more
   open-source contributions for us, yay :) )
  
    3) Add our own wrapper in savanna so that we have a drop_column() method
    that wraps copy/rename.
  
   Ideas, comments?
 
  Migrations should really not be run against SQLite at all -- only on the
  databases that would be used in production. I believe the general
  direction of the contributor community is to be consistent around
  testing of migrations and to not run migrations at all in unit tests
  (which use SQLite).
 
  Boris (cc'd) may have some more to say on this topic.
 
  Best,
  -jay
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder]

2014-01-31 Thread Fox, Kevin M
Yeah, we are running it on RHEL 6.5 and it seems to just work. We haven't tried
CentOS 6.5 specifically, but it should work the same.

Thanks,
Kevin

From: Robert Nettleton [rnettle...@hortonworks.com]
Sent: Friday, January 31, 2014 1:34 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [diskimage-builder]

HI All,

I have a question regarding the supported platforms for the diskimage-builder 
tool.

I’ve been using Ubuntu for image generation lately, and that seems to work fine.

At some point, I had hoped to run DIB on CentOS (6.4), but ran into problems 
when I tried this.  In particular, the default Python version included in 
CentOS 6.4 is 2.6.6, and it looks like DIB requires at least 2.7.

Yesterday, I saw the following bug posting:

https://bugs.launchpad.net/diskimage-builder/+bug/1274785

in which it looks like DIB may be running on CentOS 6.5.

Has anyone had any luck in getting diskimage-builder to run on CentOS?  If so, 
is there any online documentation that explains the configuration/patches 
required to make this work?

thanks,
Bob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-31 Thread Devananda van der Veen
On Fri, Jan 31, 2014 at 1:03 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

 So after reading the replies on this thread, it seems like I (and others
 advocating
 a custom scheduler) may have overthought things a bit.  The reason this
 route was
 suggested was because of conflicting goals for Icehouse:

 a) homogeneous nodes (to simplify requirements)
 b) support diverse hardware sets (to allow as many users as possible to
 try Tuskar)

 Option b) requires either a custom scheduler or forcing nodes to have the
 same attributes,
 and the answer to that question is where much of the debate lies.

 However, taking a step back, maybe the real answer is:

 a) homogeneous nodes
 b) document. . .
- **unsupported** means of demoing Tuskar (set node attributes to
 match flavors, hack
  the scheduler, etc)
- our goals of supporting heterogeneous nodes for the J-release.

 Does this seem reasonable to everyone?


+1

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican Incubation Review

2014-01-31 Thread Chad Lung
This is a follow-up to Jarret Raim's email regarding Barbican's incubation
review:

http://lists.openstack.org/pipermail/openstack-dev/2014-January/025860.html

Please note that the PR for Barbican's DevStack integration can now be
found here:

https://review.openstack.org/#/c/70512/

Thanks for any feedback or comments.

Chad Lung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-01-31 Thread Shixiong Shang
Hi, Sean:

Thanks a bunch for the new code. I glanced through it once, and if I understand
it correctly, the values of the two parameters are saved as attributes of a
subnet. In other words, I can retrieve the values by:

subnet.ipv6_ra_mode
subnet.ipv6_address_mode

Is that correct? Would you please confirm?

Thanks and have a great weekend!

Shixiong






 On Jan 30, 2014, at 6:59 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:
 
 I just pushed a new patch that adds the two new attributes to Subnets.
 
 https://review.openstack.org/#/c/52983/
 
 It's a very rough draft - but I wanted to get it out the door so people
 had sufficient time to take a look, so we can discuss at the next IRC
 meeting.
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev