Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread Denis Makogon
Agreed with Max.
With Nova you can use the file injection mechanism: you just need to build a
dictionary of file paths and file contents. I do agree that it's not exactly
the same as what you want, but it's a perfectly valid way to inject files.
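For illustration, a minimal sketch of the CLI form of file injection (the
image, flavor and paths are placeholders, not from this thread):

  # inject a local file into the guest at boot time
  nova boot --image <image> --flavor <flavor> \
    --file /etc/myapp/config.json=./config.json my-vm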

Best regards,
Denis Makogon

On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use Nova Metadata Service [1] or Nova Config Drive [2]. There are
 options to pass Key-Value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html
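 As a rough sketch (the names and values below are placeholders), key/value
 metadata and a config drive can be requested at boot, and the metadata can
 then be read from inside the guest:

   nova boot --image <image> --flavor <flavor> \
     --meta role=webserver --config-drive true my-vm
   # from inside the guest, the metadata service answers on the well-known address
   curl http://169.254.169.254/openstack/latest/meta_data.json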

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which I want to pass to the VM. My plan was to attach it
 as a USB disk, which would then be read from inside the VM.

 What I am looking for is functionality similar to the -usb option of the
 qemu-kvm command.
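 For reference, this is roughly what that looks like with plain qemu/kvm (a
 sketch from memory; the image path and ids are made up):

   qemu-system-x86_64 ... -usb \
     -drive if=none,id=stick,format=raw,file=/path/to/usbdisk.img \
     -device usb-storage,drive=stick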

 Please let me know how this can be achieved in an OpenStack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng mengxiand...@gmail.com wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plugin the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com wrote:

 Hi,

 Is there any way to attach an USB disk as an external disk to VM while
 booting up the VM ?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tempest error

2014-09-15 Thread Nikesh Kumar Mahalka
Hi, I deployed an Icehouse devstack on Ubuntu 14.04.
When I run the tempest tests on volume, I get errors.
I have also attached my cinder.conf and tempest.conf files.

I am running the tempest tests with the command below:
./run_tempest.sh tempest.api.volume

*Below is the error:*

Traceback (most recent call last):
  File /opt/stack/tempest/tempest/test.py, line 128, in wrapper
return f(self, *func_args, **func_kwargs)
  File /opt/stack/tempest/tempest/api/volume/test_volumes_actions.py,
line 105, in test_volume_upload
self.image_client.wait_for_image_status(image_id, 'active')
  File /opt/stack/tempest/tempest/services/image/v1/json/image_client.py,
line 304, in wait_for_image_status
raise exceptions.TimeoutException(message)
TimeoutException: Request timed out
Details: (VolumesV2ActionsTestXML:test_volume_upload) Time Limit Exceeded!
(196s)while waiting for active, but we got saving.

Ran 248 tests in 2671.199s

FAILED (failures=26)
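For context, the 196s limit above comes from tempest's build timeout for that
wait loop. As a rough pointer only (option and group names may differ between
tempest versions), raising the relevant build_timeout values in tempest.conf
can help rule out a slow backend while debugging, e.g.:

  [volume]
  build_timeout = 600
  [image]
  build_timeout = 600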



Regards
Nikesh


tempest.conf
Description: Binary data


cinder.conf
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-15 Thread Germy Lure
Obviously, for a vendor's plugin/driver, the most important thing is the API,
yes? The NB API for a monolithic plugin or a service plugin, and the SB API for
a service driver or agent, or even an MD. That's the basis.
We have now released a set of NB APIs with relative stability; the SB APIs
still need standardization.

Some comments inline.



On Fri, Sep 12, 2014 at 5:18 PM, Kevin Benton blak...@gmail.com wrote:

  So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.

 Yes, I think this is currently the view held by the PTL (Kyle) and some of
 the other cores so what you're suggesting will definitely come up at the
 summit.

Good!



  Why do we need a different repo to store vendors' codes? That's not the
 community business.
  I think only a proper architecture and normal NBSB API can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.

 The problem is that that architecture won't stay stable if there is no
 shared community plugin depending on its stability. Let me ask you the
 inverse question. Why do you think the reference driver should stay in the
 core repo?

 A separate repo won't have an impact on what is packaged and released so
 it should have no impact on user experience, complete versions,
 providing code examples,  or developing new features. In fact, it will
 likely help with the last two because it will provide a clear delineation
 between what a plugin is responsible for vs. what the core API is
 responsible for. And, because new cores can be added faster to the open
 source plugins repo due to a smaller code base to learn, it will help with
 developing new features by reducing reviewer load.

OK, the key point is that vendors' code should be kept by the vendors
themselves, NOT by the community. At the same time, the community should
provide an open source reference as a standard example for new cores and
vendors.
You are right that a separate repo won't have an impact on what is packaged
and released. The open source code can stay in the core repo or in a different
one; in any case, we need it there for reference and for version releases.
No vendor would maintain the open source code, only the community would.



 On Fri, Sep 12, 2014 at 1:50 AM, Germy Lure germy.l...@gmail.com wrote:



 On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton blak...@gmail.com wrote:


  Maybe I missed something, but what's the solution?

 There isn't one yet. That's why it's going to be discussed at the summit.

 So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.
 By leaving open source plugins and drivers in the tree , we can resolve
 such problems:
   1)release a workable and COMPLETE version
   2)user experience(especially for beginners)
   3)provide code example to learn for new contributors and vendors
   4)develop and verify new features



  I think we should release a workable version.

 Definitely. But that doesn't have anything to do with it living in the
 same repository. By putting it in a different repo, it provides smaller
 code bases to learn for new contributors wanting to become a core developer
 in addition to a clear separation between plugins and core code.

 Why do we need a different repo to store vendors' codes? That's not the
 community business.
 I think only a proper architecture and normal NBSB API can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.
 Of course, if the community provides a wiki page for vendors to add
 hyperlink of their codes, I think it's perfect.


  Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.

 Sure, but this also isn't affected by the code being in a separate repo.

 See comments above.


  The community should and just need focus on the Neutron core and
 provide framework for vendors' devices.

 I agree, but without the open source drivers being separated as well,
 it's very difficult for the framework for external drivers to be stable
 enough to be useful.

 Architecture and API. The community should ensure core and API stable
 enough and high quality. Vendors for external drivers.
 Who provides, who maintains(including development, storage, distribution,
 quality, etc).


 On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure germy.l...@gmail.com
 wrote:

 Some comments inline.

 BR,
 Germy

 On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton blak...@gmail.com
 wrote:

 This has been brought up several times already and I believe is going
 to be discussed at the Kilo summit.

 Maybe I missed something, but what's the solution?


 I agree that reviewing third party patches eats community time.
 However, claiming that the community pays 46% of it's energy to maintain
 vendor-specific code doesn't make any sense. LOC in the repo has very
 little to do with ongoing required maintenance. Assuming the APIs for the
 plugins stay consistent, there should be few 

Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread Serg Melikyan
If the data you are planning to pass to the VM is considerably large, the
metadata mechanism may be used to pass at least a link to the source of the
data plus a simple shell script that will be executed on the VM by cloud-init
right after boot to obtain the data via a simple cURL call.
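A minimal sketch of that approach (the URL and paths are made-up placeholders):

  userdata.sh:
    #!/bin/sh
    # fetched and run by cloud-init on first boot
    curl -o /opt/data.bin http://203.0.113.10/data.bin

  nova boot --image <image> --flavor <flavor> --user-data userdata.sh my-vm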

On Mon, Sep 15, 2014 at 10:20 AM, Denis Makogon dmako...@mirantis.com
wrote:

 Agreed with Max.
 With nova you can use file injection mechanism. You just need to build a
 dictionary of file paths and file content. But I do agree that it's not the
 same as you want. But it's more than
 valid way to inject files.

 Best regards,
 Denis Makogon

 On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use Nova Metadata Serivce [1] or Nova Config Drive [2]. There are
 options to pass Key-Value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which i wants to be passed to VM. To pass this data,  I
 have planned this to attach as an usb disk and this disk will be used
 inside the vm to read the data.

 What I am looking is for the functionality similar to -usb option with
 qemu.kvm command.

 Please let me know, how it can be achieved in openstack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng mengxiand...@gmail.com
 wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plugin the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi,

 Is there any way to attach an USB disk as an external disk to VM while
 booting up the VM ?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread Serg Melikyan
Another approach, besides the one I described in the mail above, is to use the
openstack/os-collect-config project
https://github.com/openstack/os-collect-config
to handle downloading and running the shell script.

On Mon, Sep 15, 2014 at 10:35 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 If data that are you planning to pass to the VM is considerably large
 Metadata mechanism may be used for passing at least a link to the source of
 data and simple shell script that will executed on VM with CloudInit right
 after the boot to obtain data via simple cURL.

 On Mon, Sep 15, 2014 at 10:20 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Agreed with Max.
 With nova you can use file injection mechanism. You just need to build a
 dictionary of file paths and file content. But I do agree that it's not the
 same as you want. But it's more than
 valid way to inject files.

 Best regards,
 Denis Makogon

 On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use Nova Metadata Serivce [1] or Nova Config Drive [2]. There are
 options to pass Key-Value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which i wants to be passed to VM. To pass this data,
  I have planned this to attach as an usb disk and this disk will be used
 inside the vm to read the data.

 What I am looking is for the functionality similar to -usb option
 with qemu.kvm command.

 Please let me know, how it can be achieved in openstack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng mengxiand...@gmail.com
 wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plugin the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi,

 Is there any way to attach an USB disk as an external disk to VM
 while booting up the VM ?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] China blocking access to OpenStack git review push

2014-09-15 Thread ZZelle
Hi,

You can use git-credential-cache or git-credential-store to persist your
password:

# cache [1]: safer, but the cached password is dropped after the timeout or a reboot
git config --global credential.https://review.openstack.org.helper 'cache --timeout=XXX'
# or
# store [2]: permanent but less safe, stored in a plain-text file that is never cleaned
git config --global credential.https://review.openstack.org.helper store


Cedric,
ZZelle@ IRC

[1]http://git-scm.com/docs/git-credential-cache
[2]http://git-scm.com/docs/git-credential-store
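For completeness, a sketch of the full HTTPS flow described in the quoted mail
below (<username> is a placeholder for your Gerrit username):

  # 1. generate an HTTP password at
  #    https://review.openstack.org/#/settings/http-password
  git clone https://git.openstack.org/openstack-dev/sandbox
  cd sandbox
  git remote add gerrit https://<username>@review.openstack.org/openstack-dev/sandbox
  git review -s    # installs the commit hook
  git review       # pushes over HTTPS; enter the generated HTTP password when prompted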

On Mon, Sep 15, 2014 at 5:31 AM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:


  As an alternative to pushing via ssh you can push via https over port
  443 which may bypass this port blockage. Both latest git review and the
  version of gerrit that we are running support this.
 
  The first step is to generate a gerrit http password, this will be used
  to authenticate against Gerrit. Go to
  https://review.openstack.org/#/settings/http-password and generate a
  password there (note this is independent of your launchpad openid
  password).
 
  Next step is to get some code clone it from eg
  https://git.openstack.org/openstack-dev/sandbox. Now I am sure there is
  a better way to have git-review do this for you with config overrides
  somewhere but we need to add a git remote in that repo called 'gerrit'.
  By default all of our .gitreview files set this up for ssh so we will
  manually add one. `git remote add gerrit
  https://usern...@review.openstack.org/openstack-dev/sandbox`. Finally
  run `git review -s` to get the needed commit hook and now you are ready
  to push code with `git review` as you normally would. Note when git
  review asks for a password it will want the password we generated in the
  first step.
 
  I am pretty sure this is can be made easier and the manual git remote
  step is not required if you set up some overrides in git(review) config
  files. Maybe the folks that added https support for git review can fill
  us in.
 
  Clark
 

 Thanks, Clark.  The HTTPS way worked for me.  There is one additional
 inconvenience, though.  Every time I do 'git review', I have to input the
 password twice, and the password has to be input via a GNOME window
 instead of from the command line (SSH'ed).

 Are you aware of 1) a place to save this password so that I don't have
 to input it every time? 2) a configuration option that allows me to input
 the password from a remote console?  I'm not considering X11 forwarding at
 the moment.

 Thanks.

 Regards,
   - Qiming


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Allow for per-subnet dhcp options

2014-09-15 Thread Xu Han Peng
Maybe this blueprint can meet some of your requirements? It allows 
specification of MTU for a network instead of a subnet, though.


https://review.openstack.org/#/c/105989/

Xu Han

On 09/12/2014 01:01 AM, Jonathan Proulx wrote:

Hi All,

I'm hoping to get this blueprint
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
some love...seems it's been hanging around since January so my
assumption is it's not going anywhere.

As a private cloud operator I make heavy use of VLAN-based provider
networks to plug VMs into existing datacenter networks.

Some of these are jumbo frame networks and some use the standard 1500 MTU,
so I really want to specify the MTU per subnet; there is currently no
way to do this. I can set it globally in dnsmasq.conf or per port using
extra-dhcp-opt, neither of which really does what I need.
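For reference, the per-port workaround mentioned above looks roughly like this
(DHCP option 26 is the interface MTU; the port id is a placeholder):

  neutron port-update <port-id> --extra-dhcp-opt opt_name=26,opt_value=9000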

Given that extra-dhcp-opt is implemented per port, it seems to me that
making a similar implementation per subnet would not be a difficult
task for someone familiar with the code.

I'm not that person but if you are, then you can be my Neutron hero
for the next release cycle :)

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-15 Thread Flavio Percoco
On 09/12/2014 07:13 PM, Clint Byrum wrote:
 Excerpts from Thierry Carrez's message of 2014-09-12 02:16:42 -0700:
 Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
 Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
 to optimize Zaqar for delivering messages and supporting different
 messaging patterns.

 Awesome! Just please don't expect people to get excited about it for
 the lighter weight queueing workloads that you've claimed as use cases.

 I totally see Horizon using it to keep events for users. I see Heat
 using it for stack events as well. I would bet that Trove would benefit
 from being able to communicate messages to users.

 But I think in between Zaqar and the backends will likely be a lighter
 weight queue-only service that the users can just subscribe to when they
 don't want an inbox. And I think that lighter weight queue service is
 far more important for OpenStack than the full blown random access
 inbox.

 I think the reason such a thing has not appeared is because we were all
 sort of running into but Zaqar is already incubated. Now that we've
 fleshed out the difference, I think those of us that need a lightweight
 multi-tenant queue service should add it to OpenStack.  Separately. I hope
 that doesn't offend you and the rest of the excellent Zaqar developers. It
 is just a different thing.

 Should we remove all the semantics that allow people to use Zaqar as a
 queue service? I don't think so either. Again, the semantics are there
 because Zaqar is using them to do its job. Whether other folks may/may
 not use Zaqar as a queue service is out of our control.

 This doesn't mean the project is broken.

 No, definitely not broken. It just isn't actually necessary for many of
 the stated use cases.

 Clint,

 If I read you correctly, you're basically saying the Zaqar is overkill
 for a lot of people who only want a multi-tenant queue service. It's
 doing A+B. Why does that prevent people who only need A from using it ?

 Is it that it's actually not doing A well, from a user perspective ?
 Like the performance sucks, or it's missing a key primitive ?

 Is it that it's unnecessarily complex to deploy, from a deployer
 perspective, and that something only doing A would be simpler, while
 covering most of the use cases?

 Is it something else ?

 I want to make sure I understand your objection. In the user
 perspective it might make sense to pursue both options as separate
 projects. In the deployer perspective case, having a project doing A+B
 and a project doing A doesn't solve anything. So this affects the
 decision we have to take next Tuesday...
 
 I believe that Zaqar does two things, inbox semantics, and queue
 semantics. I believe the queueing is a side-effect of needing some kind
 of queue to enable users to store and subscribe to messages in the
 inbox.
 
 What I'd rather see is an API for queueing, and an API for inboxes
 which integrates well with the queueing API. For instance, if a user
 says give me an inbox I think Zaqar should return a queue handle for
 sending into the inbox the same way Nova gives you a Neutron port if
 you don't give it one. You might also ask for a queue to receive push
 messages from the inbox. Point being, the queues are not the inbox,
 and the inbox is not the queues.
 
 However, if I just want a queue, just give me a queue. Don't store my
 messages in a randomly addressable space, and don't saddle the deployer
 with the burden of such storage. Put the queue API in front of a scalable
 message queue and give me a nice simple HTTP API. Users would likely be
 thrilled. Heat, Nova, Ceilometer, probably Trove and Sahara, could all
 make use of just this. Only Horizon seems to need a place to keep the
 messages around while users inspect them.
 
 Whether that is two projects, or one, separation between the two API's,
 and thus two very different types of backends, is something I think
 will lead to more deployers wanting to deploy both, so that they can
 bill usage appropriately and so that their users can choose wisely.

This is one of the use-cases we designed flavors for. One of the main
ideas behind flavors is giving the user the choice of where they want
their messages to be stored. This certainly requires the deployer to
have installed stores that are good for each job. For example, based on
the currently existing drivers, a deployer could have configured a
high-throughput flavor on top of a redis node that has been configured
to perform well for this job. Alongside this flavor, the deployer could have
configured a flavor that features durability on top of mongodb or redis.

When users create the queue/bucket/inbox/whatever they want to put
their messages into, they'll be able to choose where those messages
should be stored based on their needs.

I do understand that your objection is not about whether Zaqar is able to do
this now or not, but about whether an integrated API for both kinds of semantics

Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Nikola Đipanov
On 09/13/2014 11:07 PM, Michael Still wrote:
 Just an observation from the last week or so...
 
 The biggest problem nova faces at the moment isn't code review latency.
 Our biggest problem is failing to fix our bugs so that the gate is
 reliable. The number of rechecks we've done in the last week to try and
 land code is truly startling.
 

This is exactly what I was saying in my ranty email from 2 weeks ago
[1]. Debt is everywhere, and like any debt, it is unlikely to go away on
its own.


 I know that some people are focused by their employers on feature work,
 but those features aren't going to land in a world in which we have to
 hand walk everything through the gate.
 

The thing is that - without doing work on the code - you cannot know
where the real issues are. You cannot look at a codebase as big as Nova
and say, hmmm looks like we need to fix the resource tracker. You can
know that only if you are neck-deep in the stuff. And then you need to
agree on what is really bad and what is just distasteful, and then focus
the efforts on that. None of the things we've put in place (specs, the
way we do and organize code review and bugs) acknowledge or help this
part of the development process.

I tried to explain this in my previous ranty email [1] but I guess I
failed due to ranting :) so let me try again: Nova team needs to act as
a development team.

We are not in a place (yet?) where we can just overlook the addition of
features based on whether they are appropriate for our use case. We have
to work together on a set of important things to get Nova to where we
think it needs to be and make sure we get it done - by actually doing
it! (*)

However - I don't think freezing development of features for a cycle is
a viable option - this is just not how software in the real world gets
done. It will likely be the worst possible thing we can do, no matter
how appealing it seems to us as developers.

But we do need to be extremely strict on what we let in, and under which
conditions! As I mentioned to sdague on IRC the other day (yes, I am
quoting myself :) ): Not all features are the same - there are
features that are better, that are coded better, and are integrated
better - we should be wanting those features always! Then there are
features that are a net negative on the code - we should *never* want
those features. And then there are features in the middle - we may want
to cut those or push them back depending on a number of things that are
important. Things like: code quality, can it fit within the current
constraints, can we let it in like that, or some work needs to happen
first. Things which we haven't been really good at considering
previously IMHO.

But you can't really judge that unless you are actively developing Nova
yourself, and have a tighter grip on the proposed code than what our
current process gives.

Peace!
N.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044722.html

(*) The only effort like this going on at the moment in Nova is the
Objects work done by dansmith (even thought there are several others
proposed) - I will let the readers judge how much of an impact it was in
only 2 short cycles, from just a single effort.

 Michael
 
 
 -- 
 Rackspace Australia
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Nikola Đipanov
On 09/14/2014 12:27 AM, Boris Pavlovic wrote:
 Michael, 
 
 I am so glad that you started this topic.
 I really like idea of  of taking a pause with features and concentrating
 on improvement of current code base. 
 
 Even if the 1 k open bugs https://bugs.launchpad.net/nova are vital
 issue, there are other things that could be addressed to improve Nova
 team throughput. 
 
 Like it was said in another thread: Nova code is current too big and
 complex to be understand by one person.
 It produces 2 issues: 
 A) There is hard to find person who can observer full project and make
 global architecture decisions including work on cross projects interactions
 (So project doesn't have straight direction of development)
 B) It's really hard to find cores, and current cores are under too heavy
 load (because of project complexity)
 
 I believe that whole current Nova functionality can be implemented in
 much simpler manner.

Just a brief comment on the sentence above.

This is a common thing to hear from coders, and is very rarely rooted in
reality IMHO. Nova does _a lot_ of things. Saying that given an
exhaustive list of features it has, we can implement them in a much
simpler manner is completely disregarding all the complexity of building
software that works within real world constraints.

 Basically, complexity was added during the process of adding a lot of
 features for years, that didn't perfectly fit to architecture of Nova. 
 And there wasn't much work on refactoring the architecture to cleanup
 these features. 
 

I agree with this of course - fixing architectural flaws is important
and needs to be an ongoing part of the process, as I mention in my other
mail to the thread. Halting all other development is not the way to do
it though.

N.

 So maybe it's proper time to think about what, why and how we are
 doing. 
 That will allows us to find simpler solutions for current functionality. 
 
 
 Best regards,
 Boris Pavlovic 
 
 
 On Sun, Sep 14, 2014 at 1:07 AM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:
 
 Just an observation from the last week or so...
 
 The biggest problem nova faces at the moment isn't code review
 latency. Our biggest problem is failing to fix our bugs so that the
 gate is reliable. The number of rechecks we've done in the last week
 to try and land code is truly startling.
 
 I know that some people are focused by their employers on feature
 work, but those features aren't going to land in a world in which we
 have to hand walk everything through the gate.
 
 Michael
 
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Request for J3 FFE - NetApp: storage pools for scheduler

2014-09-15 Thread Thierry Carrez
Mike Perez wrote:
 On 14:24 Fri 05 Sep , Alex Meade wrote:
 Hi Cinder Folks,

 I would like to request a FFE for cinder pools support with the NetApp
 drivers[1][2].
 
 Looks like this is being reviewed now.

Looks like it merged, so I retroactively added it to RC1.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-15 Thread Thierry Carrez
Chris Friesen wrote:
 On 09/12/2014 04:59 PM, Joe Gordon wrote:
 [...]
 Can't you replace the word 'libvirt code' with 'nova code' and this
 would still be true? Do you think landing virt driver code is harder
 then landing non virt driver code? If so do you have any numbers to back
 this up?

 If the issue here is 'landing code in nova is too painful', then we
 should discuss solving that more generalized issue first, and maybe we
 conclude that pulling out the virt drivers gets us the most bang for our
 buck.  But unless we have that more general discussion, saying the right
 fix for that is to spend a large amount of time  working specifically on
 virt driver related issues seems premature.
 
 I agree that this is a nova issue in general, though I suspect that the
 virt drivers have quite separate developer communities so maybe they
 feel the pain more clearly.  But I think the solution is the same in
 both cases:
 
 1) Allow people to be responsible for a subset of the nova code
 (scheduler, virt, conductor, compute, or even just a single driver).
 They would have significant responsibility for that area of the code.
 This would serve several purposes--people with deep domain-specific
 knowledge would be able to review code that touches that domain, and it
 would free up the nova core team to look at the higher-level picture.
 For changes that cross domains, the people from the relevant domains
 would need to be involved.
 
 2) Modify the gate tests such that changes that are wholly contained
 within a single area of code are not blocked by gate-blocking-bugs in
 unrelated areas of the code.

I agree... Landing code in Nova is generally too painful, but the pain
is most apparent in areas which require specific domain expertise (like
a virt driver, where not so many -core are familiar enough with the
domain to review, while the code proposer generally is).

IMHO, like I said before, the solution to making Nova (or any other
project, actually) more fluid is to create separate and smaller areas of
expertise, and allow new people to step up and own things. Splitting
virt drivers (once the driver interface is cleaned up) is just one way
of doing it -- that just seems like a natural separation line to use if
we do split. But that would just be a first step: as more internal
interfaces are cleaned up we could (and should) split more. Smaller
groups responsible for smaller areas of code is the way to go.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-15 Thread Daniel P. Berrange
On Mon, Sep 15, 2014 at 11:00:15AM +0200, Thierry Carrez wrote:
 Chris Friesen wrote:
  On 09/12/2014 04:59 PM, Joe Gordon wrote:
  [...]
  Can't you replace the word 'libvirt code' with 'nova code' and this
  would still be true? Do you think landing virt driver code is harder
  then landing non virt driver code? If so do you have any numbers to back
  this up?
 
  If the issue here is 'landing code in nova is too painful', then we
  should discuss solving that more generalized issue first, and maybe we
  conclude that pulling out the virt drivers gets us the most bang for our
  buck.  But unless we have that more general discussion, saying the right
  fix for that is to spend a large amount of time  working specifically on
  virt driver related issues seems premature.
  
  I agree that this is a nova issue in general, though I suspect that the
  virt drivers have quite separate developer communities so maybe they
  feel the pain more clearly.  But I think the solution is the same in
  both cases:
  
  1) Allow people to be responsible for a subset of the nova code
  (scheduler, virt, conductor, compute, or even just a single driver).
  They would have significant responsibility for that area of the code.
  This would serve several purposes--people with deep domain-specific
  knowledge would be able to review code that touches that domain, and it
  would free up the nova core team to look at the higher-level picture.
  For changes that cross domains, the people from the relevant domains
  would need to be involved.
  
  2) Modify the gate tests such that changes that are wholly contained
  within a single area of code are not blocked by gate-blocking-bugs in
  unrelated areas of the code.
 
 I agree... Landing code in Nova is generally too painful, but the pain
 is most apparent in areas which require specific domain expertise (like
 a virt driver, where not so many -core are familiar enough with the
 domain to review, while the code proposer generally is).

Yes, all of Nova is suffering from the pain of merging. I am specifically
attacking only the virt drivers in my proposal because I think that has
the greatest likelihood of making a noticeable improvement to the project.
Their teams are already fairly separated from the rest of nova because
of the domain expertise, and the code is also probably the most well
isolated and logically makes sense as a plugin architecture. We'd be
hard pressed to split off other chunks of Nova beyond the scheduler that
we're already talking about.

 IMHO, like I said before, the solution to making Nova (or any other
 project, actually) more fluid is to create separate and smaller areas of
 expertise, and allow new people to step up and own things. Splitting
 virt drivers (once the driver interface is cleaned up) is just one way
 of doing it -- that just seems like a natural separation line to use if
 we do split. But that would just be a first step: as more internal
 interfaces are cleaned up we could (and should) split more. Smaller
 groups responsible for smaller areas of code is the way to go.

And history of OpenStack projects splitting off shows this can be very
successful too

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Daniel P. Berrange
On Fri, Sep 12, 2014 at 01:52:35PM -0400, Chris St. Pierre wrote:
 We have proposed that the allowed characters for all resource names in Nova
 (flavors, aggregates, etc.) be expanded to all printable unicode characters
 and horizontal spaces: https://review.openstack.org/#/c/119741
 
 Currently, the only allowed characters in most resource names are
 alphanumeric, space, and [.-_].
 
 We have proposed this change for two principal reasons:
 
 1. We have customers who have migrated data forward since Essex, when no
 restrictions were in place, and thus have characters in resource names that
 are disallowed in the current version of OpenStack. This is only likely to
 be useful to people migrating from Essex or earlier, since the current
 restrictions were added in Folsom.
 
 2. It's pretty much always a bad idea to add unnecessary restrictions
 without a good reason. While we don't have an immediate need to use, for
 example, the ever-useful http://codepoints.net/U+1F4A9 in a flavor name,
 it's hard to come up with a reason people *shouldn't* be allowed to use it.
 
 That said, apparently people have had a need to not be allowed to use some
 characters, but it's not clear why:
 https://bugs.launchpad.net/nova/+bug/977187
 
 So I guess if anyone knows any reason why these printable characters should
 not be joined in holy resource naming, speak now or forever hold your peace.

I would consider that any place where there is a user-specified, free-form
string intended for end-user consumption should be totally unrestricted
in the characters it allows. To arbitrarily restrict the user is a bug.
If there are current technical reasons for the restriction we should look
at what we must do to resolve them.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] All Juno dependencies uploaded to Debian

2014-09-15 Thread Thomas Goirand
Heya all!

A bit of update to my Debian packaging work I want to share with
everyone here.

Absolutely all Juno dependencies have entered Debian (well, all but
python-xstatic-jquery.bootstrap.wizard which I forgot, and which I have
just uploaded to Sid, waiting for the FTP masters to review, but it
shouldn't take too long).

This week, I'll upload the updates to the following packages (I did this
list at the end of last week, so we may need more...):
diskimage-builder=0.1.20
django-pyscss=1.0.2
oslo.config=1.4.0.0a3
oslo.i18n=0.3.0
oslo.rootwrap=1.3.0.0a1
oslo.utils=0.3.0
oslo.vmware=0.5
pycadf=0.6.0
pyghmi=0.6.11
python-glanceclient=0.14.0
python-ironicclient=0.2.1
python-keystoneclient=0.10.0
python-saharaclient=0.7.1
python-swiftclient=2.2.0
oslotest=1.1.0.0a2
zake=0.1

When done, I'll start validating Juno packages.

I am very happy that this time, we have a dependency freeze early
enough. I do understand that this may be an issue upstream, but really,
that helps me a lot doing the Debian packaging in time for the release.

Please note that since Jessie will be released with Icehouse, all of the
above will be updated in Debian Experimental only (so that I can keep
older versions for Icehouse in Sid). If anyone feels like it's a better
idea to update the above for Icehouse too, please let me know, and I'll
reconsider an update in Sid/Jessie, rather than in Experimental.

I hope we soon have all the xstatic packages in use in Horizon, because I
spent a large amount of my time on these last summer. By the way, I would
like to thank Radomir Dopieralski for working on this: it's really the
right way to do things, and it helps a lot in lowering the number of
embedded copies, which are a security nightmare for distributions. It's
just a shame we didn't have it for Icehouse...

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Daniel P. Berrange
On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
 Just an observation from the last week or so...
 
 The biggest problem nova faces at the moment isn't code review latency. Our
 biggest problem is failing to fix our bugs so that the gate is reliable.
 The number of rechecks we've done in the last week to try and land code is
 truly startling.

I consider both problems to be pretty much equally important. I don't
think solving review latency or test reliability in isolation is enough to
save Nova. We need to tackle both problems as a priority. I tried to avoid
getting into my concerns about testing in my mail on review team bottlenecks
since I think we should address the problems independently / in parallel.

 I know that some people are focused by their employers on feature work, but
 those features aren't going to land in a world in which we have to hand
 walk everything through the gate.

Unfortunately the reliability of the gate systems has the highest negative
impact on productivity right at the point in the dev cycle where we need
it to have the least impact too.

If we're going to continue to raise the bar in terms of testing coverage
then we need to have a serious look at the overall approach we use for
testing because what we do today isn't going to scale, even if it is
100% reliable. We can't keep adding new CI jobs for each new nova.conf
setting that introduces a new code path, because each job has major
implications for resource consumption (number of test nodes, log storage),
not to mention reliability. I think we need to figure out a way to get
more targeted testing of features, so we can keep the overall number
of jobs lower and the tests shorter.

Instead of having a single tempest run that exercises all the Nova
functionality in one run, we need to figure out how to split it up
into independent functional areas. For example, if we could isolate
tests which are affected by choice of cinder storage backend, then
we could run those subset of tests multiple times, once for each
supported cinder backend. Without this, the combinatorial explosion
of test jobs is going to kill us.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] keep old specs

2014-09-15 Thread Kevin Benton
I saw that the specs that didn't make the deadline for the feature freeze
were removed from the tree completely.[1] For easier reference, can we
instead revert that commit to restore them and then move them into a
release specific folder called 'unimplemented' or something along those
lines?

It will be nice in the future to browse through the specs for a release and
see what specs were approved but didn't make it in time. Then if someone
wants to try to propose it again, their patch can be to move the spec into
the current cycle and then they only have to make revisions rather than
redo the whole thing.

It also reduces the number of hoops to jump through to quickly search for a
spec based on keywords. Otherwise we have to checkout a commit before the
removal and then search.

Thoughts, suggestions, or anecdotes about small sailboats?

1.
https://github.com/openstack/neutron-specs/commit/77f8c806a49769322b02ea6017a1a2a39ef1cfd7
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread pratik maru
Hi All,

Is there any way to do the same using Heat ?

Thanks
fipuzzles

On Mon, Sep 15, 2014 at 12:09 PM, Serg Melikyan smelik...@mirantis.com
wrote:

 Other approach to the way I have described in the mail above is to use
 openstack/os-collect-config
 https://github.com/openstack/os-collect-config project to handle
 downloading and running shell script.

 On Mon, Sep 15, 2014 at 10:35 AM, Serg Melikyan smelik...@mirantis.com
 wrote:

 If data that are you planning to pass to the VM is considerably large
 Metadata mechanism may be used for passing at least a link to the source of
 data and simple shell script that will executed on VM with CloudInit right
 after the boot to obtain data via simple cURL.

 On Mon, Sep 15, 2014 at 10:20 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Agreed with Max.
 With nova you can use file injection mechanism. You just need to build a
 dictionary of file paths and file content. But I do agree that it's not the
 same as you want. But it's more than
 valid way to inject files.

 Best regards,
 Denis Makogon

 On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use Nova Metadata Serivce [1] or Nova Config Drive [2]. There are
 options to pass Key-Value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which i wants to be passed to VM. To pass this data,
  I have planned this to attach as an usb disk and this disk will be used
 inside the vm to read the data.

 What I am looking is for the functionality similar to -usb option
 with qemu.kvm command.

 Please let me know, how it can be achieved in openstack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng mengxiand...@gmail.com
  wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plugin the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi,

 Is there any way to attach an USB disk as an external disk to VM
 while booting up the VM ?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Michael Still
On Mon, Sep 15, 2014 at 7:42 PM, Daniel P. Berrange berra...@redhat.com wrote:

 Unfortunately the reliability of the gate systems has the highest negative
 impact on productivity right at the point in the dev cycle where we need
 it to have the least impact too.

Agreed.

However, my instinct is that a lot of our CI unreliability isn't from
the number of permutations, but from buggy code. We have our users
telling us where to look to fix this in the form of many many bug
reports. I find it hard to believe that we couldn't improve our gate
reliability by taking the bugs we currently have reported more seriously
and fixing them.

Michael



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 12/09/14 19:08, Doug Hellmann wrote:
 
 On Sep 12, 2014, at 1:03 PM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 Signed PGP part On 12/09/14 17:30, Mike Bayer wrote:
 
 On Sep 12, 2014, at 10:40 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Signed PGP part On 12/09/14 16:33, Mike Bayer wrote:
 I agree with this, changing the MySQL driver now is not an 
 option.
 
 That was not the proposal. The proposal was to introduce
 support to run against something different from MySQLdb + a
 gate job for that alternative. The next cycle was supposed to
 do thorough regression testing, benchmarking, etc. to decide
 whether we're ok to recommend that alternative to users.
 
 ah, well that is a great idea.  But we can have that
 throughout Kilo anyway, why not ?
 
 Sure, it's not the end of the world. We'll just need to postpone
 work till RC1 (=opening of master for new stuff), pass spec
 bureauracy (reapplying for kilo)... That's some burden, but not
 tragedy.
 
 The only thing that I'm really sad about is that Juno users won't
 be able to try out that driver on their setup just to see how it
 works, so it narrows testing base to gate while we could get some
 valuable deployment feedback in Juno already.
 
 It’s all experimental, right? And implemented in libraries? So
 those users could update oslo.db and sqlalchemy-migrate and test
 the results under Juno.

oslo.db is already bumped to the version that includes all those fixes
needed. As for sqlalchemy-migrate, we may try to work on a fix for the
library that will silently drop those COMMIT statements in SQL
scripts. That would solve the problem without touching any migration
code in nova, glance, or cinder. This is the piece that is currently
missing to run Juno with that alternative driver. Also, as Angus said,
we already can run migrations on mysqldb and then switch it for
testing, without any of the changes.

I'll work on making sure it's available to check out with Juno pieces
in addition to Kilo.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUFsBfAAoJEC5aWaUY1u57bLYH/jcbBhfPFRQg3Rklw2iYaZC4
ROHSvjMaudu+bgiqJxy1bNJEkxqQTqkJmWz1kYUhjaan4aVqBc/8aVrCMebottan
UFChNmhxtfKSF/ioAEF7AuUuggXG+nsvcFcOzBpIZ1eMMUiLtQPsWEypyDMH0c3m
sot650eoXD83VnrgpSRkDv4xJYGmhCQ2DYObIXm8j+KVlnOh8T7ElPKeeCE/Gahs
/k8ObbzkeNJr2z7oPXqvR93mQkGzNYwONtKi5KFZtoHXYL0vDvO1zQ8Oub0L7CtI
1Jvr5crNsax7hE4WxHgmdppJvdSqzzECFKhNWfUS2vM3LY24iGpv8DcX5GeVVbo=
=gMQE
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread Serg Melikyan
Sure, take a look at Software Configuration feature of Heat
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec

Examples of SoftwareConfiguration usage may be found here:
https://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config

On Mon, Sep 15, 2014 at 1:54 PM, pratik maru fipuzz...@gmail.com wrote:

 Hi All,

 Is there any way to do the same using Heat ?

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 12:09 PM, Serg Melikyan smelik...@mirantis.com
 wrote:

 Other approach to the way I have described in the mail above is to use
 openstack/os-collect-config
 https://github.com/openstack/os-collect-config project to handle
 downloading and running shell script.

 On Mon, Sep 15, 2014 at 10:35 AM, Serg Melikyan smelik...@mirantis.com
 wrote:

 If data that are you planning to pass to the VM is considerably large
 Metadata mechanism may be used for passing at least a link to the source of
 data and simple shell script that will executed on VM with CloudInit right
 after the boot to obtain data via simple cURL.

 On Mon, Sep 15, 2014 at 10:20 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Agreed with Max.
 With nova you can use file injection mechanism. You just need to build
 a dictionary of file paths and file content. But I do agree that it's not
 the same as you want. But it's more than
 valid way to inject files.

 Best regards,
 Denis Makogon

 On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use Nova Metadata Serivce [1] or Nova Config Drive [2]. There
 are options to pass Key-Value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which i wants to be passed to VM. To pass this data,
  I have planned this to attach as an usb disk and this disk will be used
 inside the vm to read the data.

 What I am looking is for the functionality similar to -usb option
 with qemu.kvm command.

 Please let me know, how it can be achieved in openstack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng 
 mengxiand...@gmail.com wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plugin the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi,

 Is there any way to attach an USB disk as an external disk to VM
 while booting up the VM ?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] battling stale .pyc files

2014-09-15 Thread Lucas Alvares Gomes
Hi Mike,

Thanks for bringing it up. I wanna say that I'm not an expert in
CPython, but I personally like the fix because I have had some
problems with stale .pyc in Ironic before, and they are pretty
annoying.

On Fri, Sep 12, 2014 at 4:18 PM, Mike Bayer mba...@redhat.com wrote:
 I’ve just found https://bugs.launchpad.net/nova/+bug/1368661, Unit tests 
 sometimes fail because of stale pyc files”.

 The issue as stated in the report refers to the phenomenon of .pyc files that 
 remain inappropriately, when switching branches or deleting files.

 Specifically, the kind of scenario that in my experience causes this looks 
 like this.  One version of the code has a setup like this:

mylibrary/mypackage/somemodule/__init__.py

 Then, a different version we switch to changes it to this:

mylibrary/mypackage/somemodule.py

 But somemodule/__init__.pyc will still be sitting around, and then things 
 break - the Python interpreter skips the module (or perhaps the other way 
 around. I just ran a test by hand and it seems like packages trump modules in 
 Python 2.7).

 This is an issue for sure, however the fix that is proposed I find alarming, 
 which is to use the PYTHONDONTWRITEBYTECODE=1 flag written directly into the 
 tox.ini file to disable *all* .pyc file writing, for all environments 
 unconditionally, both human and automated.

 I think that approach is a mistake.  .pyc files have a definite effect on the 
 behavior of the interpreter.   They can, for example, be the factor that 
 causes a dictionary to order its elements in one way versus another;  I’ve 
 had many relying-on-dictionary-ordering issues (which make no mistake, are 
 bugs) smoked out by the fact that a .pyc file would reveal the issue..pyc 
 files also naturally have a profound effect on performance.   I’d hate for 
 the Openstack community to just forget that .pyc files ever existed, our 
 tox.ini’s safely protecting us from them, and then we start seeing profiling 
 results getting published that forgot to run the Python interpreter in it’s 
 normal state of operation.  If we put this flag into every tox.ini, it means 
 the totality of openstack testing will not only run more slowly, it also 
 means our code will never be run within the Python runtime environment that 
 will actually be used when code is shipped.   The Python interpreter is 
 incredibly stable and predictable and a small change like this is hardly 
 something that we’d usually notice…until something worth noticing actually 
 goes wrong, and automated testing is where that should be found, not after 
 shipment.


So, about this ordering thing: I don't think it's caused by
PYTHONDONTWRITEBYTECODE. I googled it but couldn't find anything
relating this option to the way Python hashes things (please point me to
a document/code if I'm wrong). Are you sure you're not confusing it
with the PYTHONHASHSEED option?

So PYTHONHASHSEED yes does affect the ordering of the dict keys[1][2].
And I think that you'll find it more alarming because in the tox.ini
we are already disabling that random hash seed[3] (but note that
there's a comment there, disabling it seems to be a temporary thing)

About the performance, this also doesn't seem to be true. I don't
think .pyc files affect the performance of running code at all; .pyc is not
meant to be a runtime optimization in Python. It DOES affect the startup of
the application though, because the bytecode has to be regenerated
every time, see [4]:

A program doesn't run any faster when it is read from a ‘.pyc’ or
‘.pyo’ file than when it is read from a ‘.py’ file; the only thing
that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which
they are loaded. 

[1] https://docs.python.org/2/using/cmdline.html#envvar-PYTHONHASHSEED
[2] https://docs.python.org/2/using/cmdline.html#cmdoption-R
[3] https://github.com/openstack/nova/blob/master/tox.ini#L12
[4] http://www.network-theory.co.uk/docs/pytut/CompiledPythonfiles.html

 The issue of the occasional unmatched .pyc file whose name happens to still 
 be imported by the application is not that frequent, and can be solved by 
 just making sure unmatched .pyc files are deleted ahead of time.I’d favor 
 a utility such as in oslo.utils which performs this simple step of finding 
 all unmatched .pyc files and deleting (taking care to be aware of __pycache__ 
 / pep3147), and can be invoked from tox.ini as a startup command.
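
For what it's worth, a rough sketch of such a cleanup helper could be as
simple as the following (purely illustrative, not an oslo.utils
implementation; a real version would also need to handle __pycache__ /
PEP 3147 as noted above):

# Sketch only: delete .pyc files whose corresponding .py no longer exists.
import os


def remove_orphaned_pyc(root='.'):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.pyc'):
                continue
            pyc_path = os.path.join(dirpath, name)
            if not os.path.exists(pyc_path[:-1]):  # foo.pyc -> foo.py
                os.remove(pyc_path)


if __name__ == '__main__':
    remove_orphaned_pyc()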

 But guess what - suppose you totally disagree and you really want to not have 
 any .pyc files in your dev environment.   Simple!  Put 
 PYTHONDONTWRITEBYTECODE=1 into *your* environment - it doesn’t need to be in 
 tox.ini, just stick it in your .profile.   Let’s put it up on the wikis, 
 let’s put it into the dev guides, let’s go nuts.   Banish .pyc files from 
 your machine all you like.   But let’s *not* do this on our automated test 
 environments, and not force it to happen in *my* environment.


So, although I like the fix proposed and I would +1 that idea, I'm

[openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Steven Hardy
All,

Starting this thread as a follow-up to a strongly negative reaction by the
Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
subsequent very detailed justification and discussion of why they may be
useful in this spec[2].

Back in Atlanta, I had some discussions with folks interested in making
ready state[3] preparation of bare-metal resources possible when
deploying bare-metal nodes via TripleO/Heat/Ironic.

The initial assumption is that there is some discovery step (either
automatic or static generation of a manifest of nodes), that can be input
to either Ironic or Heat.

Following discovery, but before the undercloud deploys OpenStack onto the
nodes, there are a few steps which may be desired to get the hardware into
a state where it's ready and fully optimized for the subsequent deployment:

- Updating and aligning firmware to meet requirements of qualification or
  site policy
- Optimization of BIOS configuration to match workloads the node is
  expected to run
- Management of machine-local storage, e.g configuring local RAID for
  optimal resilience or performance.

Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
of these steps possible, but there's no easy way to either encapsulate the
(currently mostly vendor specific) data associated with each step, or to
coordinate sequencing of the steps.

What is required is some tool to take a text definition of the required
configuration, turn it into a correctly sequenced series of API calls to
Ironic, expose any data associated with those API calls, and declare
success or failure on completion.  This is what Heat does.

So the idea is to create some basic (contrib, disabled by default) Ironic
heat resources, then explore the idea of orchestrating ready-state
configuration via Heat.

Given that Devananda and I have been banging heads over this for some time
now, I'd like to get broader feedback on the idea, my interpretation of
ready state applied to the tripleo undercloud, and any alternative
implementation ideas.

Thanks!

Steve

[1] https://review.openstack.org/#/c/104222/
[2] https://review.openstack.org/#/c/120778/
[3] http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/
[4] https://blueprints.launchpad.net/ironic/+spec/drac-management-driver
[5] https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
[6] https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack][VCenter][n-cpu]Error: Service n-cpu is not running

2014-09-15 Thread foss geek
Dear All,

I am using Devstack Icehouse stable version to integrate openstack with
VCenter.  I am using CentOS 6.5 64 bit.

I am facing the below issue while running ./stack.sh.  Any pointer/help would
be greatly appreciated.

Here is related error log.


$./stack.sh

snip

2014-09-15 11:35:27.881 | + [[ -x /opt/stack/devstack/local.sh ]]
2014-09-15 11:35:27.898 | + service_check
2014-09-15 11:35:27.910 | + local service
2014-09-15 11:35:27.925 | + local failures
2014-09-15 11:35:27.936 | + SCREEN_NAME=stack
2014-09-15 11:35:27.953 | + SERVICE_DIR=/opt/stack/status
2014-09-15 11:35:27.964 | + [[ ! -d /opt/stack/status/stack ]]
2014-09-15 11:35:27.981 | ++ ls /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:27.999 | + failures=/opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.006 | + for service in '$failures'
2014-09-15 11:35:28.023 | ++ basename /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.034 | + service=n-cpu.failure
2014-09-15 11:35:28.051 | + service=n-cpu
*2014-09-15 11:35:28.057 | + echo 'Error: Service n-cpu is not running'*
*2014-09-15 11:35:28.074 | Error: Service n-cpu is not running*
2014-09-15 11:35:28.091 | + '[' -n /opt/stack/status/stack/n-cpu.failure ']'
*2014-09-15 11:35:28.098 | + die 1164 'More details about the above errors
can be found with screen, with ./rejoin-stack.sh'*
2014-09-15 11:35:28.109 | + local exitcode=0
2014-09-15 11:35:28.126 | [Call Trace]
2014-09-15 11:35:28.139 | ./stack.sh:1313:service_check
*2014-09-15 11:35:28.174 | /opt/stack/devstack/functions-common:1164:die*
*2014-09-15 11:35:28.184 | [ERROR]
/opt/stack/devstack/functions-common:1164 More details about the above
errors can be found with screen, with ./rejoin-stack.sh*

snip


Here is n-cpu screen log:
==

$ cd /opt/stack/nova  /usr/bin/nova-compute --config-file
/etc/nova/nova.conf  echo $! /opt/stack/status/stack/n-cpu.pid; fg ||
echo n-cpu failed to start | tee /opt/stack/status/stack/n-cpu.failure
2 eror
[1] 32476
cd /opt/stack/nova  /usr/bin/nova-compute --config-file
/etc/nova/nova.conf
2014-09-15 08:00:50.685 DEBUG nova.servicegroup.api [-] ServiceGroup driver
defined as an instance of db from (pid=32477) __new__
/opt/stack/nova/nova/servicegroup/api.py:65
2014-09-15 08:00:51.435 INFO nova.openstack.common.periodic_task [-]
Skipping periodic task _periodic_update_dns because its interval is negative
2014-09-15 08:00:52.104 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.178 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.186 INFO nova.virt.driver [-] Loading compute driver
'vmwareapi.VMwareVCDriver'
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.825 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.826 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:26.208 INFO oslo.messaging._drivers.impl_rabbit [-]
Connecting to AMQP server on 10.10.2.2:5672
2014-09-15 08:01:26.235 INFO oslo.messaging._drivers.impl_rabbit [-]
Connected to AMQP server on 10.10.2.2:5672
/usr/lib/python2.6/site-packages/amqp/channel.py:616: VDeprecationWarning:
The auto_delete flag for exchanges has been deprecated and will be removed
from py-amqp v1.5.0.
  warn(VDeprecationWarning(EXCHANGE_AUTODELETE_DEPRECATED))
2014-09-15 08:01:26.244 CRITICAL nova
[req-282f0493-f7d1-4215-bba2-4cf390efc6ac None None] TypeError: __init__()
got an unexpected keyword argument 'namedtuple_as_object'

2014-09-15 08:01:26.244 TRACE nova Traceback (most recent call last):
2014-09-15 08:01:26.244 TRACE nova   File /usr/bin/nova-compute, line 10,
in module
2014-09-15 08:01:26.244 TRACE nova sys.exit(main())
2014-09-15 08:01:26.244 TRACE nova   File
/opt/stack/nova/nova/cmd/compute.py, line 72, in main
2014-09-15 08:01:26.244 TRACE nova db_allowed=CONF.conductor.use_local)
2014-09-15 08:01:26.244 TRACE nova   File
/opt/stack/nova/nova/service.py, line 274, in create
2014-09-15 08:01:26.244 TRACE nova db_allowed=db_allowed)
2014-09-15 08:01:26.244 TRACE nova   File

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 12/09/14 18:00, Mike Bayer wrote:
 
 On Sep 12, 2014, at 11:56 AM, Johannes Erdfelt
 johan...@erdfelt.com wrote:
 
 On Fri, Sep 12, 2014, Doug Hellmann d...@doughellmann.com
 wrote:
 I don’t think we will want to retroactively change the
 migration scripts (that’s not something we generally like to
 do),
 
 We don't allow semantic changes to migration scripts since people
 who have already run it won't get those changes. However, we
 haven't been shy about fixing bugs that prevent the migration
 script from running (which this change would probably fall
 into).
 
 fortunately BEGIN/ COMMIT are not semantic directives. The
 migrations semantically indicated by the script are unaffected in
 any way by these run-environment settings.
 
 
 
 so we should look at changes needed to make sqlalchemy-migrate
 deal with them (by ignoring them, or working around the errors,
 or whatever).
 
 That said, I agree that sqlalchemy-migrate shouldn't be changing
 in a non-backwards compatible way.
 
 on the sqlalchemy-migrate side, the handling of it’s ill-conceived
 “sql script” feature can be further mitigated here by parsing for
 the “COMMIT” line when it breaks out the SQL and ignoring it, I’d
 favor that it emits a warning also.

I went on with ignoring COMMIT specifically in SQL scripts:
https://review.openstack.org/#/c/121517/ Though we could also ignore
other transaction managing statements in those scripts, like ROLLBACK,
they are highly unlikely to occur in migration code, so I ignore them
in the patch.
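
Purely as an illustration of the idea (the actual change lives in the
review linked above), the fix amounts to stripping transaction-control
statements such as COMMIT out of the raw SQL script before it is executed,
roughly:

import re

_COMMIT_RE = re.compile(r'^\s*COMMIT\s*;?\s*$', re.IGNORECASE)


def strip_commits(sql_script):
    """Drop bare COMMIT statements from an upgrade/downgrade SQL script."""
    kept = [line for line in sql_script.splitlines()
            if not _COMMIT_RE.match(line)]
    return '\n'.join(kept)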

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUFtXpAAoJEC5aWaUY1u57++gIAJb8JdVm5Du/6D9o18QRvH9S
vZYtXWbI3637f0bII7rTwMVc5AK3m9s6q1WVuCiNZiFdMhI7YApU2qaC3KGMcxo7
3x+R1ptgbslR9rJj0T8ohMPX4pOVd2Wd0keqNw8plytduaT3tNK6J7Lvc/wqDWkS
BDpIw6p5XWPMqbWzDdkPjIqK7rG6/bqZO8LXDsD1l/l4QjlzXB/qxyW5hFiR/ANe
iAhEAfAmDLRQMs5DFHc6UNaOoh+DjODq7V4hMSQJtwC8x6RmW0mAbBg+Ii21dugD
lqM53C9nIHmGP84jDjKy0W3aLeY0Z0m8ulUNCfGjKWZjy1ng5gRU9voxVse3Xfs=
=Vt8X
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-15 Thread Salvatore Orlando
This is a very important discussion - very closely related to the one going
on in this other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045768.html
.
Unfortunately it is also a discussion that tends to easily fragment and
move in a thousand different directions.
A few months ago I too was of the opinion that vendor plugins and drivers
were the main source of unnecessary load for the core team. I still think
that they're an unnecessarily heavy load, but I reckon the problem does not
really lie with open source versus vendor code. It lies in matching
people's competencies with subsystems and having proper interfaces across
them - as already pointed out in this thread.

I have some more comments inline, but to avoid growing another monster
thread I'd rather start a different, cross-project discussion (which will
hopefully not become just a cross-project monster thread!)

Salvatore

On 15 September 2014 08:29, Germy Lure germy.l...@gmail.com wrote:

 Obviously, to a vendor's plugin/driver, the most important thing is
 API.Yes?
 NB API for a monolithic plugin or a service plugin and SB API for a
 service driver or agent, even MD. That's the basic.
 Now we have released a set of NB APIs with relative stability. The SB
 APIs' standardization are needed.


The internal interface between the API and the plugins is standardized at
the moment through use of classes like [1]. A similar interface exists for
ML2 drivers [2].
The dispatch of an API call to the plugin, or from a plugin to an ML2
driver, is currently a purely local call, so these interfaces are working
fairly well at the moment. I don't know yet, however, whether they will be
sufficient in case plugins are split into different repos. ML2 driver
maintainers have, however, been warned in the past that the driver interface
is to be considered internal and can be changed at any time. This does not
apply to the plugin interface which has been conceived in this way to
facilitate the development of out of tree plugins.

On the other hand, if by SB interfaces you are referring to the RPC
interfaces for communicating between the servers and the various plugins, I
would say that they should be considered internal at the moment.

[1]
https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L28
[2]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py
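
As a rough sketch of the kind of contract [1] defines (a skeleton only -
the method names below are just the familiar CRUD entry points, and a real
plugin must implement every abstract method of the base class):

from neutron import neutron_plugin_base_v2


class MyVendorPlugin(neutron_plugin_base_v2.NeutronPluginBaseV2):
    """Skeleton only - cannot be instantiated until all abstract
    methods of the base class are implemented."""

    def create_network(self, context, network):
        # A real plugin would talk to its backend here.
        raise NotImplementedError()

    def get_network(self, context, id, fields=None):
        raise NotImplementedError()

    # ... plus create/update/delete/get for networks, subnets and ports ...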


 Some comments inline.



 On Fri, Sep 12, 2014 at 5:18 PM, Kevin Benton blak...@gmail.com wrote:

  So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.

 Yes, I think this is currently the view held by the PTL (Kyle) and some
 of the other cores so what you're suggesting will definitely come up at the
 summit.

 Good!


The discussion however will not be that different from the one we're seeing
on that huge thread on splitting out drivers, which has become in my
opinion a frankenthread.
Nevertheless, that thread points out that this is far from being merely a
neutron topic (despite neutron being the project with the highest number of
drivers and plugins).




  Why do we need a different repo to store vendors' codes? That's not the
 community business.
  I think only a proper architecture and normal NBSB API can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.

 The problem is that that architecture won't stay stable if there is no
 shared community plugin depending on its stability. Let me ask you the
 inverse question. Why do you think the reference driver should stay in the
 core repo?

 A separate repo won't have an impact on what is packaged and released so
 it should have no impact on user experience, complete versions,
 providing code examples,  or developing new features. In fact, it will
 likely help with the last two because it will provide a clear delineation
 between what a plugin is responsible for vs. what the core API is
 responsible for. And, because new cores can be added faster to the open
 source plugins repo due to a smaller code base to learn, it will help with
 developing new features by reducing reviewer load.

 OK, the key point is that vendors' code should be kept by themselves NOT
 by the community. But in the same time, the community should provide
 some open source reference as standard examples for those new cores and
 vendors.
 U are right, A separate repo won't have an impact on what is packaged and
 released. The open source can stays in the core repo or a different one.
 In any case, we need them there for referencing and version releasing.
 Any vendor would not maintain the open source codes, the community only.


I think that we are probably focusing too much on the separate repo
issue, which is perhaps being seen as punitive for drivers and plugins.
The separate repo would be just a possible tool for achieving the goal of
reducing the review load imposed by drivers on the core team while keeping
them part of the integrated release.

Re: [openstack-dev] [Neutron] keep old specs

2014-09-15 Thread Kyle Mestery
On Mon, Sep 15, 2014 at 4:52 AM, Kevin Benton blak...@gmail.com wrote:
 I saw that the specs that didn't make the deadline for the feature freeze
 were removed from the tree completely.[1] For easier reference, can we
 instead revert that commit to restore them and then move them into a release
 specific folder called 'unimplemented' or something along those lines?

No, I don't think there's value in keeping specs around which never
made a release. The point of the specs repo is to track things which
made the release.

 It will be nice in the future to browse through the specs for a release and
 see what specs were approved but didn't make it in time. Then if someone
 wants to try to propose it again, their patch can be to move the spec into
 the current cycle and then they only have to make revisions rather than redo
 the whole thing.

It should be easy to re-propose the specs for inclusion in Kilo once
that opens up. You can grab a version of the repo before the removal
commit, pull out the spec, update it and re-propose it.

 It also reduces the number of hoops to jump through to quickly search for a
 spec based on keywords. Otherwise we have to checkout a commit before the
 removal and then search.

 Thoughts, suggestions, or anecdotes about small sailboats?

 1.
 https://github.com/openstack/neutron-specs/commit/77f8c806a49769322b02ea6017a1a2a39ef1cfd7


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Devstack][VCenter][n-cpu]Error: Service n-cpu is not running

2014-09-15 Thread Gary Kotton
Hi,
I am not sure why your setup is not working. The VMware configuration looks 
correct. Please note that the ML2 plugin is currently not supported when 
working with the VMware driver; you should use traditional nova networking. 
There are plans to add this support in Kilo.
Thanks
Gary

From: foss geek thefossg...@gmail.com
Date: Monday, September 15, 2014 at 2:44 PM
To: openst...@lists.openstack.org, OpenStack List 
openstack-dev@lists.openstack.org
Subject: [Openstack] [Devstack][VCenter][n-cpu]Error: Service n-cpu is not 
running

Dear All,

I am using Devstack Icehouse stable version to integrate openstack with 
VCenter.  I am using CentOS 6.5 64 bit.

I am facing the below issue while running ./stack.  Any pointer/help would be 
greatly appreciated.

Here is related error log.


$./stack.sh

snip

2014-09-15 11:35:27.881 | + [[ -x /opt/stack/devstack/local.sh ]]
2014-09-15 11:35:27.898 | + service_check
2014-09-15 11:35:27.910 | + local service
2014-09-15 11:35:27.925 | + local failures
2014-09-15 11:35:27.936 | + SCREEN_NAME=stack
2014-09-15 11:35:27.953 | + SERVICE_DIR=/opt/stack/status
2014-09-15 11:35:27.964 | + [[ ! -d /opt/stack/status/stack ]]
2014-09-15 11:35:27.981 | ++ ls /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:27.999 | + failures=/opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.006 | + for service in '$failures'
2014-09-15 11:35:28.023 | ++ basename /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.034 | + service=n-cpu.failure
2014-09-15 11:35:28.051 | + service=n-cpu
2014-09-15 11:35:28.057 | + echo 'Error: Service n-cpu is not running'
2014-09-15 11:35:28.074 | Error: Service n-cpu is not running
2014-09-15 11:35:28.091 | + '[' -n /opt/stack/status/stack/n-cpu.failure ']'
2014-09-15 11:35:28.098 | + die 1164 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
2014-09-15 11:35:28.109 | + local exitcode=0
2014-09-15 11:35:28.126 | [Call Trace]
2014-09-15 11:35:28.139 | ./stack.sh:1313:service_check
2014-09-15 11:35:28.174 | /opt/stack/devstack/functions-common:1164:die
2014-09-15 11:35:28.184 | [ERROR] /opt/stack/devstack/functions-common:1164 
More details about the above errors can be found with screen, with 
./rejoin-stack.sh

snip


Here is n-cpu screen log:
==

$ cd /opt/stack/nova  /usr/bin/nova-compute --config-file /etc/nova/nova.conf 
 echo $! /opt/stack/status/stack/n-cpu.pid; fg || echo n-cpu failed to 
start | tee /opt/stack/status/stack/n-cpu.failure 2 eror
[1] 32476
cd /opt/stack/nova  /usr/bin/nova-compute --config-file /etc/nova/nova.conf
2014-09-15 08:00:50.685 DEBUG nova.servicegroup.api [-] ServiceGroup driver 
defined as an instance of db from (pid=32477) __new__ 
/opt/stack/nova/nova/servicegroup/api.py:65
2014-09-15 08:00:51.435 INFO nova.openstack.common.periodic_task [-] Skipping 
periodic task _periodic_update_dns because its interval is negative
2014-09-15 08:00:52.104 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.178 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.186 INFO nova.virt.driver [-] Loading compute driver 
'vmwareapi.VMwareVCDriver'
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.825 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.826 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') from (pid=32477) 
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:26.208 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting 
to AMQP server on 
10.10.2.2:5672
2014-09-15 08:01:26.235 INFO oslo.messaging._drivers.impl_rabbit [-] Connected 
to AMQP server on 

[openstack-dev] [Infra] Meeting Tuesday September 16th at 19:00 UTC

2014-09-15 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday September 16th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Sean Dague
On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
 On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
 Just an observation from the last week or so...

 The biggest problem nova faces at the moment isn't code review latency. Our
 biggest problem is failing to fix our bugs so that the gate is reliable.
 The number of rechecks we've done in the last week to try and land code is
 truly startling.
 
 I consider both problems to be pretty much equally as important. I don't
 think solving review latency or test reliabilty in isolation is enough to
 save Nova. We need to tackle both problems as a priority. I tried to avoid
 getting into my concerns about testing in my mail on review team bottlenecks
 since I think we should address the problems independantly / in parallel.
 
 I know that some people are focused by their employers on feature work, but
 those features aren't going to land in a world in which we have to hand
 walk everything through the gate.
 
 Unfortunately the reliability of the gate systems has the highest negative
 impact on productivity right at the point in the dev cycle where we need
 it to have the least impact too.
 
 If we're going to continue to raise the bar in terms of testing coverage
 then we need to have a serious look at the overall approach we use for
 testing because what we do today isn't going to scale, even if it is
 100% reliable. We can't keep adding new CI jobs for each new nova.conf
 setting that introduces a new code path, because each job has major
 implications for resource consumption (number of test nodes, log storage),
 not to mention reliability. I think we need to figure out a way to get
 more targetted testing of features, so we can keep the overall number
 of jobs lower and the tests shorter.
 
 Instead of having a single tempest run that exercises all the Nova
 functionality in one run, we need to figure out how to split it up
 into independant functional areas. For example if we could isolate
 tests which are affected by choice of cinder storage backend, then
 we could run those subset of tests multiple times, once for each
 supported cinder backend. Without this, the combinatorial explosion
 of test jobs is going to kill us.

One of the top issues killing Nova patches last week was a unit test
race (the wsgi worker one). There is no one to blame but Nova for that.
Jay was really the only team member digging into it.

I don't disagree on the disaggregation problem; however, as lots of Nova
devs are ignoring unit test failures at this point, unless that changes no
other disaggregation is going to make anything better.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-15 Thread Alexis Lee
Gregory Haynes said on Tue, Sep 09, 2014 at 06:32:38PM +:
 I have been working on a meta-review of StevenK's reviews and I would
 like to propose him as a new member of our core team.

+1 from me!


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] keep old specs

2014-09-15 Thread Kevin Benton
Some of the specs had a significant amount of detail and thought put into
them. It seems like a waste to bury them in a git tree history.

By having them in a place where external parties (e.g. operators) can
easily find them, they could get more visibility and feedback for any
future revisions. Just being able to see that a feature was previously
designed out and approved can prevent a future person from wasting a bunch
of time typing up a new spec for the same feature. Hardly anyone is going
to search deleted specs from two cycles ago if it requires checking out a
commit.

Why just restrict the whole repo to being documentation of what went in?
If that's all the specs are for, why don't we just wait to create them
until after the code merges?
On Sep 15, 2014 6:16 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Sep 15, 2014 at 4:52 AM, Kevin Benton blak...@gmail.com wrote:
  I saw that the specs that didn't make the deadline for the feature freeze
  were removed from the tree completely.[1] For easier reference, can we
  instead revert that commit to restore them and then move them into a
 release
  specific folder called 'unimplemented' or something along those lines?
 
 No, I don't think there's value to keeping specs along which never
 made a release. The point of the specs repo is to track things which
 made the release.

  It will be nice in the future to browse through the specs for a release
 and
  see what specs were approved but didn't make it in time. Then if someone
  wants to try to propose it again, their patch can be to move the spec
 into
  the current cycle and then they only have to make revisions rather than
 redo
  the whole thing.
 
 It should be easy to re-propose the specs for inclusion in Kilo once
 that opens up. You can grab a version of the repo before the removal
 commit, pull out the spec, update it and re-propose it.

  It also reduces the number of hoops to jump through to quickly search
 for a
  spec based on keywords. Otherwise we have to checkout a commit before the
  removal and then search.
 
  Thoughts, suggestions, or anecdotes about small sailboats?
 
  1.
 
 https://github.com/openstack/neutron-specs/commit/77f8c806a49769322b02ea6017a1a2a39ef1cfd7
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] keep old specs

2014-09-15 Thread Russell Bryant
On 09/15/2014 10:01 AM, Kevin Benton wrote:
 Some of the specs had a significant amount of detail and thought put
 into them. It seems like a waste to bury them in a git tree history.
 
 By having them in a place where external parties (e.g. operators) can
 easily find them, they could get more visibility and feedback for any
 future revisions. Just being able to see that a feature was previously
 designed out and approved can prevent a future person from wasting a
 bunch of time typing up a new spec for the same feature. Hardly anyone
 is going to search deleted specs from two cycles ago if it requires
 checking out a commit.
 
 Why just restrict the whole repo to being documentation of what went
 in?  If that's all the specs are for, why don't we just wait to create
 them until after the code merges?

FWIW, I agree with you that it makes sense to keep them in a directory
that makes it clear that they were not completed.

There's a ton of useful info in them.  Even if they get re-proposed,
it's still useful to see the difference in the proposal as it evolved
between releases.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
On Mon, Sep 15, 2014 at 4:34 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 To arbitrarily restrict the user is a bug.


QFT.

This is why I don't feel like a blueprint should be necessary -- this is a
fairly simple change that fixes what's pretty undeniably a bug. I also
don't see much consensus on whether or not I need to go through the
interminable blueprint process to get this accepted.

So since everyone seems to think that this is at least not a bad idea, and
since no one seems to know why it was originally changed, what stands
between me and a +2?

Thanks.

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Daniel P. Berrange
On Mon, Sep 15, 2014 at 09:21:45AM -0500, Chris St. Pierre wrote:
 On Mon, Sep 15, 2014 at 4:34 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
 
  To arbitrarily restrict the user is a bug.
 
 
 QFT.
 
 This is why I don't feel like a blueprint should be necessary -- this is a
 fairly simple changes that fixes what's pretty undeniably a bug. I also
 don't see much consensus on whether or not I need to go through the
 interminable blueprint process to get this accepted.
 
 So since everyone seems to think that this is at least not a bad idea, and
 since no one seems to know why it was originally changed, what stands
 between me and a +2?

Submit a fix for it and I'll happily +2 it without a blueprint. We're going
to be adopting a more lenient policy on what needs a blueprint in Kilo,
and I don't think this would need one under that proposal anyway.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.i18n 0.4.0 released

2014-09-15 Thread Doug Hellmann
The Oslo team has released version 0.4.0 of oslo.i18n. This version fixes a 
missing dependency on the six library. We expect this to be the last 
pre-release of oslo.i18n before the final release on Thursday.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Russell Bryant
On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
 On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
 Just an observation from the last week or so...

 The biggest problem nova faces at the moment isn't code review latency. Our
 biggest problem is failing to fix our bugs so that the gate is reliable.
 The number of rechecks we've done in the last week to try and land code is
 truly startling.
 
 I consider both problems to be pretty much equally as important. I don't
 think solving review latency or test reliabilty in isolation is enough to
 save Nova. We need to tackle both problems as a priority. I tried to avoid
 getting into my concerns about testing in my mail on review team bottlenecks
 since I think we should address the problems independantly / in parallel.

Agreed with this.  I don't think we can afford to ignore either one of them.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Core team tidy-up

2014-09-15 Thread Zane Bitter
From time to time we have people drift away from core reviewing 
activity on Heat for the long term due to changing employers, changing 
roles or just changing priorities.


It's time for a bit of a clean-up, so I have removed Liang Chen and 
Steve Dake from the heat-core group. We thank them for their past 
efforts on Heat - particularly Steve, who was the founder of the 
project. It goes without saying that we'd be happy to fast-track either 
of them back into the core team should their attention shift back toward 
Heat reviews.


I'm aware that there are some other folks who have been busy with other 
tasks recently and not able to contribute to reviews; however I believe 
that all remaining members of the core team are planning to be actively 
reviewing in the near future. I'll work with the Kilo PTL to follow up 
over the next few months and see whether that proves to be the case or 
if other things come up.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.config 1.4.0.0a5 released

2014-09-15 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslo.config 1.4.0.0a5. We 
expect this to be the final alpha of oslo.config before the final release of 
1.4 on Thursday.

This update includes a fix for variable substitution for deprecated or moved 
options.
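
As a purely illustrative example (not taken from the release notes) of the
kind of usage this fix touches - an option with a deprecated name whose
default interpolates another option:

from oslo.config import cfg

opts = [
    cfg.StrOpt('state_path', default='/var/lib/myservice'),
    cfg.StrOpt('sql_connection',
               deprecated_name='db_connection',
               default='sqlite:///$state_path/myservice.db'),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts)
conf([])  # parse an empty command line
print(conf.sql_connection)  # $state_path is substituted on access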

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.serialization 0.3.0 released

2014-09-15 Thread Doug Hellmann
The Oslo team is pleased to announce the release of version 0.3.0 of 
oslo.serialization. This version updates the dependencies of oslo.serialization 
to be consistent with the other libraries. We expect this to be the last update 
before we release 1.0.0 on Thursday

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-15 Thread Brant Knudson
On Wed, Sep 10, 2014 at 9:14 AM, Sean Dague s...@dague.net wrote:

 Going through the untriaged Nova bugs, and there are a few on a similar
 pattern:

 Nova operation in progress takes a while
 Crosses keystone token expiration time
 Timeout thrown
 Operation fails
 Terrible 500 error sent back to user

 It seems like we should have a standard pattern that on token expiration
 the underlying code at least gives one retry to try to establish a new
 token to complete the flow, however as far as I can tell *no* clients do
 this.

 I know we had to add that into Tempest because tempest runs can exceed 1
 hr, and we want to avoid random fails just because we cross a token
 expiration boundary.

 Anyone closer to the clients that can comment here?

 -Sean


Currently, a service with a token can't always obtain a fresh token, because
the service doesn't always have the user's credentials (which is good...
the service shouldn't have the user's credentials), and even if the
credentials were available the service might not be able to use them to
authenticate (not all authentication is done using username and password).

The most obvious solution to me is to have the identity server provide an
API where, given a token, you can get a new token with an expiration time
of your choice. Use of the API would be limited to service users. When a
service gets a token that it wants to send on to another service it first
uses the existing token to get a new token with whatever expiration time it
thinks would be adequate. If the service knows that it's done with the
token it will hopefully revoke the new token to keep the token database
clean.

The only thing missing from the existing auth API for getting a token from
a token is being able to set the expiration time --
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3.md#authentication-authentication
. Keystone will also have to be enhanced to validate that if the
token-from-token request has a new expiration time the requestor has the
required role.
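
As an illustration, a minimal sketch of a v3 token-from-token request with
the existing API might look like the following (the endpoint is
illustrative, it assumes the requests library, and the expiration override
is shown only as a hypothetical field since it does not exist today):

import json
import requests

KEYSTONE = 'http://keystone.example.com:5000/v3'  # illustrative endpoint


def token_from_token(existing_token):
    body = {
        'auth': {
            'identity': {
                'methods': ['token'],
                'token': {'id': existing_token},
                # Hypothetical, per the proposal above; not part of the
                # current API:
                # 'expires_at': '2014-09-16T00:00:00Z',
            }
        }
    }
    resp = requests.post(KEYSTONE + '/auth/tokens',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    resp.raise_for_status()
    # The new token id is returned in the X-Subject-Token header.
    return resp.headers['X-Subject-Token']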

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Error in deploying ironicon Ubuntu 12.04

2014-09-15 Thread Peeyush Gupta

Hi,

I reran the script after commenting out the docker.io line. Here is the 
error I am getting:


2014-09-15 13:25:04.977 | + mysql -uroot -ppassword -h127.0.0.1 -e 'DROP 
DATABASE IF EXISTS nova;'
2014-09-15 13:25:04.981 | + mysql -uroot -ppassword -h127.0.0.1 -e 
'CREATE DATABASE nova CHARACTER SET latin1;'

2014-09-15 13:25:04.985 | + /usr/local/bin/nova-manage db sync
2014-09-15 13:25:04.985 | /opt/stack/devstack/lib/nova: line 591: 
/usr/local/bin/nova-manage: No such file or directory

2014-09-15 13:25:04.986 | + exit_trap
2014-09-15 13:25:04.986 | + local r=1
2014-09-15 13:25:04.986 | ++ jobs -p
2014-09-15 13:25:04.987 | + jobs=
2014-09-15 13:25:04.987 | + [[ -n '' ]]
2014-09-15 13:25:04.987 | + kill_spinner
2014-09-15 13:25:04.987 | + '[' '!' -z '' ']'
2014-09-15 13:25:04.988 | + [[ 1 -ne 0 ]]
2014-09-15 13:25:04.988 | + echo 'Error on exit'
2014-09-15 13:25:04.988 | Error on exit
2014-09-15 13:25:04.988 | + [[ -z /opt/stack ]]
2014-09-15 13:25:04.988 | + /opt/stack/devstack/tools/worlddump.py -d 
/opt/stack

2014-09-15 13:25:05.060 | + exit 1

I have nova installed on my machine, but not nova-manage. Any idea why 
this is happening now?


On 09/11/2014 06:49 PM, Jim Rollenhagen wrote:


On September 11, 2014 3:52:59 AM PDT, Lucas Alvares Gomes 
lucasago...@gmail.com wrote:

Oh, it's because Precise doesn't have the docker.io package[1] (nor
docker).

AFAIK the -infra team is now using Trusty in gate, so it won't be a
problem. But if you think that we should still support Ironic DevStack
with Precise please file a bug about it so the Ironic team can take a
look on that.

[1]
http://packages.ubuntu.com/search?suite=trusty&section=all&arch=any&keywords=docker.io&searchon=names

Cheers,
Lucas

On Thu, Sep 11, 2014 at 11:12 AM, Peeyush gpeey...@linux.vnet.ibm.com
wrote:

Hi all,

I have been trying to deploy Openstack-ironic on a Ubuntu 12.04 VM.
I encountered the following error:

2014-09-11 10:08:11.166 | Reading package lists...
2014-09-11 10:08:11.471 | Building dependency tree...
2014-09-11 10:08:11.475 | Reading state information...
2014-09-11 10:08:11.610 | E: Unable to locate package docker.io
2014-09-11 10:08:11.610 | E: Couldn't find any package by regex

'docker.io'

2014-09-11 10:08:11.611 | + exit_trap
2014-09-11 10:08:11.612 | + local r=100
2014-09-11 10:08:11.612 | ++ jobs -p
2014-09-11 10:08:11.612 | + jobs=
2014-09-11 10:08:11.612 | + [[ -n '' ]]
2014-09-11 10:08:11.612 | + kill_spinner
2014-09-11 10:08:11.613 | + '[' '!' -z '' ']'
2014-09-11 10:08:11.613 | + [[ 100 -ne 0 ]]
2014-09-11 10:08:11.613 | + echo 'Error on exit'
2014-09-11 10:08:11.613 | Error on exit
2014-09-11 10:08:11.613 | + [[ -z /opt/stack ]]
2014-09-11 10:08:11.613 | + ./tools/worlddump.py -d /opt/stack
2014-09-11 10:08:11.655 | + exit 100

I tried to make it work on a separate machine, but got the same

error.

I understand that it could be because script is looking for docker.io
package,
but I guess only docker package is available. I tried to install

docker.io,

but couldn't
find it.

Can you please help me out to resolve this?

Ouch. I added this as a dependency in devstack for building IPA.

As Lucas said, it works fine in 14.04. In 12.04, and if using Ironic with the 
PXE driver (default), you can likely remove that line from 
devstack/files/apts/ironic. I won't promise that everything will work after 
that, but chances are good.

// jim

Thanks,

--
Peeyush Gupta
gpeey...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Peeyush Gupta
gpeey...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread James Slagle
On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy sha...@redhat.com wrote:
 All,

 Starting this thread as a follow-up to a strongly negative reaction by the
 Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
 subsequent very detailed justification and discussion of why they may be
 useful in this spec[2].

 Back in Atlanta, I had some discussions with folks interesting in making
 ready state[3] preparation of bare-metal resources possible when
 deploying bare-metal nodes via TripleO/Heat/Ironic.

After a cursory reading of the references, it seems there's a couple of issues:
- are the features to move hardware to a ready-state even going to
be in Ironic proper, whether that means in ironic at all or just in
contrib.
- assuming some of the features are there, should Heat have any Ironic
resources given that Ironic's API is admin-only.


 The initial assumption is that there is some discovery step (either
 automatic or static generation of a manifest of nodes), that can be input
 to either Ironic or Heat.

I think it makes a lot of sense to use Heat to do the bulk
registration of nodes via Ironic. I understand the argument that the
Ironic API should be admin-only a little bit for the non-TripleO
case, but for TripleO, we only have admins interfacing with the
Undercloud. The user of a TripleO undercloud is the deployer/operator
and in some scenarios this may not be the undercloud admin. So,
talking about TripleO, I don't really buy that the Ironic API is
admin-only.

Therefore, why not have some declarative Heat resources for things
like Ironic nodes, that the deployer can make use of in a Heat
template to do bulk node registration?

The alternative listed in the spec:

Don’t implement the resources and rely on scripts which directly
interact with the Ironic API, prior to any orchestration via Heat.

would just be a bit silly IMO. That goes against one of the main
drivers of TripleO, which is to use OpenStack wherever possible. Why
go off and write some other thing that is going to parse a
json/yaml/csv of nodes and orchestrate a bunch of Ironic api calls?
Why would it be ok for that other thing to use Ironic's admin-only
API yet claim it's not ok for Heat on the undercloud to do so?
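
For context, a rough sketch of what such a standalone registration script
would look like (assuming python-ironicclient; the credentials and the
manifest field names below are illustrative):

import yaml
from ironicclient import client as ironic_client


def register_nodes(manifest_path):
    # Illustrative credentials; a real script would read these from the
    # environment or a config file.
    ironic = ironic_client.get_client(
        1,
        os_username='admin', os_password='password',
        os_tenant_name='admin',
        os_auth_url='http://undercloud.example.com:5000/v2.0')

    with open(manifest_path) as f:
        nodes = yaml.safe_load(f)['nodes']

    for node_def in nodes:
        node = ironic.node.create(
            driver='pxe_ipmitool',
            driver_info={'ipmi_address': node_def['ipmi_address'],
                         'ipmi_username': node_def['ipmi_user'],
                         'ipmi_password': node_def['ipmi_password']},
            properties={'cpus': node_def['cpus'],
                        'memory_mb': node_def['memory_mb'],
                        'local_gb': node_def['disk_gb']})
        for mac in node_def['macs']:
            ironic.port.create(node_uuid=node.uuid, address=mac)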


 Following discovery, but before an undercloud deploying OpenStack onto the
 nodes, there are a few steps which may be desired, to get the hardware into
 a state where it's ready and fully optimized for the subsequent deployment:

 - Updating and aligning firmware to meet requirements of qualification or
   site policy
 - Optimization of BIOS configuration to match workloads the node is
   expected to run
 - Management of machine-local storage, e.g configuring local RAID for
   optimal resilience or performance.

 Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
 of these steps possible, but there's no easy way to either encapsulate the
 (currently mostly vendor specific) data associated with each step, or to
 coordinate sequencing of the steps.

 What is required is some tool to take a text definition of the required
 configuration, turn it into a correctly sequenced series of API calls to
 Ironic, expose any data associated with those API calls, and declare
 success or failure on completion.  This is what Heat does.

 So the idea is to create some basic (contrib, disabled by default) Ironic
 heat resources, then explore the idea of orchestrating ready-state
 configuration via Heat.

 Given that Devananda and I have been banging heads over this for some time
 now, I'd like to get broader feedback of the idea, my interpretation of
 ready state applied to the tripleo undercloud, and any alternative
 implementation ideas.

My opinion is that if the features are in Ironic, they should be
exposed via Heat resources for orchestration. If the TripleO case is
too much of a one-off (which I don't really think it is), then sure,
keep it all in contrib so that no one gets confused about why the
resources are there.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Heat dependency visualisation

2014-09-15 Thread Alexis Lee
For your amusement,

https://github.com/lxsli/heat-viz

This produces HTML which shows which StructuredDeployments (boxes)
depends_on each other (bold arrows). It also shows the
StructuredDeployments which StructuredConfigs (ovals) feed into (normal
arrows).

Both CFN + HOT format files should be supported. Thanks to Steve Baker
for the code I nicked, ahem, reused from merge.py.
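
For the curious, the underlying idea can be sketched in a few lines of
Python (a toy version only - heat-viz itself handles both CFN and HOT and
renders HTML rather than graphviz dot):

# Print depends_on edges of a HOT template as graphviz dot.
import sys
import yaml


def dot_from_template(path):
    with open(path) as f:
        template = yaml.safe_load(f)
    print('digraph heat {')
    for name, res in template.get('resources', {}).items():
        deps = res.get('depends_on', [])
        if not isinstance(deps, list):
            deps = [deps]
        for dep in deps:
            print('  "%s" -> "%s";' % (name, dep))
    print('}')


if __name__ == '__main__':
    dot_from_template(sys.argv[1])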


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-15 Thread Radomir Dopieralski
On 12/09/14 17:11, Doug Hellmann wrote:

 I also use git-hooks with a post-checkout script to remove pyc files any time 
 I change between branches, which is especially helpful if the different 
 branches have code being moved around:
 
 git-hooks: https://github.com/icefox/git-hooks
 
 The script:
 
 $ cat ~/.git_hooks/post-checkout/remove_pyc
 #!/bin/sh
 echo Removing pyc files from `pwd`
 find . -name '*.pyc' | xargs rm -f
 exit 0

Good thing that python modules can't have spaces in their names! But for
the future, find has a -delete parameter that won't break horribly on
strange filenames.

find . -name '*.pyc' -delete

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.middleware 0.1.0 release

2014-09-15 Thread gordon chung
on behalf of the Oslo team, we're pleased to announce the initial public 
release of oslo.middleware (version 0.1.0). this library contains WSGI 
middleware, previously available under openstack/common/middleware, that 
provides additional functionality to the api pipeline.

the oslo.middleware library is intended to be adopted for Kilo, as the 
middleware code in oslo-incubator is deprecated.

please report any issues using the oslo.middleware tracker: 
https://bugs.launchpad.net/oslo.middleware

cheers,
gord
  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Steven Hardy
On Mon, Sep 15, 2014 at 11:15:21AM -0400, James Slagle wrote:
 On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy sha...@redhat.com wrote:
  All,
 
  Starting this thread as a follow-up to a strongly negative reaction by the
  Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
  subsequent very detailed justification and discussion of why they may be
  useful in this spec[2].
 
 Back in Atlanta, I had some discussions with folks interested in making
  ready state[3] preparation of bare-metal resources possible when
  deploying bare-metal nodes via TripleO/Heat/Ironic.
 
 After a cursory reading of the references, it seems there's a couple of 
 issues:
 - are the features to move hardware to a ready-state even going to
 be in Ironic proper, whether that means in ironic at all or just in
 contrib.

Initially at least, it sounds like much of what ready-state entails would
be via Ironic vendor-specific extensions.

But that's not necessarily an insurmountable problem IMO - encapsulating the
vendor-specific stuff inside some provider resource templates would
actually enable easier substitution of different vendors' stuff based on
deployment-time choices (user input or data from autodiscovery).

 - assuming some of the features are there, should Heat have any Ironic
 resources given that Ironic's API is admin-only.

Yes, we have some admin-only resources already, but they're restricted to
contrib, as generally we'd rather keep the main tree (default enabled)
resources accessible to all normal users.

So, given that all actors in the undercloud will be admins, as noted in the
spec, I don't think this is a problem given that the primary use-case for
this stuff is TripleO.

  The initial assumption is that there is some discovery step (either
  automatic or static generation of a manifest of nodes), that can be input
  to either Ironic or Heat.
 
 I think it makes a lot of sense to use Heat to do the bulk
 registration of nodes via Ironic. I understand the argument that the
 Ironic API should be admin-only a little bit for the non-TripleO
 case, but for TripleO, we only have admins interfacing with the
 Undercloud. The user of a TripleO undercloud is the deployer/operator
 and in some scenarios this may not be the undercloud admin. So,
 talking about TripleO, I don't really buy that the Ironic API is
 admin-only.

\o/

 Therefore, why not have some declarative Heat resources for things
 like Ironic nodes, that the deployer can make use of in a Heat
 template to do bulk node registration?
 
 The alternative listed in the spec:
 
 Don’t implement the resources and rely on scripts which directly
 interact with the Ironic API, prior to any orchestration via Heat.
 
 would just be a bit silly IMO. That goes against one of the main
 drivers of TripleO, which is to use OpenStack wherever possible. Why
 go off and write some other thing that is going to parse a
 json/yaml/csv of nodes and orchestrate a bunch of Ironic api calls?
 Why would it be ok for that other thing to use Ironic's admin-only
 API yet claim it's not ok for Heat on the undercloud to do so?

Yeah this mirrors my thoughts, I suppose the alternative is somewhat
flippant, but I was trying to illustrate that exposing the Ironic API via
Heat provides some interesting opportunities for reuse of existing Heat
capabilities.

For example, today, I've been looking at the steps required for driving
autodiscovery:

https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno

Driving this process looks a lot like application orchestration:

1. Take some input (IPMI credentials and MAC addresses)
2. Maybe build an image and ramdisk (could drop credentials in)
3. Interact with the Ironic API to register nodes in maintenance mode
4. Boot the nodes, monitor state, wait for a signal back containing some
   data obtained during discovery (same as WaitConditions or
   SoftwareDeployment resources in Heat..)
5. Shutdown the nodes and mark them ready for use by nova

At some point near the end of this sequence, you could optionally insert
the ready state workflow described in the spec.

So I guess my question then becomes, regardless of ready state, what is
expected to drive the steps above if it's not Heat?

I'm not really clear what the plan is, but it certainly seems like it'd be
a win from a TripleO perspective if the existing tooling (e.g. Heat) could be
reused?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Jay Pipes

On 09/15/2014 10:21 AM, Chris St. Pierre wrote:

On Mon, Sep 15, 2014 at 4:34 AM, Daniel P. Berrange berra...@redhat.com
mailto:berra...@redhat.com wrote:

To arbitrarily restrict the user is a bug.


QFT.

This is why I don't feel like a blueprint should be necessary -- this is
a fairly simple change that fixes what's pretty undeniably a bug. I
also don't see much consensus on whether or not I need to go through the
interminable blueprint process to get this accepted.

So since everyone seems to think that this is at least not a bad idea,
and since no one seems to know why it was originally changed,


I believe I did:

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045924.html

what stands between me and a +2?


Bug fix priorities, feature freeze exceptions, and review load.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Adding hp1 back running tripleo CI

2014-09-15 Thread Derek Higgins
I've been running overcloud CI tests on hp1 to establish if its ready to
turn back on running real CI, I'd like to add this back in soon but
first have some numbers we should look at and make some decisions

The hp1 cloud throws up more false negatives than rh1; nearly all of
these are either problems within the nova bm driver or the neutron l3
agent. Things improve from a pass rate of somewhere around 40% to about
85% with the following 2 patches:
https://review.openstack.org/#/c/121492/ # ensure l3 agent doesn't fail
if neutron-server isn't ready
https://review.openstack.org/#/c/121155/ # Increase sleep times in
nova-bm driver

With these 2 patches I think the pass rate is acceptable, but there is a
difference in runtime: overcloud jobs run in about 140 minutes (rh1 is
averaging about 95 minutes).

We are using VMs with 2G of memory; with 3G VMs the runtime goes down
to about 120 minutes. This is an option to save a little time, but we end
up losing 33% of our capacity (in simultaneous jobs).

How would people feel about turning back on hp1 and increasing the
timeout to allow for the increased runtimes?

While making changes we should also consider switching back
to x86_64 and bumping VMs to 4G, essentially halving the number of jobs
we can simultaneously run, but CI would test what most deployments would
actually be using.

Also, it's worth noting the test I have been using to compare jobs is the
F20 overcloud job. Something has happened recently causing this job to
run slower than it used to run (possibly up to 30 minutes slower); I'll
now try to get to the bottom of this. So the times may not end up being
as high as referenced above, but I'm assuming the relative differences
between the two clouds won't change.

thoughts?
Derek

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Jay Pipes

On 09/15/2014 04:01 AM, Nikola Đipanov wrote:

On 09/13/2014 11:07 PM, Michael Still wrote:

Just an observation from the last week or so...

The biggest problem nova faces at the moment isn't code review latency.
Our biggest problem is failing to fix our bugs so that the gate is
reliable. The number of rechecks we've done in the last week to try and
land code is truly startling.



This is exactly what I was saying in my ranty email from 2 weeks ago
[1]. Debt is everywhere and, like any debt, it is unlikely to go away on
its own.



I know that some people are focused by their employers on feature work,
but those features aren't going to land in a world in which we have to
hand walk everything through the gate.



The thing is that - without doing work on the code - you cannot know
where the real issues are. You cannot look at a codebase as big as Nova
and say, hmmm looks like we need to fix the resource tracker. You can
know that only if you are neck-deep in the stuff. And then you need to
agree on what is really bad and what is just distasteful, and then focus
the efforts on that. None of the things we've put in place (specs, the
way we do and organize code review and bugs) acknowledge or help this
part of the development process.

I tried to explain this in my previous ranty email [1] but I guess I
failed due to ranting :) so let me try again: Nova team needs to act as
a development team.

We are not in a place (yet?) where we can just overlook the addition of
features based on whether they are appropriate for our use case. We have
to work together on a set of important things to get Nova to where we
think it needs to be and make sure we get it done - by actually doing
it! (*)

However - I don't think freezing development of features for a cycle is
a viable option - this is just not how software in the real world gets
done. It will likely be the worst possible thing we can do, no matter
how appealing it seems to us as developers.

But we do need to be extremely strict on what we let in, and under which
conditions! As I mentioned to sdague on IRC the other day (yes, I am
quoting myself :) ): Not all features are the same - there are
features that are better, that are coded better, and are integrated
better - we should be wanting those features always! Then there are
features that are a net negative on the code - we should *never* want
those features. And then there are features in the middle - we may want
to cut those or push them back depending on a number of things that are
important. Things like: code quality, can it fit within the current
constraints, can we let it in like that, or some work needs to happen
first. Things which we haven't been really good at considering
previously IMHO.

But you can't really judge that unless you are actively developing Nova
yourself, and have a tighter grip on the proposed code than what our
current process gives.

Peace!
N.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044722.html

(*) The only effort like this going on at the moment in Nova is the
Objects work done by dansmith (even thought there are several others
proposed) - I will let the readers judge how much of an impact it was in
only 2 short cycles, from just a single effort.


+1 Well said, Nikola.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-15 Thread Zane Bitter

On 14/09/14 11:09, Clint Byrum wrote:

Excerpts from Gauvain Pocentek's message of 2014-09-04 22:29:05 -0700:

Hi,

A bit of background: I'm working on the publication of the HOT
resources reference on docs.openstack.org. This book is mostly
autogenerated from the heat source code, using the sphinx XML output. To
avoid publishing several references (one per released version, as is
done for the OpenStack config-reference), I'd like to add information
about the support status of each resource (when they appeared, when
they've been deprecated, and so on).

So the plan is to use the SupportStatus class and its `version`
attribute (see https://review.openstack.org/#/c/116443/ ). And the
question is, what information should the version attribute hold?
Possibilities include the release code name (Icehouse, Juno), or the
release version (2014.1, 2014.2). But this wouldn't be useful for users
of clouds continuously deployed.

  From my documenter point of view, using the code name seems the right
option, because it fits with the rest of the documentation.

What do you think would be the best choice from the heat devs POV?


What we ship in-tree is the standard library for Heat. I think Heat
should not tie things to the release of OpenStack, but only to itself.


Standard Library implies that everyone has it available, but in 
reality operators can (and will, and do) deploy any combination of 
resource types that they want.



The idea is to simply version the standard library of resources separately
even from the language. Added resources and properties would be minor
bumps, deprecating or removing anything would be a major bump. Users then
just need an API call that allows querying the standard library version.


We already have API calls to actually inspect resource types. I don't 
think a semantic version number is helpful here, since the different 
existing combinations of resources types are not expressible linearly.


There's no really good answer here, but the only real answer is making 
sure it's easy for people to generate the docs themselves for their 
actual deployment.



With this scheme, we can provide a gate test that prevents breaking the
rules, and automatically generate the docs still. Doing this would sync
better with continuous deployers who will be running Juno well before
there is a 2014.2.


Maybe continuous deployers should continuously deploy their own docs? 
For any given cloud the only thing that matters is what it supports 
right now.



Anyway, Heat largely exists to support portability of apps between
OpenStack clouds. Many many OpenStack clouds don't run one release,
and we don't require them to do so. So tying to the release is, IMO,
a poor choice.


The original question was about docs.openstack.org, and in that context 
I think tying it to the release version is a good choice, because 
that's... how OpenStack is released. Individual clouds, however, really 
need to deploy their own docs that document what they actually support.


The flip side of this, of course, is that whatever we use for the 
version strings on docs.openstack.org will all make its way into all the 
other documentation that gets built, and I do understand your point in 
that context. But versioning the standard library of plugins as if it 
were a monolithic, always-available thing seems wrong to me.



We do the same thing with HOT's internals, so why not also
do the standard library this way?


The current process for HOT is for every OpenStack development cycle 
(Juno is the first to use this) to give it a 'version' string that is 
the expected date of the next release (in the future), and continuous 
deployers who use the new one before that date are on their own (i.e. 
it's not considered stable). So not really comparable.
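
For concreteness, the annotation this thread is about gets attached to 
resource properties roughly like this (a sketch based on the proposed 
review, not merged code - the string passed as version is exactly the 
open question):

# sketch: marking a new property with the proposed SupportStatus 'version'
# attribute; heat.engine.support today carries status/message, the version
# argument is what the linked review adds
from heat.engine import properties
from heat.engine import support

properties_schema = {
    'new_option': properties.Schema(
        properties.Schema.STRING,
        'A property introduced this cycle.',
        support_status=support.SupportStatus(version='2014.2'),
    ),
}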


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Keith Basil
On Sep 15, 2014, at 12:00 PM, Steven Hardy wrote:

 On Mon, Sep 15, 2014 at 11:15:21AM -0400, James Slagle wrote:
 On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy sha...@redhat.com wrote:
 All,
 
 Starting this thread as a follow-up to a strongly negative reaction by the
 Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
 subsequent very detailed justification and discussion of why they may be
 useful in this spec[2].
 
  Back in Atlanta, I had some discussions with folks interested in making
 ready state[3] preparation of bare-metal resources possible when
 deploying bare-metal nodes via TripleO/Heat/Ironic.

Additional thinking and background behind Ready State.

http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/

-k


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-15 Thread Anita Kuno
On 09/12/2014 11:54 AM, Thierry Carrez wrote:
 Anita Kuno wrote:
 My question involves third party discussions. Now I know at least
 Neutron is going to have a chat about drivers which involves third party
 ci accounts as a supportive aspect of that discussion, but I am
 wondering about the framework for a discussion which I can recommend
 attendees of the third party meetings to attend. They are shaping up to
 be an attentive, forward thinking group and are supporting each other
 which I was hoping for from the beginning so I am very heartened by our
 progress. I am feeling that as a group folks have questions and concerns
 they would appreciate the opportunity to air in a mutually constructive
 venue.

 What day and where would be the mutually constructive venue?

 I held off on Joe's thread since third party ci affects 4 or 5 programs,
 not enough to qualify in my mind as a topic that is OpenStack wide, but
 the programs it affects are quite affected, so I do feel it is time to
 mention it.
 
 I think those discussions could happen in a cross-project workshop.
 We'll run 2 or 3 of those in parallel all day Tuesday, so there is
 definitely room there.
 
Thanks Thierry:

The etherpad to co-ordinate discussions and prioritization of third party
items has been created:
https://etherpad.openstack.org/p/kilo-third-party-items

It will be announced at the third party meeting today:
https://wiki.openstack.org/wiki/Meetings/ThirdParty#09.2F15.2F14
and is linked on this etherpad:
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics which
is linked from this page: https://wiki.openstack.org/wiki/Summit/Planning

Be sure to read the instructions at the top of the etherpad, which state:
This is the planning document for the third party items we would like to
discuss at Kilo Summit in Paris, November 2014
Please add items below, they will be discussed at the weekly third party
meeting and top priorities will be selected. If there is an item which
is of particular importance to you, I highly recommend that you or a
delegate regularly attend the weekly third party meetings so that others
may understand your perspective and take it into account when item
priority is decided. While we welcome your submission and especially
your participation, adding items to this etherpad is not a guarantee
that your item will be discussed during the allotted time; we will have to
prioritize items to fit our timeslot.

Weekly third party meeting:
https://wiki.openstack.org/wiki/Meetings/ThirdParty

Please bring any questions to today's or a subsequent third party meeting
so we may discuss them.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-09-15 08:15:21 -0700:
 On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy sha...@redhat.com wrote:
  All,
 
  Starting this thread as a follow-up to a strongly negative reaction by the
  Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
  subsequent very detailed justification and discussion of why they may be
  useful in this spec[2].
 
   Back in Atlanta, I had some discussions with folks interested in making
  ready state[3] preparation of bare-metal resources possible when
  deploying bare-metal nodes via TripleO/Heat/Ironic.
 
 After a cursory reading of the references, it seems there's a couple of 
 issues:
 - are the features to move hardware to a ready-state even going to
 be in Ironic proper, whether that means in ironic at all or just in
 contrib.
 - assuming some of the features are there, should Heat have any Ironic
 resources given that Ironic's API is admin-only.
 
 
  The initial assumption is that there is some discovery step (either
  automatic or static generation of a manifest of nodes), that can be input
  to either Ironic or Heat.
 
 I think it makes a lot of sense to use Heat to do the bulk
 registration of nodes via Ironic. I understand the argument that the
 Ironic API should be admin-only a little bit for the non-TripleO
 case, but for TripleO, we only have admins interfacing with the
 Undercloud. The user of a TripleO undercloud is the deployer/operator
 and in some scenarios this may not be the undercloud admin. So,
 talking about TripleO, I don't really buy that the Ironic API is
 admin-only.
 
 Therefore, why not have some declarative Heat resources for things
 like Ironic nodes, that the deployer can make use of in a Heat
 template to do bulk node registration?
 
 The alternative listed in the spec:
 
 Don’t implement the resources and rely on scripts which directly
 interact with the Ironic API, prior to any orchestration via Heat.
 
 would just be a bit silly IMO. That goes against one of the main
 drivers of TripleO, which is to use OpenStack wherever possible. Why
 go off and write some other thing that is going to parse a
 json/yaml/csv of nodes and orchestrate a bunch of Ironic api calls?
 Why would it be ok for that other thing to use Ironic's admin-only
 API yet claim it's not ok for Heat on the undercloud to do so?
 

An alternative that is missed is to just define a bulk-loading format
for hardware, or adopt an existing one (I find it hard to believe there
isn't already an open format for this), and make use of it in Ironic.

The analogy I'd use is shipping dry goods in a refrigerated truck.
It's heavier, has a bit less capacity, and unnecessary features.  If all
you have is the refrigerated truck, ok. But we're talking about _building_
a special dry-goods add-on to our refrigerated truck (Heat) to avoid
building the same thing into the regular trucks we already have (Ironic).

  Following discovery, but before an undercloud deploying OpenStack onto the
  nodes, there are a few steps which may be desired, to get the hardware into
  a state where it's ready and fully optimized for the subsequent deployment:
 
  - Updating and aligning firmware to meet requirements of qualification or
site policy
  - Optimization of BIOS configuration to match workloads the node is
expected to run
  - Management of machine-local storage, e.g configuring local RAID for
optimal resilience or performance.
 
  Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
  of these steps possible, but there's no easy way to either encapsulate the
  (currently mostly vendor specific) data associated with each step, or to
  coordinate sequencing of the steps.
 
  What is required is some tool to take a text definition of the required
  configuration, turn it into a correctly sequenced series of API calls to
  Ironic, expose any data associated with those API calls, and declare
  success or failure on completion.  This is what Heat does.
 
  So the idea is to create some basic (contrib, disabled by default) Ironic
  heat resources, then explore the idea of orchestrating ready-state
  configuration via Heat.
 
  Given that Devananda and I have been banging heads over this for some time
  now, I'd like to get broader feedback of the idea, my interpretation of
  ready state applied to the tripleo undercloud, and any alternative
  implementation ideas.
 
 My opinion is that if the features are in Ironic, they should be
 exposed via Heat resources for orchestration. If the TripleO case is
 too much of a one-off (which I don't really think it is), then sure,
 keep it all in contrib so that no one gets confused about why the
 resources are there.
 

And I think if this is a common thing that Ironic users need to do,
then Ironic should do it, not Heat.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Steven Hardy
On Mon, Sep 15, 2014 at 09:50:24AM -0700, Clint Byrum wrote:
 Excerpts from Steven Hardy's message of 2014-09-15 04:44:24 -0700:
  All,
  
  Starting this thread as a follow-up to a strongly negative reaction by the
  Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
  subsequent very detailed justification and discussion of why they may be
  useful in this spec[2].
  
   Back in Atlanta, I had some discussions with folks interested in making
  ready state[3] preparation of bare-metal resources possible when
  deploying bare-metal nodes via TripleO/Heat/Ironic.
  
  The initial assumption is that there is some discovery step (either
  automatic or static generation of a manifest of nodes), that can be input
  to either Ironic or Heat.
  
  Following discovery, but before an undercloud deploying OpenStack onto the
  nodes, there are a few steps which may be desired, to get the hardware into
  a state where it's ready and fully optimized for the subsequent deployment:
  
  - Updating and aligning firmware to meet requirements of qualification or
site policy
  - Optimization of BIOS configuration to match workloads the node is
expected to run
  - Management of machine-local storage, e.g configuring local RAID for
optimal resilience or performance.
  
  Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
  of these steps possible, but there's no easy way to either encapsulate the
  (currently mostly vendor specific) data associated with each step, or to
  coordinate sequencing of the steps.
  
 
 First, Ironic is hidden under Nova as far as TripleO is concerned. So
 mucking with the servers underneath Nova during deployment is a difficult
 proposition. Would I look up the Ironic node ID of the nova server,
 and then optimize it for the workload after the workload arrived? Why
 wouldn't I just do that optimization before the deployment?

That's exactly what I'm proposing - a series of preparatory steps performed
before the node is visible to nova, before the deployment.

The whole point is that Ironic is hidden under nova, and provides no way to
perform these pre-deploy steps via interaction with nova.

 
  What is required is some tool to take a text definition of the required
  configuration, turn it into a correctly sequenced series of API calls to
  Ironic, expose any data associated with those API calls, and declare
  success or failure on completion.  This is what Heat does.
  
 
 I'd rather see Ironic define or adopt a narrow scope document format
 that it can consume for bulk loading. Heat is extremely generic, and thus
 carries a ton of complexity for what is probably doable with a CSV file.

Perhaps you can read the spec - it's not really about the bulk-load part,
it's about orchestrating the steps to prepare the node, after it's
registered with Ironic, but before it's ready to have the stuff deployed to
it.

What tool do you think will just do that optimization before the
deployment? (snark not intended, I genuinely want to know, is it scripts
in TripleO, some sysadmin pre-deploy steps, magic in Ironic?)

  So the idea is to create some basic (contrib, disabled by default) Ironic
  heat resources, then explore the idea of orchestrating ready-state
  configuration via Heat.
  
  Given that Devananda and I have been banging heads over this for some time
  now, I'd like to get broader feedback of the idea, my interpretation of
  ready state applied to the tripleo undercloud, and any alternative
  implementation ideas.
  
 
 I think there may be value in being able to tie Ironic calls to other
 OpenStack API calls. I'm dubious that this is an important idea, but
 I think if somebody wants to step forward with their use case for it,
 then the resources might make sense. However, I realy don't see the
 _enrollment_ phase as capturing any value that isn't entirely offset by
 added complexity.

I've stepped forward with a use-case for it, in the spec, and it's not
about the enrollment phase (although Heat could do that, if for example it
was used to manage the series of steps required for autodiscovery).

If there's a better way to do this, I'm very happy to hear about it, but I
am getting a little tired of folks telling me the idea is dubious without
offering any alternative solution to the requirement :(

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-15 Thread Davanum Srinivas
+1 to Rados for oslo.vmware team

-- dims

On Mon, Sep 15, 2014 at 12:37 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I would like to propose Radoslav to be a core team member. Over the course
 of the J cycle he has been great with the reviews, bug fixes and updates to
 the project.
 Can the other core team members please update with your votes if you agree
 or not.
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-15 Thread Arnaud Legendre
+1

On Sep 15, 2014, at 9:37 AM, Gary Kotton 
gkot...@vmware.com wrote:

Hi,
I would like to propose Radoslav to be a core team member. Over the course of 
the J cycle he has been great with the reviews, bug fixes and updates to the 
project.
Can the other core team members please update with your votes if you agree or 
not.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Cluster implementation is grabbing instance's guts

2014-09-15 Thread Lowery, Mathew
I agree with your suggestion to stop hitting the service_statuses table 
directly and instead hit the instance model. But now I have an observation:

Nova is already being called here as part of the polling done by 
FreshInstanceTasks#create_instance 
(https://github.com/openstack/trove/blob/06196fcf67b27f0308381da192da5cc8ae65b157/trove/taskmanager/models.py#L413)
 and I think that a call to the instance model (which would hit Nova also) will 
double up on those Nova calls. So is the implication that we cannot re-use 
create_instance verbatim and rather we should (1) use instance.status in 
ClusterTasks#create_cluster and (2) modify FreshInstanceTasks#create_instance 
so that it doesn't poll the instance when cluster_config is not None?

It could be we need to introduce another status besides BUILD to instance 
statuses, or we need to introduce a new internal property to the SimpleInstance 
base class we can check.

I don't understand the above. What does the new status give you?

From: Tim Simpson tim.simp...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, September 11, 2014 at 12:52 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: eBay SF, mlowery mlow...@ebaysf.com,
McReynolds, Auston amcreyno...@ebay.com
Subject: [openstack-dev] [Trove] Cluster implementation is grabbing instance's
guts

Hi everyone,

I was looking through the clustering code today and noticed a lot of it is 
grabbing what I'd call the guts of the instance models code.

The best example is here: 
https://github.com/openstack/trove/commit/06196fcf67b27f0308381da192da5cc8ae65b157#diff-a4d09d28bd2b650c2327f5d8d81be3a9R89

In the _all_instances_ready function, I would have expected 
trove.instance.models.load_any_instance to be called for each instance ID and 
its status to be checked.

Instead, the service_status is being called directly. That is a big mistake. 
For now it works, but in general it moves the concern of what is an instance 
status? to code outside of the instance class itself.

For an example of why this is bad, look at the method 
_instance_ids_with_failures. The code is checking for failures by seeing if 
the service status is failed. What if the Nova server or Cinder volume has 
tanked instead? The code won't work as expected.

It could be we need to introduce another status besides BUILD to instance 
statuses, or we need to introduce a new internal property to the SimpleInstance 
base class we can check. But whatever we do we should add this extra logic to 
the instance class itself rather than put it in the clustering models code.
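
Roughly the shape I have in mind for the cluster code once the status logic 
lives behind the instance model -- a sketch only, not real Trove code; the 
failure statuses are placeholders for whatever constants we settle on:

# sketch: go through the instance model instead of reading service_statuses;
# load_any_instance is the existing loader mentioned above
from trove.instance import models as inst_models

FAILED_STATUSES = ('ERROR', 'FAILED')


def all_instances_ready(context, instance_ids):
    for instance_id in instance_ids:
        instance = inst_models.load_any_instance(context, instance_id)
        if instance.status in FAILED_STATUSES:
            # real code would raise a proper Trove exception here, so the
            # cluster task can tell "failed" from "not ready yet"
            raise RuntimeError("instance %s failed" % instance_id)
        if instance.status != 'ACTIVE':
            return False
    return True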

This is a minor nitpick but I think we should fix it before too much time 
passes.

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-15 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-15 00:57:05 -0700:
 On 09/12/2014 07:13 PM, Clint Byrum wrote:
  Excerpts from Thierry Carrez's message of 2014-09-12 02:16:42 -0700:
  Clint Byrum wrote:
  Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
  Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
  to optimize Zaqar for delivering messages and supporting different
  messaging patterns.
 
  Awesome! Just please don't expect people to get excited about it for
  the lighter weight queueing workloads that you've claimed as use cases.
 
  I totally see Horizon using it to keep events for users. I see Heat
  using it for stack events as well. I would bet that Trove would benefit
  from being able to communicate messages to users.
 
  But I think in between Zaqar and the backends will likely be a lighter
  weight queue-only service that the users can just subscribe to when they
  don't want an inbox. And I think that lighter weight queue service is
  far more important for OpenStack than the full blown random access
  inbox.
 
  I think the reason such a thing has not appeared is because we were all
  sort of running into but Zaqar is already incubated. Now that we've
  fleshed out the difference, I think those of us that need a lightweight
  multi-tenant queue service should add it to OpenStack.  Separately. I hope
  that doesn't offend you and the rest of the excellent Zaqar developers. It
  is just a different thing.
 
  Should we remove all the semantics that allow people to use Zaqar as a
  queue service? I don't think so either. Again, the semantics are there
  because Zaqar is using them to do its job. Whether other folks may/may
  not use Zaqar as a queue service is out of our control.
 
  This doesn't mean the project is broken.
 
  No, definitely not broken. It just isn't actually necessary for many of
  the stated use cases.
 
  Clint,
 
  If I read you correctly, you're basically saying the Zaqar is overkill
  for a lot of people who only want a multi-tenant queue service. It's
  doing A+B. Why does that prevent people who only need A from using it ?
 
  Is it that it's actually not doing A well, from a user perspective ?
  Like the performance sucks, or it's missing a key primitive ?
 
  Is it that it's unnecessarily complex to deploy, from a deployer
  perspective, and that something only doing A would be simpler, while
  covering most of the use cases?
 
  Is it something else ?
 
  I want to make sure I understand your objection. In the user
  perspective it might make sense to pursue both options as separate
  projects. In the deployer perspective case, having a project doing A+B
  and a project doing A doesn't solve anything. So this affects the
  decision we have to take next Tuesday...
  
  I believe that Zaqar does two things, inbox semantics, and queue
  semantics. I believe the queueing is a side-effect of needing some kind
  of queue to enable users to store and subscribe to messages in the
  inbox.
  
  What I'd rather see is an API for queueing, and an API for inboxes
  which integrates well with the queueing API. For instance, if a user
  says give me an inbox I think Zaqar should return a queue handle for
  sending into the inbox the same way Nova gives you a Neutron port if
  you don't give it one. You might also ask for a queue to receive push
  messages from the inbox. Point being, the queues are not the inbox,
  and the inbox is not the queues.
  
  However, if I just want a queue, just give me a queue. Don't store my
  messages in a randomly addressable space, and don't saddle the deployer
  with the burden of such storage. Put the queue API in front of a scalable
  message queue and give me a nice simple HTTP API. Users would likely be
  thrilled. Heat, Nova, Ceilometer, probably Trove and Sahara, could all
  make use of just this. Only Horizon seems to need a place to keep the
  messages around while users inspect them.
  
  Whether that is two projects, or one, separation between the two API's,
  and thus two very different types of backends, is something I think
  will lead to more deployers wanting to deploy both, so that they can
  bill usage appropriately and so that their users can choose wisely.
 
 This is one of the use-cases we designed flavors for. One of the main
 ideas behind flavors is giving the user the choice of where they want
 their messages to be stored. This certainly requires the deployer to
 have installed stores that are good for each job. For example, based on
 the current existing drivers, a deployer could have configured a
 high-throughput flavor on top of a redis node that has been configured
 to perform for this job. Alongside to this flavor, the deployer could've
 configured a flavor that features durability on top of mongodb or redis.
 
 When the user creates the queue/bucket/inbox/whatever they want to put
 their messages into, they'll be able to choose where those messages
 should be stored 

[openstack-dev] [neutron] Weekly meeting is today at 2100 UTC

2014-09-15 Thread Kyle Mestery
Just a note that per our new rotating meeting schedule, we'll hold the
Neutron meeting today at 2100 UTC in #openstack-meeting. Please feel
free to add items to the meeting agenda [1].

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Jay Faulkner
Steven,

It's important to note that two of the blueprints you reference: 

https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

are both very unlikely to land in Ironic -- these are configuration and 
discovery pieces that best fit inside an operator-deployed CMDB, rather than 
Ironic trying to extend its scope significantly to include these types of 
functions. I expect the scoping of Ironic with regards to hardware 
discovery/interrogation as well as configuration of hardware (like I will 
outline below) to be hot topics in Ironic design summit sessions at Paris.

A good way of looking at it is that Ironic is responsible for hardware *at 
provision time*. Registering the nodes in Ironic, as well as hardware 
settings/maintenance/etc while a workload is provisioned is left to the 
operators' CMDB. 

This means what Ironic *can* do is modify the configuration of a node at 
provision time based on information passed down the provisioning pipeline. For 
instance, if you wanted to configure certain firmware pieces at provision time, 
you could do something like this:

Nova flavor sets capability:vm_hypervisor in the flavor that maps to the Ironic 
node. This would map to an Ironic driver that exposes vm_hypervisor as a 
capability, and upon seeing capability:vm_hypervisor has been requested, could 
then configure the firmware/BIOS of the machine to 'hypervisor friendly' 
settings, such as VT bit on and Turbo mode off. You could map multiple 
different combinations of capabilities as different Ironic flavors, and have 
them all represent different configurations of the same pool of nodes. So, you 
end up with two categories of abilities: inherent abilities of the node (such 
as amount of RAM or CPU installed), and configurable abilities (i.e. things 
that can be turned on/off at provision time on demand) -- or perhaps, in the 
future, even things like RAM and CPU will be dynamically provisioned into nodes 
at provision time. 
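
A rough sketch of what that pairing could look like from the client side 
(purely hypothetical -- the capability name and the 'capabilities:...' 
extra_specs / properties syntax are assumptions, not an agreed interface):

# hypothetical sketch: pair a Nova flavor extra_spec with an Ironic node
# capability; the key/value format here is an assumption
from ironicclient import client as ir_client
from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')
ironic = ir_client.get_client(1, os_username='admin', os_password='secret',
                              os_tenant_name='admin',
                              os_auth_url='http://keystone:5000/v2.0')

# a flavor that requests the hypothetical vm_hypervisor capability
flavor = nova.flavors.create('bm.hypervisor', ram=131072, vcpus=32, disk=40)
flavor.set_keys({'capabilities:vm_hypervisor': 'true'})

# tag a node as able to satisfy it; the driver would react at provision
# time, e.g. by applying the 'hypervisor friendly' BIOS settings above
node_uuid = 'REPLACE-WITH-NODE-UUID'
ironic.node.update(node_uuid, [{'op': 'add',
                                'path': '/properties/capabilities',
                                'value': 'vm_hypervisor:true'}])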

-Jay Faulkner


From: Steven Hardy sha...@redhat.com
Sent: Monday, September 15, 2014 4:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and 
ready state orchestration

All,

Starting this thread as a follow-up to a strongly negative reaction by the
Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
subsequent very detailed justification and discussion of why they may be
useful in this spec[2].

Back in Atlanta, I had some discussions with folks interested in making
ready state[3] preparation of bare-metal resources possible when
deploying bare-metal nodes via TripleO/Heat/Ironic.

The initial assumption is that there is some discovery step (either
automatic or static generation of a manifest of nodes), that can be input
to either Ironic or Heat.

Following discovery, but before an undercloud deploying OpenStack onto the
nodes, there are a few steps which may be desired, to get the hardware into
a state where it's ready and fully optimized for the subsequent deployment:

- Updating and aligning firmware to meet requirements of qualification or
  site policy
- Optimization of BIOS configuration to match workloads the node is
  expected to run
- Management of machine-local storage, e.g configuring local RAID for
  optimal resilience or performance.

Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
of these steps possible, but there's no easy way to either encapsulate the
(currently mostly vendor specific) data associated with each step, or to
coordinate sequencing of the steps.

What is required is some tool to take a text definition of the required
configuration, turn it into a correctly sequenced series of API calls to
Ironic, expose any data associated with those API calls, and declare
success or failure on completion.  This is what Heat does.

So the idea is to create some basic (contrib, disabled by default) Ironic
heat resources, then explore the idea of orchestrating ready-state
configuration via Heat.

Given that Devananda and I have been banging heads over this for some time
now, I'd like to get broader feedback of the idea, my interpretation of
ready state applied to the tripleo undercloud, and any alternative
implementation ideas.

Thanks!

Steve

[1] https://review.openstack.org/#/c/104222/
[2] https://review.openstack.org/#/c/120778/
[3] http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/
[4] https://blueprints.launchpad.net/ironic/+spec/drac-management-driver
[5] https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
[6] https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list

Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-15 Thread Sabari Murugesan
+1

On Sep 15, 2014, at 10:18 AM, Arnaud Legendre wrote:

+1

On Sep 15, 2014, at 9:37 AM, Gary Kotton 
gkot...@vmware.com wrote:

Hi,
I would like to propose Radoslav to be a core team member. Over the course of 
the J cycle he has been great with the reviews, bug fixes and updates to the 
project.
Can the other core team members please update with your votes if you agree or 
not.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-15 Thread Matt Riedemann



On 9/10/2014 11:08 AM, Kyle Mestery wrote:

On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 9/9/2014 4:19 PM, Sean Dague wrote:


As we try to stabilize OpenStack Juno, many server projects need to get
out final client releases that expose new features of their servers.
While this seems like not a big deal, each of these clients releases
ends up having possibly destabilizing impacts on the OpenStack whole (as
the clients do double duty in cross communicating between services).

As such in the release meeting today it was agreed clients should have
their final release by Sept 18th. We'll start applying the dependency
freeze to oslo and clients shortly after that, all other requirements
should be frozen at this point unless there is a high priority bug
around them.

 -Sean



Thanks for bringing this up. We do our own packaging and need time for legal
clearances, so having the final client releases done in a reasonable time
before rc1 is helpful.  I've been pinging a few projects to do a final
client release relatively soon.  python-neutronclient has a release this
week and I think John was planning a python-cinderclient release this week
also.


Just a slight correction: python-neutronclient will have a final
release once the L3 HA CLI changes land [1].

Thanks,
Kyle

[1] https://review.openstack.org/#/c/108378/


--

Thanks,

Matt Riedemann



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



python-cinderclient 1.1.0 was released on Saturday:

https://pypi.python.org/pypi/python-cinderclient/1.1.0

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread James Slagle
On Mon, Sep 15, 2014 at 12:59 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from James Slagle's message of 2014-09-15 08:15:21 -0700:
 On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy sha...@redhat.com wrote:
  Following discovery, but before an undercloud deploying OpenStack onto the
  nodes, there are a few steps which may be desired, to get the hardware into
  a state where it's ready and fully optimized for the subsequent deployment:
 
  - Updating and aligning firmware to meet requirements of qualification or
site policy
  - Optimization of BIOS configuration to match workloads the node is
expected to run
  - Management of machine-local storage, e.g configuring local RAID for
optimal resilience or performance.
 
  Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
  of these steps possible, but there's no easy way to either encapsulate the
  (currently mostly vendor specific) data associated with each step, or to
  coordinate sequencing of the steps.
 
  What is required is some tool to take a text definition of the required
  configuration, turn it into a correctly sequenced series of API calls to
  Ironic, expose any data associated with those API calls, and declare
  success or failure on completion.  This is what Heat does.
 
  So the idea is to create some basic (contrib, disabled by default) Ironic
  heat resources, then explore the idea of orchestrating ready-state
  configuration via Heat.
 
  Given that Devananda and I have been banging heads over this for some time
  now, I'd like to get broader feedback of the idea, my interpretation of
  ready state applied to the tripleo undercloud, and any alternative
  implementation ideas.

 My opinion is that if the features are in Ironic, they should be
 exposed via Heat resources for orchestration. If the TripleO case is
 too much of a one-off (which I don't really think it is), then sure,
 keep it all in contrib so that no one gets confused about why the
 resources are there.


 And I think if this is a common thing that Ironic users need to do,
 then Ironic should do it, not Heat.

I would think Heat would be well suited for the case where you want to
orchestrate a workflow on top of existing Ironic API's for managing
the infrastructure lifecycle, of which attaining ready state is one
such use case.

It's a fair point that if these things are common enough to all users
that they should just be done in Ironic. To what extent such an API in
Ironic would just end up orchestrating other Ironic API's the same way
Heat might do it would be hard to tell. It seems like that's the added
complexity in my view vs. a set of simple Heat resources and taking
advantage of all the orchestration that Heat already offers.

I know this use case isn't just about enrolling nodes (apologies if I
implied that in my earlier response). That was just one such use that
jumped out at me in which it might be nice to use Heat. I think about
how os-cloud-config registers nodes today. It has to create the node,
then create the port (2 separate calls). And, it also needs the
ability to update registered nodes[1]. This logic is going to end up
living in os-cloud-config.
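
For illustration, that registration boils down to something like this per 
node (a sketch with a made-up manifest format and driver_info keys, not the 
actual os-cloud-config code):

# sketch: the two-call node + port registration described above, driven
# from a YAML manifest; the manifest keys are invented for the example
import yaml
from ironicclient import client

ironic = client.get_client(1, os_username='admin', os_password='secret',
                           os_tenant_name='admin',
                           os_auth_url='http://localhost:5000/v2.0')

with open('nodes.yaml') as f:
    nodes = yaml.safe_load(f)

for n in nodes:
    # call 1: create the node itself
    node = ironic.node.create(
        driver='pxe_ipmitool',
        driver_info={'ipmi_address': n['pm_addr'],
                     'ipmi_username': n['pm_user'],
                     'ipmi_password': n['pm_password']},
        properties={'cpus': n['cpu'], 'memory_mb': n['memory'],
                    'local_gb': n['disk'], 'cpu_arch': n['arch']})
    # call 2: create a port for each MAC so the node can be found later
    for mac in n['macs']:
        ironic.port.create(node_uuid=node.uuid, address=mac)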

And perhaps the answer is no, but it seems to me Heat could do this
sort of thing easier already if it had the resources defined to do so.
It'd be neat to have a yaml file of all your defined nodes, and use
stack-create to register them in Ironic. When you need to add some new
ones, update the yaml, and then stack-update.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/043782.html
(thread crossed into Sept)

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Jim Rollenhagen
On Mon, Sep 15, 2014 at 12:44:24PM +0100, Steven Hardy wrote:
 All,
 
 Starting this thread as a follow-up to a strongly negative reaction by the
 Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
 subsequent very detailed justification and discussion of why they may be
 useful in this spec[2].
 
  Back in Atlanta, I had some discussions with folks interested in making
 ready state[3] preparation of bare-metal resources possible when
 deploying bare-metal nodes via TripleO/Heat/Ironic.
 
 The initial assumption is that there is some discovery step (either
 automatic or static generation of a manifest of nodes), that can be input
 to either Ironic or Heat.

We've discussed this a *lot* within Ironic, and have decided that
auto-discovery (with registration) is out of scope for Ironic. In my
opinion, this is straightforward enough for operators to write small
scripts to take a CSV/JSON/whatever file and register the nodes in that
file with Ironic. This is what we've done at Rackspace, and it's really
not that annoying; the hard part is dealing with incorrect data from
the (vendor|DC team|whatever).

That said, I like the thought of Ironic having a bulk-registration
feature with some sort of specified format (I imagine this would just be
a simple JSON list of node objects).

We are likely doing a session on discovery in general in Paris. It seems
like the main topic will be about how to interface with external
inventory management systems to coordinate node discovery. Maybe Heat is
a valid tool to integrate with here, maybe not.

 Following discovery, but before an undercloud deploying OpenStack onto the
 nodes, there are a few steps which may be desired, to get the hardware into
 a state where it's ready and fully optimized for the subsequent deployment:

These pieces are mostly being done downstream, and (IMO) in scope for
Ironic in the Kilo cycle. More below.

 - Updating and aligning firmware to meet requirements of qualification or
   site policy

Rackspace does this today as part of what we call decommissioning.
There are patches up for review for both ironic-python-agent (IPA) [1] and
Ironic [2] itself. We have support for 1) flashing a BIOS on a node, and
2) Writing a set of BIOS settings to a node (these are embedded in the agent
image as a set, not through an Ironic API). These are both implemented as
a hardware manager plugin, and so can easily be vendor-specific.

I expect this to land upstream in the Kilo release.

 - Optimization of BIOS configuration to match workloads the node is
   expected to run

The Ironic team has also discussed this, mostly at the last mid-cycle
meetup. We'll likely have a session on capabilities, which we think
might be the best way to handle this case. Essentially, a node can be
tagged with arbitrary capabilities, e.g. hypervisor, which Nova
(flavors?) could use for scheduling, and Ironic drivers could use to do
per-provisioning work, like setting BIOS settings. This may even tie in
with the next point.

Looks like Jay just ninja'd me a bit on this point. :)

 - Management of machine-local storage, e.g configuring local RAID for
   optimal resilience or performance.

I don't see why Ironic couldn't do something with this in Kilo. It's
dangerously close to the inventory management line, however I think
it's reasonable for a user to specify that his or her root partition
should be on a RAID or a specific disk out of many in the node.

 Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
 of these steps possible, but there's no easy way to either encapsulate the
 (currently mostly vendor specific) data associated with each step, or to
 coordinate sequencing of the steps.

It's important to remember that just because a blueprint/spec exists,
does not mean it will be approved. :) I don't expect the DRAC
discovery blueprint to go through, and the DRAC RAID blueprint is
questionable, with regards to scope.

 What is required is some tool to take a text definition of the required
 configuration, turn it into a correctly sequenced series of API calls to
 Ironic, expose any data associated with those API calls, and declare
 success or failure on completion.  This is what Heat does.

This is a fair point, however none of these use cases have code landed
in mainline Ironic, and certainly don't have APIs exposed, with the
exception of node registration. Is it useful to start writing plumbing to
talk to APIs that don't exist?

All that said, I don't think it's unreasonable for Heat to talk directly
to Ironic, but only if there's a valid use case that Ironic can't (or
won't) provide a solution for.

// jim

[1] https://review.openstack.org/104379
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:decom-nodes,n,z

 So the idea is to create some basic (contrib, disabled by default) Ironic
 heat resources, then explore the idea of orchestrating ready-state
 configuration via Heat.
 
 Given that 

[openstack-dev] How to inject files inside VM using Heat [heat]

2014-09-15 Thread pratik maru
Hi All,

I am trying to inject a file from outside into a guest using Heat. What
Heat properties can I use for the same?

If I am not wrong, there is an option in nova boot (--file) to do the same;
do we have an equivalent option in Heat also?
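
An untested sketch using the personality property of OS::Nova::Server, which
looks like the closest equivalent (the image/flavor names and the
endpoint/token values below are placeholders):

# sketch: inject /tmp/injected.txt into the guest at boot via the
# OS::Nova::Server 'personality' property, driven with python-heatclient
import yaml
from heatclient import client as heat_client

template = yaml.safe_load('''
heat_template_version: '2013-05-23'
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
      personality:
        /tmp/injected.txt: |
          hello from outside
''')

heat = heat_client.Client('1',
                          endpoint='http://heat-api:8004/v1/TENANT_ID',
                          token='AUTH_TOKEN')
heat.stacks.create(stack_name='file-inject-demo', template=template)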

Thanks in advance.

Regards
Fipuzzles
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Cancel the IRC meeting this week?

2014-09-15 Thread Dugger, Donald D
I'd like to propose that we defer the meeting this week and reconvene next 
Tues, 9/23.  I don't think there's much new to talk about right now and we're 
waiting for a write up on the claims process.  I'd like to get that write up 
when it's ready, have everyone review it and then we can talk more concretely 
on what our next steps should be.

If anyone does have a topic they want to talk about let me know but, failing 
anything new, let's wait a week.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-15 Thread Eric Blake
On 09/15/2014 09:29 AM, Radomir Dopieralski wrote:
 On 12/09/14 17:11, Doug Hellmann wrote:
 
 I also use git-hooks with a post-checkout script to remove pyc files any 
 time I change between branches, which is especially helpful if the different 
 branches have code being moved around:

 git-hooks: https://github.com/icefox/git-hooks

 The script:

 $ cat ~/.git_hooks/post-checkout/remove_pyc
 #!/bin/sh
 echo Removing pyc files from `pwd`
 find . -name '*.pyc' | xargs rm -f
 exit 0
 
 Good thing that python modules can't have spaces in their names! But for
 the future, find has a -delete parameter that won't break horribly on
 strange filenames.
 
 find . -name '*.pyc' -delete

GNU find has that as an extension, but POSIX does not guarantee it, and
BSD find lacks it.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Dmitry Tantsur
On Mon, 2014-09-15 at 11:04 -0700, Jim Rollenhagen wrote:
 On Mon, Sep 15, 2014 at 12:44:24PM +0100, Steven Hardy wrote:
  All,
  
  Starting this thread as a follow-up to a strongly negative reaction by the
  Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
  subsequent very detailed justification and discussion of why they may be
  useful in this spec[2].
  
  Back in Atlanta, I had some discussions with folks interested in making
  ready state[3] preparation of bare-metal resources possible when
  deploying bare-metal nodes via TripleO/Heat/Ironic.
  
  The initial assumption is that there is some discovery step (either
  automatic or static generation of a manifest of nodes), that can be input
  to either Ironic or Heat.
 
 We've discussed this a *lot* within Ironic, and have decided that
 auto-discovery (with registration) is out of scope for Ironic. 
Even if there is such an agreement, it's the first time I hear about it.
All previous discussions _I'm aware of_ (e.g. midcycle) ended up with
we can discover only things that are required for scheduling. When did
it change?

 In my
 opinion, this is straightforward enough for operators to write small
 scripts to take a CSV/JSON/whatever file and register the nodes in that
 file with Ironic. This is what we've done at Rackspace, and it's really
 not that annoying; the hard part is dealing with incorrect data from
 the (vendor|DC team|whatever).
Provided this CSV contains all the required data, not only IPMI
credentials, which IIRC is often the case.
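
For what it's worth, such a registration script can stay very small as long as
the CSV really does carry everything -- a sketch, assuming columns of
ipmi_address,ipmi_username,ipmi_password,mac and the ironic CLI of the time:

#!/bin/sh
# Bulk-register nodes and their NICs from nodes.csv
# (the column layout above is an assumption for illustration).
while IFS=, read -r ADDR USER PASS MAC; do
    UUID=$(ironic node-create -d pxe_ipmitool \
        -i ipmi_address="$ADDR" \
        -i ipmi_username="$USER" \
        -i ipmi_password="$PASS" | awk '/ uuid /{print $4}')
    ironic port-create -n "$UUID" -a "$MAC"
done < nodes.csv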

 
 That said, I like the thought of Ironic having a bulk-registration
 feature with some sort of specified format (I imagine this would just be
 a simple JSON list of node objects).
 
 We are likely doing a session on discovery in general in Paris. It seems
 like the main topic will be about how to interface with external
 inventory management systems to coordinate node discovery. Maybe Heat is
 a valid tool to integrate with here, maybe not.
 
  Following discovery, but before an undercloud deploying OpenStack onto the
  nodes, there are a few steps which may be desired, to get the hardware into
  a state where it's ready and fully optimized for the subsequent deployment:
 
 These pieces are mostly being done downstream, and (IMO) in scope for
 Ironic in the Kilo cycle. More below.
 
  - Updating and aligning firmware to meet requirements of qualification or
site policy
 
 Rackspace does this today as part of what we call decommissioning.
 There are patches up for review for both ironic-python-agent (IPA) [1] and
 Ironic [2] itself. We have support for 1) flashing a BIOS on a node, and
 2) Writing a set of BIOS settings to a node (these are embedded in the agent
 image as a set, not through an Ironic API). These are both implemented as
 a hardware manager plugin, and so can easily be vendor-specific.
 
 I expect this to land upstream in the Kilo release.
 
  - Optimization of BIOS configuration to match workloads the node is
expected to run
 
 The Ironic team has also discussed this, mostly at the last mid-cycle
 meetup. We'll likely have a session on capabilities, which we think
 might be the best way to handle this case. Essentially, a node can be
 tagged with arbitrary capabilities, e.g. hypervisor, which Nova
 (flavors?) could use for scheduling, and Ironic drivers could use to do
 per-provisioning work, like setting BIOS settings. This may even tie in
 with the next point.
 
 Looks like Jay just ninja'd me a bit on this point. :)
 
  - Management of machine-local storage, e.g configuring local RAID for
optimal resilience or performance.
 
 I don't see why Ironic couldn't do something with this in Kilo. It's
 dangerously close to the inventory management line; however, I think
 it's reasonable for a user to specify that his or her root partition
 should be on a RAID or a specific disk out of many in the node.
 
  Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
  of these steps possible, but there's no easy way to either encapsulate the
  (currently mostly vendor specific) data associated with each step, or to
  coordinate sequencing of the steps.
 
 It's important to remember that just because a blueprint/spec exists,
 does not mean it will be approved. :) I don't expect the DRAC
 discovery blueprint to go through, and the DRAC RAID blueprint is
 questionable, with regards to scope.
 
  What is required is some tool to take a text definition of the required
  configuration, turn it into a correctly sequenced series of API calls to
  Ironic, expose any data associated with those API calls, and declare
  success or failure on completion.  This is what Heat does.
 
 This is a fair point, however none of these use cases have code landed
 in mainline Ironic, and certainly don't have APIs exposed, with the
 exception of node registration. Is it useful to start writing plumbing to
 talk to APIs that don't exist?
 
 All that said, I don't think it's 

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Zane Bitter

On 15/09/14 12:00, Steven Hardy wrote:

For example, today, I've been looking at the steps required for driving
autodiscovery:

https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno

Driving this process looks a lot like application orchestration:

1. Take some input (IPMI credentials and MAC addresses)
2. Maybe build an image and ramdisk(could drop credentials in)
3. Interact with the Ironic API to register nodes in maintenance mode
4. Boot the nodes, monitor state, wait for a signal back containing some
data obtained during discovery (same as WaitConditions or
SoftwareDeployment resources in Heat..)
5. Shutdown the nodes and mark them ready for use by nova

At some point near the end of this sequence, you could optionally insert
the ready state workflow described in the spec.

So I guess my question then becomes, regardless of ready state, what is
expected to drive the steps above if it's not Heat?


Maybe you're not explaining it the way you intended to, because I was 
more or less on board until I read this, which doesn't sound like Heat's 
domain at all. It sounds like a workflow, of the kind that could in 
future be handled by something like Mistral.


Only step 3 sounds like a job for Heat (i.e. involves a declarative 
representation of some underlying resources). I think it certainly makes 
sense to use Heat for that if it can have some meaningful input into the 
lifecycle (i.e. you can usefully update to add and remove nodes, and 
unregister them all on delete). If not then it's hard to see what value 
Heat adds over and above a for-loop.
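
Purely to make the step-3 case concrete: if the contrib resources land,
registration might look something like the sketch below. The resource type and
property names are illustrative guesses, not the actual API from the patches
under review, and all values are placeholders.

cat > register-nodes.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  node_0:
    type: OS::Ironic::Node          # hypothetical contrib resource type
    properties:
      driver: pxe_ipmitool
      driver_info:
        ipmi_address: 10.0.0.10     # placeholder values
        ipmi_username: admin
        ipmi_password: secret
EOF
heat stack-create ironic-nodes -f register-nodes.yaml

A stack update could then, in principle, add or remove node_N resources, and a
stack delete would unregister them -- which is exactly the lifecycle value
being debated here.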


For the record, IMO the fact that the Ironic API is operator-facing 
should *not* be an obstacle here. We have an established policy for how 
to handle such APIs - write the plugins, maintain them in /contrib - and 
this proposal is entirely consistent with that.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
On Mon, Sep 15, 2014 at 11:16 AM, Jay Pipes jaypi...@gmail.com wrote:

 I believe I did:

 http://lists.openstack.org/pipermail/openstack-dev/2014-
 September/045924.html


Sorry, missed your explanation. I think Sean's suggestion -- to keep ID
fields restricted, but de-restrict name fields -- walks a nice middle
ground between database bloat/performance concerns and user experience.


  what

 stands between me and a +2?


 Bug fix priorities, feature freeze exceptions, and review load.
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Well, sure. I meant other than that. :)

My review is at https://review.openstack.org/#/c/119421/ if anyone does
find time to +N it. Thanks all!

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-09-15 10:10:05 -0700:
 On Mon, Sep 15, 2014 at 09:50:24AM -0700, Clint Byrum wrote:
  Excerpts from Steven Hardy's message of 2014-09-15 04:44:24 -0700:
   All,
   
   Starting this thread as a follow-up to a strongly negative reaction by the
   Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
   subsequent very detailed justification and discussion of why they may be
   useful in this spec[2].
   
   Back in Atlanta, I had some discussions with folks interested in making
   ready state[3] preparation of bare-metal resources possible when
   deploying bare-metal nodes via TripleO/Heat/Ironic.
   
   The initial assumption is that there is some discovery step (either
   automatic or static generation of a manifest of nodes), that can be input
   to either Ironic or Heat.
   
   Following discovery, but before an undercloud deploying OpenStack onto the
   nodes, there are a few steps which may be desired, to get the hardware 
   into
   a state where it's ready and fully optimized for the subsequent 
   deployment:
   
   - Updating and aligning firmware to meet requirements of qualification or
 site policy
   - Optimization of BIOS configuration to match workloads the node is
 expected to run
   - Management of machine-local storage, e.g configuring local RAID for
 optimal resilience or performance.
   
   Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
   of these steps possible, but there's no easy way to either encapsulate the
   (currently mostly vendor specific) data associated with each step, or to
   coordinate sequencing of the steps.
   
  
  First, Ironic is hidden under Nova as far as TripleO is concerned. So
  mucking with the servers underneath Nova during deployment is a difficult
  proposition. Would I look up the Ironic node ID of the nova server,
  and then optimize it for the workload after the workload arrived? Why
  wouldn't I just do that optimization before the deployment?
 
 That's exactly what I'm proposing - a series of preparatory steps performed
 before the node is visible to nova, before the deployment.
 

Ok good, so I didn't misunderstand. I'm having trouble seeing where Heat
is a good fit there.

 The whole point is that Ironic is hidden under nova, and provides no way to
 perform these pre-deploy steps via interaction with nova.
 
  
   What is required is some tool to take a text definition of the required
   configuration, turn it into a correctly sequenced series of API calls to
   Ironic, expose any data associated with those API calls, and declare
   success or failure on completion.  This is what Heat does.
   
  
  I'd rather see Ironic define or adopt a narrow scope document format
  that it can consume for bulk loading. Heat is extremely generic, and thus
  carries a ton of complexity for what is probably doable with a CSV file.
 
 Perhaps you can read the spec - it's not really about the bulk-load part,
 it's about orchestrating the steps to prepare the node, after it's
 registered with Ironic, but before it's ready to have the stuff deployed to
 it.
 

Sounds like workflow to me. :-P

 What tool do you think will just do that optimization before the
 deployment? (snark not intended, I genuinely want to know, is it scripts
 in TripleO, some sysadmin pre-deploy steps, magic in Ironic?)


If it can all be done by calls to the ironic client with the node ID and
parameters from the user, I'd suggest that this is a simple workflow
and can be done in the step prior to 'heat stack-create'. I don't see
any reason to keep a bunch of records around in Heat to describe what
happened, identically, for Ironic nodes. It is an ephemeral step in the
evolution of the system, not something we need to edit on a regular basis.

My new bar for whether something is a good fit for Heat is what happens
to my workload when I update it. If I go into my Ironic pre-registration
stack and change things around, the likely case is that my box reboots
to re-apply BIOS updates with the new parameters. And there is a missing
dependency expression when using the orchestration tool to do the
workflow job. It may actually be necessary to always do these things to
the hardware in a certain sequence. But editing the Heat template and
updating has no way to express that.

To contrast this with developing it in a workflow control language
(like bash), it is imperative so I am consciously deciding to re-apply
those things by running it. If I only want to do one step, I just do
the one step.
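
To make that contrast concrete, the imperative pass over an already-registered
node could be as small as the sketch below; values are placeholders, the
vendor-specific steps are deliberately left as comments, and exact client
syntax may vary by version:

#!/bin/bash
NODE="$1"   # Ironic node UUID passed on the command line
ironic node-set-maintenance "$NODE" on      # or 'true', depending on client version
# ... flash firmware / write BIOS settings / configure RAID here (vendor tooling) ...
ironic node-set-power-state "$NODE" reboot  # pick up new firmware/BIOS settings
ironic node-set-maintenance "$NODE" off     # hand the node back for scheduling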

Basically, the imperative model is rigid, sharp, and pointy, but the
declarative model is soft and malleable, and full of unexpected sharp
pointy things. I think users are more comfortable with knowing where
the sharp and pointy things are, than stumbling on them.

   So the idea is to create some basic (contrib, disabled by default) Ironic
   heat resources, then explore the idea of orchestrating ready-state
   

Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-15 Thread Zane Bitter

On 15/09/14 13:28, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-15 00:57:05 -0700:

On 09/12/2014 07:13 PM, Clint Byrum wrote:

Excerpts from Thierry Carrez's message of 2014-09-12 02:16:42 -0700:

Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:

Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
to optimize Zaqar for delivering messages and supporting different
messaging patterns.


Awesome! Just please don't expect people to get excited about it for
the lighter weight queueing workloads that you've claimed as use cases.

I totally see Horizon using it to keep events for users. I see Heat
using it for stack events as well. I would bet that Trove would benefit
from being able to communicate messages to users.

But I think in between Zaqar and the backends will likely be a lighter
weight queue-only service that the users can just subscribe to when they
don't want an inbox. And I think that lighter weight queue service is
far more important for OpenStack than the full blown random access
inbox.

I think the reason such a thing has not appeared is because we were all
sort of running into but Zaqar is already incubated. Now that we've
fleshed out the difference, I think those of us that need a lightweight
multi-tenant queue service should add it to OpenStack.  Separately. I hope
that doesn't offend you and the rest of the excellent Zaqar developers. It
is just a different thing.


Should we remove all the semantics that allow people to use Zaqar as a
queue service? I don't think so either. Again, the semantics are there
because Zaqar is using them to do its job. Whether other folks may/may
not use Zaqar as a queue service is out of our control.

This doesn't mean the project is broken.


No, definitely not broken. It just isn't actually necessary for many of
the stated use cases.


Clint,

If I read you correctly, you're basically saying that Zaqar is overkill
for a lot of people who only want a multi-tenant queue service. It's
doing A+B. Why does that prevent people who only need A from using it ?

Is it that it's actually not doing A well, from a user perspective ?
Like the performance sucks, or it's missing a key primitive ?

Is it that it's unnecessarily complex to deploy, from a deployer
perspective, and that something only doing A would be simpler, while
covering most of the use cases?

Is it something else ?

I want to make sure I understand your objection. In the user
perspective it might make sense to pursue both options as separate
projects. In the deployer perspective case, having a project doing A+B
and a project doing A doesn't solve anything. So this affects the
decision we have to take next Tuesday...


I believe that Zaqar does two things, inbox semantics, and queue
semantics. I believe the queueing is a side-effect of needing some kind
of queue to enable users to store and subscribe to messages in the
inbox.

What I'd rather see is an API for queueing, and an API for inboxes
which integrates well with the queueing API. For instance, if a user
says give me an inbox I think Zaqar should return a queue handle for
sending into the inbox the same way Nova gives you a Neutron port if
you don't give it one. You might also ask for a queue to receive push
messages from the inbox. Point being, the queues are not the inbox,
and the inbox is not the queues.

However, if I just want a queue, just give me a queue. Don't store my
messages in a randomly addressable space, and don't saddle the deployer
with the burden of such storage. Put the queue API in front of a scalable
message queue and give me a nice simple HTTP API. Users would likely be
thrilled. Heat, Nova, Ceilometer, probably Trove and Sahara, could all
make use of just this. Only Horizon seems to need a place to keep the
messages around while users inspect them.

Whether that is two projects, or one, separation between the two API's,
and thus two very different types of backends, is something I think
will lead to more deployers wanting to deploy both, so that they can
bill usage appropriately and so that their users can choose wisely.


This is one of the use-cases we designed flavors for. One of the main
ideas behind flavors is giving the user the choice of where they want
their messages to be stored. This certainly requires the deployer to
have installed stores that are good for each job. For example, based on
the current existing drivers, a deployer could have configured a
high-throughput flavor on top of a redis node that has been configured
to perform for this job. Alongside to this flavor, the deployer could've
configured a flavor that features durability on top of mongodb or redis.

When the user creates the queue/bucket/inbox/whatever they want to put
their messages into, they'll be able to choose where those messages
should be stored into based on their needs.

I do understand your objection is not against Zaqar being able to do
this now or not but 

Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-15 Thread Chris St. Pierre
Linking clearly isn't my strong suit:
https://review.openstack.org/#/c/119741/

On Mon, Sep 15, 2014 at 1:58 PM, Chris St. Pierre stpie...@metacloud.com
wrote:

 On Mon, Sep 15, 2014 at 11:16 AM, Jay Pipes jaypi...@gmail.com wrote:

 I believe I did:

 http://lists.openstack.org/pipermail/openstack-dev/2014-
 September/045924.html


 Sorry, missed your explanation. I think Sean's suggestion -- to keep ID
 fields restricted, but de-restrict name fields -- walks a nice middle
 ground between database bloat/performance concerns and user experience.


  what

 stands between me and a +2?


 Bug fix priorities, feature freeze exceptions, and review load.
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Well, sure. I meant other than that. :)

 My review is at https://review.openstack.org/#/c/119421/ if anyone does
 find time to +N it. Thanks all!

 --
 Chris St. Pierre
 Senior Software Engineer
 metacloud.com




-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-15 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-09-15 12:05:09 -0700:
 On 15/09/14 13:28, Clint Byrum wrote:
  Excerpts from Flavio Percoco's message of 2014-09-15 00:57:05 -0700:
  On 09/12/2014 07:13 PM, Clint Byrum wrote:
  Excerpts from Thierry Carrez's message of 2014-09-12 02:16:42 -0700:
  Clint Byrum wrote:
  Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
  Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal 
  is
  to optimize Zaqar for delivering messages and supporting different
  messaging patterns.
 
  Awesome! Just please don't expect people to get excited about it for
  the lighter weight queueing workloads that you've claimed as use cases.
 
  I totally see Horizon using it to keep events for users. I see Heat
  using it for stack events as well. I would bet that Trove would benefit
  from being able to communicate messages to users.
 
  But I think in between Zaqar and the backends will likely be a lighter
  weight queue-only service that the users can just subscribe to when they
  don't want an inbox. And I think that lighter weight queue service is
  far more important for OpenStack than the full blown random access
  inbox.
 
  I think the reason such a thing has not appeared is because we were all
  sort of running into but Zaqar is already incubated. Now that we've
  fleshed out the difference, I think those of us that need a lightweight
  multi-tenant queue service should add it to OpenStack.  Separately. I 
  hope
  that doesn't offend you and the rest of the excellent Zaqar developers. 
  It
  is just a different thing.
 
  Should we remove all the semantics that allow people to use Zaqar as a
  queue service? I don't think so either. Again, the semantics are there
  because Zaqar is using them to do its job. Whether other folks may/may
  not use Zaqar as a queue service is out of our control.
 
  This doesn't mean the project is broken.
 
  No, definitely not broken. It just isn't actually necessary for many of
  the stated use cases.
 
  Clint,
 
  If I read you correctly, you're basically saying that Zaqar is overkill
  for a lot of people who only want a multi-tenant queue service. It's
  doing A+B. Why does that prevent people who only need A from using it ?
 
  Is it that it's actually not doing A well, from a user perspective ?
  Like the performance sucks, or it's missing a key primitive ?
 
  Is it that it's unnecessarily complex to deploy, from a deployer
  perspective, and that something only doing A would be simpler, while
  covering most of the use cases?
 
  Is it something else ?
 
  I want to make sure I understand your objection. In the user
  perspective it might make sense to pursue both options as separate
  projects. In the deployer perspective case, having a project doing A+B
  and a project doing A doesn't solve anything. So this affects the
  decision we have to take next Tuesday...
 
  I believe that Zaqar does two things, inbox semantics, and queue
  semantics. I believe the queueing is a side-effect of needing some kind
  of queue to enable users to store and subscribe to messages in the
  inbox.
 
  What I'd rather see is an API for queueing, and an API for inboxes
  which integrates well with the queueing API. For instance, if a user
  says give me an inbox I think Zaqar should return a queue handle for
  sending into the inbox the same way Nova gives you a Neutron port if
  you don't give it one. You might also ask for a queue to receive push
  messages from the inbox. Point being, the queues are not the inbox,
  and the inbox is not the queues.
 
  However, if I just want a queue, just give me a queue. Don't store my
  messages in a randomly addressable space, and don't saddle the deployer
  with the burden of such storage. Put the queue API in front of a scalable
  message queue and give me a nice simple HTTP API. Users would likely be
  thrilled. Heat, Nova, Ceilometer, probably Trove and Sahara, could all
  make use of just this. Only Horizon seems to need a place to keep the
  messages around while users inspect them.
 
  Whether that is two projects, or one, separation between the two API's,
  and thus two very different types of backends, is something I think
  will lead to more deployers wanting to deploy both, so that they can
  bill usage appropriately and so that their users can choose wisely.
 
  This is one of the use-cases we designed flavors for. One of the main
  ideas behind flavors is giving the user the choice of where they want
  their messages to be stored. This certainly requires the deployer to
  have installed stores that are good for each job. For example, based on
  the current existing drivers, a deployer could have configured a
  high-throughput flavor on top of a redis node that has been configured
  to perform for this job. Alongside to this flavor, the deployer could've
  configured a flavor that features durability on top of mongodb or redis.
 
  When the user creates the 

Re: [openstack-dev] How to inject files inside VM using Heat [heat]

2014-09-15 Thread Denis Makogon
On Mon, Sep 15, 2014 at 9:06 PM, pratik maru fipuzz...@gmail.com wrote:

 Hi All,

 I am trying to inject a file from outside into a guest using heat, what
 heat properties can i use for the same ?


You might take a look at
https://github.com/openstack/heat-templates/blob/7ec1eb98707dc759c699ad59d46e098e6c06e42c/cfn/F17/PuppetMaster_Single_Instance.template#L80-L156

Also you are able to parametrize file content.


 If I am not wrong, there is an option in nova boot --file to do the
 same, do we have an equivalent option in heat also ?


Correct.
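
A minimal sketch of the Heat-side equivalent, assuming the OS::Nova::Server
personality property (which mirrors nova boot --file); the image and flavor
names are placeholders:

cat > inject-file.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros                 # placeholder image/flavor
      flavor: m1.small
      personality:
        /tmp/injected.txt: |
          Hello from Heat
EOF
heat stack-create file-inject-demo -f inject-file.yaml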


 Thanks in advance.

 Regards
 Fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Steven Hardy
On Mon, Sep 15, 2014 at 05:51:43PM +, Jay Faulkner wrote:
 Steven,
 
 It's important to note that two of the blueprints you reference: 
 
 https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
 https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery
 
 are both very unlikely to land in Ironic -- these are configuration and 
  discovery pieces that best fit inside an operator-deployed CMDB, rather than 
  Ironic trying to extend its scope significantly to include these types of 
  functions. I expect the scoping of Ironic with regards to hardware 
 discovery/interrogation as well as configuration of hardware (like I will 
 outline below) to be hot topics in Ironic design summit sessions at Paris.

Hmm, okay - not sure I really get how a CMDB is going to help you configure
your RAID arrays in an automated way?

Or are you subscribing to the legacy datacentre model where a sysadmin
configures a bunch of boxes via whatever method, puts their details into
the CMDB, then feeds those details into Ironic?

 A good way of looking at it is that Ironic is responsible for hardware *at 
 provision time*. Registering the nodes in Ironic, as well as hardware 
 settings/maintenance/etc while a workload is provisioned is left to the 
 operators' CMDB. 
 
 This means what Ironic *can* do is modify the configuration of a node at 
 provision time based on information passed down the provisioning pipeline. 
 For instance, if you wanted to configure certain firmware pieces at provision 
 time, you could do something like this:
 
 Nova flavor sets capability:vm_hypervisor in the flavor that maps to the 
 Ironic node. This would map to an Ironic driver that exposes vm_hypervisor as 
 a capability, and upon seeing capability:vm_hypervisor has been requested, 
 could then configure the firmware/BIOS of the machine to 'hypervisor 
 friendly' settings, such as VT bit on and Turbo mode off. You could map 
 multiple different combinations of capabilities as different Ironic flavors, 
 and have them all represent different configurations of the same pool of 
 nodes. So, you end up with two categories of abilities: inherent abilities of 
 the node (such as amount of RAM or CPU installed), and configurable abilities 
 (i.e. things than can be turned on/off at provision time on demand) -- or 
 perhaps, in the future, even things like RAM and CPU will be dynamically 
 provisioned into nodes at provision time.

So you advocate pushing all the vendor-specific stuff down into various
Ironic drivers, interesting - is any of what you describe above possible
today?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-15 Thread Stephen Balukoff
Hi Brandon!

My responses in-line:

On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 IN IRC the topic came up about supporting many-to-many load balancers to
 amphorae.  I believe a consensus was made that allowing only one-to-many
 load balancers to amphorae would be the first step forward, and
 re-evaluate later, since colocation and apolocation will need to work
 (which brings up another topic, defining what it actually means to be
 colocated: On the same amphorae, on the same amphorae host, on the same
 cell/cluster, on the same data center/availability zone. That should be
 something we discuss later, but not right now).

 I am fine with that decisions, but Doug brought up a good point that
 this could very well just be a decision for the controller driver and
 Octavia shouldn't mandate this for all drivers.  So I think we need to
 clearly define what decisions are the responsibility of the controller
 driver versus what decisions are mandated by Octavia's construct.


In my mind, the only thing dictated by the controller to the driver here
would be things related to colocation / apolocation. So in order to fully
have that discussion here, we first need to have a conversation about what
these things actually mean in the context of Octavia and/or get specific
requirements from operators here.  The reference driver (ie. haproxy
amphora) will of course have to follow a given behavior here as well, and
there's the possibility that even if we don't dictate behavior in one way
or another, operators and users may come to expect the behavior of the
reference driver here to become the defacto requirements.



 Items I can come up with off the top of my head:

 1) LB:Amphora - M:N vs 1:N


My opinion:  For simplicity, first revision should be 1:N, but leave open
the possibility of M:N at a later date, depending on what people require.
That is to say, we'll only do 1:N at first so we can have simpler
scheduling algorithms for now, but let's not paint ourselves into a corner
in other portions of the code by assuming there will only ever be one LB on
an amphora.


 2) VIPs:LB - M:N vs 1:N


So, I would revise that to be N:1 or 1:1. I don't think we'll ever want to
support a case where multiple LBs share the same VIP. (Multiple amphorae
per VIP, yes... but not multiple LBs per VIP. LBs are logical constructs
that also provide for good separation of concerns, particularly around
security.)

The most solid use case for N:1 that I've heard is the IPv6 use case, where
a user wants to expose the exact same services over IPv4 and IPv6, and
therefore it makes sense to be able to have multiple VIPs per load
balancer. (In fact, I'm not aware of other use cases here that hold any
water.) Having said this, we're quite a ways from IPv6 being ready for use
in the underlying networking infrastructure.  So...  again, I would say
let's go with 1:1 for now to make things simple for scheduling, but not
paint ourselves into a corner here architecturally in other areas of the
code by assuming there will only ever be one VIP per LB.

3) Pool:HMs - 1:N vs 1:1


Does anyone have a solid use case for having more than one health monitor
per pool?  (And how do you resolve conflicts in health monitor check
results?)  I can't think of one, so 1:1 has my vote here.




 I'm sure there are others.  I'm sure each one will need to be evaluated
 on a case-by-case basis.  We will be walking a fine line between
 flexibility and complexity.  We just need to define how far over that
 line and in which direction we are willing to go.

 Thanks,
 Brandon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Jay Pipes

On 09/15/2014 04:08 PM, Steven Hardy wrote:

On Mon, Sep 15, 2014 at 05:51:43PM +, Jay Faulkner wrote:

Steven,

It's important to note that two of the blueprints you reference:

https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

are both very unlikely to land in Ironic -- these are configuration
and discovery pieces that best fit inside an operator-deployed CMDB,
rather than Ironic trying to extend its scope significantly to
include these types of functions. I expect the scoping of Ironic
with regards to hardware discovery/interrogation as well as
configuration of hardware (like I will outline below) to be hot
topics in Ironic design summit sessions at Paris.


Hmm, okay - not sure I really get how a CMDB is going to help you
configure your RAID arrays in an automated way?


FWIW, we used Chef to configure all of our RAID stuff at AT&T. It worked 
just fine for Dell and LSI controllers.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-15 Thread Anne Gentle
On Mon, Sep 15, 2014 at 11:31 AM, Zane Bitter zbit...@redhat.com wrote:

 On 14/09/14 11:09, Clint Byrum wrote:

 Excerpts from Gauvain Pocentek's message of 2014-09-04 22:29:05 -0700:

 Hi,

 A bit of background: I'm working on the publication of the HOT
 resources reference on docs.openstack.org. This book is mostly
 autogenerated from the heat source code, using the sphinx XML output. To
 avoid publishing several references (one per released version, as is
 done for the OpenStack config-reference), I'd like to add information
 about the support status of each resource (when they appeared, when
 they've been deprecated, and so on).

 So the plan is to use the SupportStatus class and its `version`
 attribute (see https://review.openstack.org/#/c/116443/ ). And the
 question is, what information should the version attribute hold?
 Possibilities include the release code name (Icehouse, Juno), or the
 release version (2014.1, 2014.2). But this wouldn't be useful for users
 of clouds continuously deployed.

   From my documenter point of view, using the code name seems the right
 option, because it fits with the rest of the documentation.

 What do you think would be the best choice from the heat devs POV?


 What we ship in-tree is the standard library for Heat. I think Heat
 should not tie things to the release of OpenStack, but only to itself.


 Standard Library implies that everyone has it available, but in reality
 operators can (and will, and do) deploy any combination of resource types
 that they want.

  The idea is to simply version the standard library of resources separately
 even from the language. Added resources and properties would be minor
 bumps, deprecating or removing anything would be a major bump. Users then
 just need an API call that allows querying the standard library version.


 We already have API calls to actually inspect resource types. I don't
 think a semantic version number is helpful here, since the different
 existing combinations of resources types are not expressible linearly.

 There's no really good answer here, but the only real answer is making
 sure it's easy for people to generate the docs themselves for their actual
 deployment.


In my observations there could be a few private clouds generating user docs
based on upstream, but there have to be many, many more private clouds than
public, so it's better if docs.openstack.org can take on the work here for
sharing the docs burden for users specifically.

Because we've had multiple inputs asking for heat docs that are released,
I'd like to see Gauvain's naming/numbering go into upstream.




  With this scheme, we can provide a gate test that prevents breaking the
 rules, and automatically generate the docs still. Doing this would sync
 better with continuous deployers who will be running Juno well before
 there is a 2014.2.


 Maybe continuous deployers should continuously deploy their own docs? For
 any given cloud the only thing that matters is what it supports right now.

  Anyway, Heat largely exists to support portability of apps between
 OpenStack clouds. Many many OpenStack clouds don't run one release,
 and we don't require them to do so. So tying to the release is, IMO,
a poor choice.


 The original question was about docs.openstack.org, and in that context I
 think tying it to the release version is a good choice, because that's...
 how OpenStack is released. Individual clouds, however, really need to
 deploy their own docs that document what they actually support.


We only really release two types of documents - the Install Guides and the
Configuration Reference. We have purposely continuously released user guide
info and the HOT templates fall under that category. So this document will
be updated any time someone gets a patch merged. Because of this CI for
docs, labels are critical to aid understanding.

Anne



 The flip side of this, of course, is that whatever we use for the version
 strings on docs.openstack.org will all make its way into all the other
 documentation that gets built, and I do understand your point in that
 context. But versioning the standard library of plugins as if it were a
 monolithic, always-available thing seems wrong to me.

  We do the same thing with HOT's internals, so why not also
 do the standard library this way?


 The current process for HOT is for every OpenStack development cycle (Juno
 is the first to use this) to give it a 'version' string that is the
 expected date of the next release (in the future), and continuous deployers
 who use the new one before that date are on their own (i.e. it's not
 considered stable). So not really comparable.

 cheers,
 Zane.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-15 Thread Brant Knudson
On Mon, Sep 15, 2014 at 1:25 PM, Eric Blake ebl...@redhat.com wrote:

 On 09/15/2014 09:29 AM, Radomir Dopieralski wrote:
  On 12/09/14 17:11, Doug Hellmann wrote:
 
  I also use git-hooks with a post-checkout script to remove pyc files
 any time I change between branches, which is especially helpful if the
 different branches have code being moved around:
 
  git-hooks: https://github.com/icefox/git-hooks
 
  The script:
 
  $ cat ~/.git_hooks/post-checkout/remove_pyc
  #!/bin/sh
  echo Removing pyc files from `pwd`
  find . -name '*.pyc' | xargs rm -f
  exit 0
 
  Good thing that python modules can't have spaces in their names! But for
  the future, find has a -delete parameter that won't break horribly on
  strange filenames.
 
  find . -name '*.pyc' -delete

 GNU find has that as an extension, but POSIX does not guarantee it, and
 BSD find lacks it.


The workaround is -print0: find . -name '*.pyc' -print0 | xargs -0 rm -f

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-15 Thread Adam Harwell
I pretty much completely agree with Stephen here, other than believing we 
should do N:1 on VIPs (item 2 in your list) from the start. We know we're doing 
IPv6 this way, and I'd rather not put off support for it at the 
controller/driver/whatever layer just because the underlying infrastructure 
isn't there yet. I'd like to be 100% ready when it is, not wait until the 
network is ready and then do a refactor.

--Adam

https://keybase.io/rm_you


From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, September 15, 2014 1:33 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Octavia] Responsibilities for controller drivers

Hi Brandon!

My responses in-line:

On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
IN IRC the topic came up about supporting many-to-many load balancers to
amphorae.  I believe a consensus was made that allowing only one-to-many
load balancers to amphorae would be the first step forward, and
re-evaluate later, since colocation and apolocation will need to work
(which brings up another topic, defining what it actually means to be
colocated: On the same amphorae, on the same amphorae host, on the same
cell/cluster, on the same data center/availability zone. That should be
something we discuss later, but not right now).

I am fine with that decisions, but Doug brought up a good point that
this could very well just be a decision for the controller driver and
Octavia shouldn't mandate this for all drivers.  So I think we need to
clearly define what decisions are the responsibility of the controller
driver versus what decisions are mandated by Octavia's construct.

In my mind, the only thing dictated by the controller to the driver here would 
be things related to colocation / apolocation. So in order to fully have that 
discussion here, we first need to have a conversation about what these things 
actually mean in the context of Octavia and/or get specific requirements from 
operators here.  The reference driver (ie. haproxy amphora) will of course have 
to follow a given behavior here as well, and there's the possibility that even 
if we don't dictate behavior in one way or another, operators and users may 
come to expect the behavior of the reference driver here to become the defacto 
requirements.


Items I can come up with off the top of my head:

1) LB:Amphora - M:N vs 1:N

My opinion:  For simplicity, first revision should be 1:N, but leave open the 
possibility of M:N at a later date, depending on what people require. That is 
to say, we'll only do 1:N at first so we can have simpler scheduling algorithms 
for now, but let's not paint ourselves into a corner in other portions of the 
code by assuming there will only ever be one LB on an amphora.

2) VIPs:LB - M:N vs 1:N

So, I would revise that to be N:1 or 1:1. I don't think we'll ever want to 
support a case where multiple LBs share the same VIP. (Multiple amphorae per 
VIP, yes... but not multiple LBs per VIP. LBs are logical constructs that also 
provide for good separation of concerns, particularly around security.)

The most solid use case for N:1 that I've heard is the IPv6 use case, where a 
user wants to expose the exact same services over IPv4 and IPv6, and therefore 
it makes sense to be able to have multiple VIPs per load balancer. (In fact, 
I'm not aware of other use cases here that hold any water.) Having said this, 
we're quite a ways from IPv6 being ready for use in the underlying networking 
infrastructure.  So...  again, I would say let's go with 1:1 for now to make 
things simple for scheduling, but not paint ourselves into a corner here 
architecturally in other areas of the code by assuming there will only ever be 
one VIP per LB.

3) Pool:HMs - 1:N vs 1:1

Does anyone have a solid use case for having more than one health monitor per 
pool?  (And how do you resolve conflicts in health monitor check results?)  I 
can't think of one, so 1:1 has my vote here.



I'm sure there are others.  I'm sure each one will need to be evaluated
on a case-by-case basis.  We will be walking a fine line between
flexibility and complexity.  We just need to define how far over that
line and in which direction we are willing to go.

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-15 Thread Eric Blake
On 09/15/2014 03:02 PM, Brant Knudson wrote:

 Good thing that python modules can't have spaces in their names! But for
 the future, find has a -delete parameter that won't break horribly on
 strange filenames.

 find . -name '*.pyc' -delete

 GNU find has that as an extension, but POSIX does not guarantee it, and
 BSD find lacks it.


 The workaround is -print0: find . -name '*.pyc' -print0 | xargs -0 rm -f

Alas, both find -print0 and xargs -0 are also a GNU extensions not
required by POSIX.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-15 Thread Eric Blake
On 09/15/2014 03:15 PM, Eric Blake wrote:
 On 09/15/2014 03:02 PM, Brant Knudson wrote:
 
 Good thing that python modules can't have spaces in their names! But for
 the future, find has a -delete parameter that won't break horribly on
 strange filenames.

 find . -name '*.pyc' -delete

 GNU find has that as an extension, but POSIX does not guarantee it, and
 BSD find lacks it.


 The workaround is -print0: find . -name '*.pyc' -print0 | xargs -0 rm -f
 
 Alas, both find -print0 and xargs -0 are also a GNU extensions not
 required by POSIX.

find . -name '*.pyc' -exec rm -f \{} +

is POSIX.
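
For completeness, Doug's hook from earlier in the thread, rewritten with that
portable form (same ~/.git_hooks layout assumed):

#!/bin/sh
# ~/.git_hooks/post-checkout/remove_pyc
echo "Removing pyc files from $(pwd)"
find . -name '*.pyc' -exec rm -f {} +
exit 0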

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-15 Thread Zane Bitter

On 15/09/14 16:55, Anne Gentle wrote:

On Mon, Sep 15, 2014 at 11:31 AM, Zane Bitter zbit...@redhat.com wrote:


On 14/09/14 11:09, Clint Byrum wrote:


Excerpts from Gauvain Pocentek's message of 2014-09-04 22:29:05 -0700:


Hi,

A bit of background: I'm working on the publication of the HOT
resources reference on docs.openstack.org. This book is mostly
autogenerated from the heat source code, using the sphinx XML output. To
avoid publishing several references (one per released version, as is
done for the OpenStack config-reference), I'd like to add information
about the support status of each resource (when they appeared, when
they've been deprecated, and so on).

So the plan is to use the SupportStatus class and its `version`
attribute (see https://review.openstack.org/#/c/116443/ ). And the
question is, what information should the version attribute hold?
Possibilities include the release code name (Icehouse, Juno), or the
release version (2014.1, 2014.2). But this wouldn't be useful for users
of clouds continuously deployed.

   From my documenter point of view, using the code name seems the right
option, because it fits with the rest of the documentation.

What do you think would be the best choice from the heat devs POV?



What we ship in-tree is the standard library for Heat. I think Heat
should not tie things to the release of OpenStack, but only to itself.



Standard Library implies that everyone has it available, but in reality
operators can (and will, and do) deploy any combination of resource types
that they want.

  The idea is to simply version the standard library of resources separately

even from the language. Added resources and properties would be minor
bumps, deprecating or removing anything would be a major bump. Users then
just need an API call that allows querying the standard library version.



We already have API calls to actually inspect resource types. I don't
think a semantic version number is helpful here, since the different
existing combinations of resources types are not expressible linearly.

There's no really good answer here, but the only real answer is making
sure it's easy for people to generate the docs themselves for their actual
deployment.



In my observations there could be a few private clouds generating user docs
based on upstream, but there have to be many, many more private clouds than
public, so it's better if docs.openstack.org can take on the work here for
sharing the docs burden for users specifically.


Yes, although we have to recognise that those docs will only be accurate 
for clouds that installed Heat right out of the box. Every operator is 
free to add, remove or even override any resource types they like.



Because we've had multiple inputs asking for heat docs that are released,
I'd like to see Gauvain's naming/numbering go into upstream.


Right, I totally agree :) Gauvain's scheme seems like the right one to 
me; I was arguing against Clint's suggestion.



  With this scheme, we can provide a gate test that prevents breaking the

rules, and automatically generate the docs still. Doing this would sync
better with continuous deployers who will be running Juno well before
there is a 2014.2.



Maybe continuous deployers should continuously deploy their own docs? For
any given cloud the only thing that matters is what it supports right now.

  Anyway, Heat largely exists to support portability of apps between

OpenStack clouds. Many many OpenStack clouds don't run one release,
and we don't require them to do so. So tying to the release is, IMO,
a poor choice.



The original question was about docs.openstack.org, and in that context I
think tying it to the release version is a good choice, because that's...
how OpenStack is released. Individual clouds, however, really need to
deploy their own docs that document what they actually support.



We only really release two types of documents - the Install Guides and the
Configuration Reference. We have purposely continuously released user guide
info and the HOT templates fall under that category. So this document will
be updated any time someone gets a patch merged. Because of this CI for
docs, labels are critical to aid understanding.


+1, right now we constantly get questions from people reading the docs 
but running Icehouse about why some brand new feature or other that we 
merged yesterday doesn't work. So there is no disputing the need to add 
some sort of versioning information, and I think the first supported 
release is probably the right information to add.


cheers,
Zane.


Anne




The flip side of this, of course, is that whatever we use for the version
strings on docs.openstack.org will all make its way into all the other
documentation that gets built, and I do understand your point in that
context. But versioning the standard library of plugins as if it were a
monolithic, always-available thing seems wrong to me.

  We do the same thing with HOT's internals, so why not also

do the 

Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Michael Still
On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com wrote:
 On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
 On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
 Just an observation from the last week or so...

 The biggest problem nova faces at the moment isn't code review latency. Our
 biggest problem is failing to fix our bugs so that the gate is reliable.
 The number of rechecks we've done in the last week to try and land code is
 truly startling.

 I consider both problems to be pretty much equally as important. I don't
 think solving review latency or test reliabilty in isolation is enough to
 save Nova. We need to tackle both problems as a priority. I tried to avoid
 getting into my concerns about testing in my mail on review team bottlenecks
 since I think we should address the problems independantly / in parallel.

 Agreed with this.  I don't think we can afford to ignore either one of them.

Yes, that was my point. I don't mind us debating how to rearrange
hypervisor drivers. However, if we think that will solve all our
problems we are confused.

So, how do we get people to start taking bugs / gate failures more seriously?

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-15 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-09-15 09:31:33 -0700:
 On 14/09/14 11:09, Clint Byrum wrote:
  Excerpts from Gauvain Pocentek's message of 2014-09-04 22:29:05 -0700:
  Hi,
 
  A bit of background: I'm working on the publication of the HOT
  resources reference on docs.openstack.org. This book is mostly
  autogenerated from the heat source code, using the sphinx XML output. To
  avoid publishing several references (one per released version, as is
  done for the OpenStack config-reference), I'd like to add information
  about the support status of each resource (when they appeared, when
  they've been deprecated, and so on).
 
  So the plan is to use the SupportStatus class and its `version`
  attribute (see https://review.openstack.org/#/c/116443/ ). And the
  question is, what information should the version attribute hold?
  Possibilities include the release code name (Icehouse, Juno), or the
  release version (2014.1, 2014.2). But this wouldn't be useful for users
  of clouds continuously deployed.
 
From my documenter point of view, using the code name seems the right
  option, because it fits with the rest of the documentation.
 
  What do you think would be the best choice from the heat devs POV?
 
  What we ship in-tree is the standard library for Heat. I think Heat
  should not tie things to the release of OpenStack, but only to itself.
 
 Standard Library implies that everyone has it available, but in 
 reality operators can (and will, and do) deploy any combination of 
 resource types that they want.
 

Mmk, I guess I was being too optimistic about how homogeneous OpenStack
clouds might be.

  The idea is to simply version the standard library of resources separately
  even from the language. Added resources and properties would be minor
  bumps, deprecating or removing anything would be a major bump. Users then
  just need an API call that allows querying the standard library version.
 
 We already have API calls to actually inspect resource types. I don't 
 think a semantic version number is helpful here, since the different 
 existing combinations of resource types are not expressible linearly.
 
 There's no really good answer here, but the only real answer is making 
 sure it's easy for people to generate the docs themselves for their 
 actual deployment.
 

That's an interesting idea. By any chance do we have something that
publishes the docs directly from source tree into swift? Might make it
easier if we could just do that as part of code pushes for those who run
clouds from source.

  With this scheme, we can provide a gate test that prevents breaking the
  rules, and automatically generate the docs still. Doing this would sync
  better with continuous deployers who will be running Juno well before
  there is a 2014.2.
 
 Maybe continuous deployers should continuously deploy their own docs? 
 For any given cloud the only thing that matters is what it supports 
 right now.


That's an interesting idea, but I think what the user wants is to see how
this cloud is different from other clouds.

  Anyway, Heat largely exists to support portability of apps between
  OpenStack clouds. Many many OpenStack clouds don't run one release,
  and we don't require them to do so. So tying to the release is, IMO,
 a poor choice.
 
 The original question was about docs.openstack.org, and in that context 
 I think tying it to the release version is a good choice, because 
 that's... how OpenStack is released. Individual clouds, however, really 
 need to deploy their own docs that document what they actually support.
 

Yeah I hadn't thought of that before. I like the idea but I wonder how
practical it is for CD private clouds.

 The flip side of this, of course, is that whatever we use for the 
 version strings on docs.openstack.org will all make its way into all the 
 other documentation that gets built, and I do understand your point in 
 that context. But versioning the standard library of plugins as if it 
 were a monolithic, always-available thing seems wrong to me.


Yeah I think it is too optimistic in retrospect.

  We do the same thing with HOT's internals, so why not also
  do the standard library this way?
 
 The current process for HOT is for every OpenStack development cycle 
 (Juno is the first to use this) to give it a 'version' string that is 
 the expected date of the next release (in the future), and continuous 
 deployers who use the new one before that date are on their own (i.e. 
 it's not considered stable). So not really comparable.
 

I think there's a difference between a CD operator making it available,
and saying they support it. Just like a new API version in OpenStack, it
may be there, but they may communicate to users it is alpha until after
it gets released upstream. I think that is the same for this, and so I
think that using the version number is probably fine.
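
For anyone reading along who hasn't looked at the plugin API: the attribute
being discussed is declared on each resource plugin. A rough sketch of what a
versioned declaration could look like (simplified, assuming Heat's
heat.engine.support module and the `version` attribute from the review
referenced earlier in the thread; exact signatures vary by release):

    from heat.engine import resource
    from heat.engine import support


    class ExampleResource(resource.Resource):
        # The open question in this thread is what 'version' should hold:
        # a release name ('Juno'), a release number ('2014.2'), or a
        # separately versioned standard-library number.
        support_status = support.SupportStatus(version='2014.2')

Whatever value ends up in 'version' is what the autogenerated resource
reference would then display.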

___
OpenStack-dev mailing list

Re: [openstack-dev] [tripleo] Adding hp1 back running tripleo CI

2014-09-15 Thread Gregory Haynes
This is a total shot in the dark, but a couple of us ran into issues
with the Ubuntu Trusty kernel (I know I hit it on HP hardware) that was
causing severely degraded performance for TripleO. This was fixed with a
recently released kernel in Trusty... maybe you could be running into
this?

-Greg

 Also it's worth noting the test I have been using to compare jobs is the
 F20 overcloud job, something has happened recently causing this job to
 run slower than it used to run (possibly up to 30 minutes slower), I'll
 now try to get to the bottom of this. So the times may not end up being
 as high as referenced above but I'm assuming the relative differences
 between the two clouds won't change.
 
 thoughts?
 Derek
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] final review push for releases thursday

2014-09-15 Thread Doug Hellmann
We’re down to 2 bugs, both of which have patches up for review.

James Carey has a fix for the decoding error we’re seeing in mask_password(). 
It needs to land in oslo.utils and oslo-incubator:

- utils: https://review.openstack.org/#/c/121657/
- incubator: https://review.openstack.org/#/c/121632/
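
(For anyone who hasn't used the helper in question: mask_password() scrubs
password-like values out of a message before it gets logged. A rough usage
sketch, assuming the oslo.utils release current at the time rather than the
patch under review:)

    from oslo.utils import strutils

    # Replace credential-looking values with '***' before logging the
    # message; the review above addresses a decoding error hit inside
    # this helper on some inputs.
    body = "request body: {'auth': {'password': 'super-secret'}}"
    print(strutils.mask_password(body))
    # Prints the same message with the password value masked.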

Robert Collins has a fix for pbr breaking on tags that don’t look like version 
numbers, with 1 dependency:

- https://review.openstack.org/#/c/114403/
- https://review.openstack.org/#/c/108271/ (dependency)

If we knock these out Tuesday we can cut releases of the related libs to give 
us a day or so with unit tests running before the final versions are tagged on 
Thursday.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Brant Knudson
On Mon, Sep 15, 2014 at 4:30 PM, Michael Still mi...@stillhq.com wrote:

 On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com
 wrote:
  On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
  On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
  Just an observation from the last week or so...
 
  The biggest problem nova faces at the moment isn't code review
 latency. Our
  biggest problem is failing to fix our bugs so that the gate is
 reliable.
  The number of rechecks we've done in the last week to try and land
 code is
  truly startling.
 
  I consider both problems to be pretty much equally as important. I don't
  think solving review latency or test reliability in isolation is enough
 to
  save Nova. We need to tackle both problems as a priority. I tried to
 avoid
  getting into my concerns about testing in my mail on review team
 bottlenecks
  since I think we should address the problems independently / in
 parallel.
 
  Agreed with this.  I don't think we can afford to ignore either one of
 them.

 Yes, that was my point. I don't mind us debating how to rearrange
 hypervisor drivers. However, if we think that will solve all our
 problems we are confused.

 So, how do we get people to start taking bugs / gate failures more
 seriously?

 Michael


What do you think about having an irc channel for working through gate
bugs? I've always found looking at gate failures frustrating because I seem
to be expected to work through these by myself, and maybe somebody's
already looking at it or has more information that I don't know about.
There have been times already where a gate bug that could have left
everything broken for a while wound up fixed pretty quickly because we were
able to find the right person hanging out in irc. Sometimes all it takes is
for someone with the right knowledge to be there. A hypothetical exchange:

rechecker: I got this error where the tempest-foo test failed ... http://...
tempest-expert: That test calls the compute-bar nova API
nova-expert: That API calls the network-baz neutron API
neutron-expert: When you call that API you need to also call this other API
to poll for it to be done... is nova doing that?
nova-expert: Nope. Fix on the way.

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Sean Dague
On 09/15/2014 05:52 PM, Brant Knudson wrote:
 
 
 On Mon, Sep 15, 2014 at 4:30 PM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:
 
 On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
  On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
  On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
  Just an observation from the last week or so...
 
  The biggest problem nova faces at the moment isn't code review 
 latency. Our
  biggest problem is failing to fix our bugs so that the gate is 
 reliable.
  The number of rechecks we've done in the last week to try and land 
 code is
  truly startling.
 
  I consider both problems to be pretty much equally as important. I 
 don't
  think solving review latency or test reliability in isolation is enough 
 to
  save Nova. We need to tackle both problems as a priority. I tried to 
 avoid
  getting into my concerns about testing in my mail on review team 
 bottlenecks
  since I think we should address the problems independently / in 
 parallel.
 
  Agreed with this.  I don't think we can afford to ignore either one of 
 them.
 
 Yes, that was my point. I don't mind us debating how to rearrange
 hypervisor drivers. However, if we think that will solve all our
 problems we are confused.
 
 So, how do we get people to start taking bugs / gate failures more
 seriously?
 
 Michael
 
 
 What do you think about having an irc channel for working through gate
 bugs? I've always found looking at gate failures frustrating because I
 seem to be expected to work through these by myself, and maybe
 somebody's already looking at it or has more information that I don't
 know about. There have been times already where a gate bug that could
 have left everything broken for a while wound up fixed pretty quickly
 because we were able to find the right person hanging out in irc.
 Sometimes all it takes is for someone with the right knowledge to be
 there. A hypothetical exchange:
 
 rechecker: I got this error where the tempest-foo test failed ... http://...
 tempest-expert: That test calls the compute-bar nova API
 nova-expert: That API calls the network-baz neutron API
 neutron-expert: When you call that API you need to also call this other
 API to poll for it to be done... is nova doing that?
 nova-expert: Nope. Fix on the way.

Honestly, the #openstack-qa channel is a completely appropriate place
for that. Plus it already has a lot of the tempest experts.
Realistically anyone that works on these kinds of fixes tends to be there.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Jay Pipes

On 09/15/2014 05:30 PM, Michael Still wrote:

On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com wrote:

On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:

On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:

Just an observation from the last week or so...

The biggest problem nova faces at the moment isn't code review latency. Our
biggest problem is failing to fix our bugs so that the gate is reliable.
The number of rechecks we've done in the last week to try and land code is
truly startling.


I consider both problems to be pretty much equally as important. I don't
think solving review latency or test reliability in isolation is enough to
save Nova. We need to tackle both problems as a priority. I tried to avoid
getting into my concerns about testing in my mail on review team bottlenecks
since I think we should address the problems independently / in parallel.


Agreed with this.  I don't think we can afford to ignore either one of them.


Yes, that was my point. I don't mind us debating how to rearrange
hypervisor drivers. However, if we think that will solve all our
problems we are confused.

So, how do we get people to start taking bugs / gate failures more seriously?


A few suggestions:

1) Bug bounties

Money talks. I know it sounds silly, but lots of developers get paid to 
work on features. Not as many have financial incentive to fix bugs.


It doesn't need to be a huge amount. And I think a 'wall of fame' type of 
respect reward for top bug fixers or gate unblockers would be a good 
incentive as well.


The foundation has a budget. I can't think of a better way to effect 
positive change than allocating $10-20K to paying bug bounties.


2) Videos discussing gate tools and diagnostics techniques

I hope I'm not bursting any of Sean Dague's bubble, but one thing we've 
been discussing, together with Dan Smith, is having a weekly or 
bi-weekly Youtube show where we discuss Nova development topics, with 
deep dives into common but hairy parts of the Nova codebase. The idea is 
to grow Nova contributors' knowledge of more parts of Nova than just one 
particular area they might be paid to work on.


I think a weekly or bi-weekly show that focuses on bug and gate issues 
would be a really great idea, and I'd be happy to play a role in this. 
The Chef+OpenStack community does weekly Youtube recordings of their 
status meetings and AFAICT, it's pretty successful.


3) Provide a clearer way to understand what is a gate/CI/infra issue and 
what is a project bug


Sometimes it's pretty hard to determine whether something in the E-R 
check page is due to something in the infra scripts, some transient 
issue in the upstream CI platform (or part of it), or actually a bug in 
one or more of the OpenStack projects.


Perhaps there is a way to identify/categorize gate failures (in the form 
of E-R recheck queries) on some meta status page, that would either be 
populated manually or through some clever analysis to better direct 
would-be gate block fixers to where they need to focus?


Anyway, just a few ideas,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Davanum Srinivas
Sean,

I have tabs opened to:
http://status.openstack.org/elastic-recheck/gate.html
http://status.openstack.org/elastic-recheck/data/uncategorized.html

and periodically catch up on openstack-qa on IRC as well, I just did
not realize this WSGI gate bug was hurting the gate this much.

So, could we somehow indicate (email? or one of the web pages above?)
where occasional helpers can watch and pitch in when needed.

thanks,
dims


On Mon, Sep 15, 2014 at 5:55 PM, Sean Dague s...@dague.net wrote:
 On 09/15/2014 05:52 PM, Brant Knudson wrote:


 On Mon, Sep 15, 2014 at 4:30 PM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:

 On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
  On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
  On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
  Just an observation from the last week or so...
 
  The biggest problem nova faces at the moment isn't code review 
 latency. Our
  biggest problem is failing to fix our bugs so that the gate is 
 reliable.
  The number of rechecks we've done in the last week to try and land 
 code is
  truly startling.
 
  I consider both problems to be pretty much equally as important. I 
 don't
  think solving review latency or test reliability in isolation is 
 enough to
  save Nova. We need to tackle both problems as a priority. I tried to 
 avoid
  getting into my concerns about testing in my mail on review team 
 bottlenecks
  since I think we should address the problems independently / in 
 parallel.
 
  Agreed with this.  I don't think we can afford to ignore either one of 
 them.

 Yes, that was my point. I don't mind us debating how to rearrange
 hypervisor drivers. However, if we think that will solve all our
 problems we are confused.

 So, how do we get people to start taking bugs / gate failures more
 seriously?

 Michael


 What do you think about having an irc channel for working through gate
 bugs? I've always found looking at gate failures frustrating because I
 seem to be expected to work through these by myself, and maybe
 somebody's already looking at it or has more information that I don't
 know about. There have been times already where a gate bug that could
 have left everything broken for a while wound up fixed pretty quickly
 because we were able to find the right person hanging out in irc.
 Sometimes all it takes is for someone with the right knowledge to be
 there. A hypothetical exchange:

 rechecker: I got this error where the tempest-foo test failed ... http://...
 tempest-expert: That test calls the compute-bar nova API
 nova-expert: That API calls the network-baz neutron API
 neutron-expert: When you call that API you need to also call this other
 API to poll for it to be done... is nova doing that?
 nova-expert: Nope. Fix on the way.

 Honestly, the #openstack-qa channel is a completely appropriate place
 for that. Plus it already has a lot of the tempest experts.
 Realistically anyone that works on these kinds of fixes tends to be there.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-15 Thread Mark Washenberger
Hi there logging experts,

We've recently had a little disagreement in the glance team about the
appropriate log levels for HTTP requests that end up failing due to user
errors. An example would be a request to get an image that does not exist,
which results in a 404 Not Found response.

On one hand, this event is an error, so DEBUG or INFO seem a little too
low. On the other hand, this error doesn't generally require any kind of
operator investigation or indicate any actual failure of the service, so
perhaps it is excessive to log it at WARN or ERROR.

Please provide feedback to help us resolve this dispute if you feel you can!

Thanks,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-15 Thread Sean Dague
On 09/15/2014 07:00 PM, Mark Washenberger wrote:
 Hi there logging experts,
 
 We've recently had a little disagreement in the glance team about the
 appropriate log levels for http requests that end up failing due to user
 errors. An example would be a request to get an image that does not
 exist, which results in a 404 Not Found request.
 
 On one hand, this event is an error, so DEBUG or INFO seem a little too
 low. On the other hand, this error doesn't generally require any kind of
 operator investigation or indicate any actual failure of the service, so
 perhaps it is excessive to log it at WARN or ERROR.
 
 Please provide feedback to help us resolve this dispute if you feel you can!

My feeling is this is an INFO level. There is really nothing the admin
should care about here.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-15 Thread Clint Byrum
Excerpts from Sean Dague's message of 2014-09-15 16:02:04 -0700:
 On 09/15/2014 07:00 PM, Mark Washenberger wrote:
  Hi there logging experts,
  
  We've recently had a little disagreement in the glance team about the
  appropriate log levels for http requests that end up failing due to user
  errors. An example would be a request to get an image that does not
  exist, which results in a 404 Not Found request.
  
  On one hand, this event is an error, so DEBUG or INFO seem a little too
  low. On the other hand, this error doesn't generally require any kind of
  operator investigation or indicate any actual failure of the service, so
  perhaps it is excessive to log it at WARN or ERROR.
  
  Please provide feedback to help us resolve this dispute if you feel you can!
 
 My feeling is this is an INFO level. There is really nothing the admin
 should care about here.

Agree with Sean. INFO is useful for investigations. WARN and ERROR are
cause for alarm.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][all] Help with interpreting the log level guidelines

2014-09-15 Thread Jay Pipes

On 09/15/2014 07:00 PM, Mark Washenberger wrote:

Hi there logging experts,

We've recently had a little disagreement in the glance team about the
appropriate log levels for http requests that end up failing due to user
errors. An example would be a request to get an image that does not
exist, which results in a 404 Not Found request.

On one hand, this event is an error, so DEBUG or INFO seem a little too
low.


But it's not an error. I mean, it's an error for the user, but the 
software (Glance) has not acted in a way that is either unrecoverable or 
requires action.


I think DEBUG is the appropriate level to log this. That said, standard 
WSGI logging dictates that there be a single INFO-level log line that 
logs the URI request made and the HTTP return code sent, so there should 
already be an INFO level log line that would have the 40X return code in it.
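
A rough sketch of that split, just to make it concrete (not Glance code;
assuming Python's standard logging plus the usual one-line-per-request
access log at the WSGI layer):

    import logging

    LOG = logging.getLogger(__name__)


    def show_image(image_id, registry):
        image = registry.get(image_id)
        if image is None:
            # A user asked for something that doesn't exist. Not a service
            # failure, so note the detail at DEBUG only.
            LOG.debug("Image %s not found", image_id)
            return 404, "Not Found"
        return 200, image


    if __name__ == "__main__":
        logging.basicConfig(level=logging.DEBUG)
        print(show_image("missing", {}))  # -> (404, 'Not Found')

The WSGI layer still emits its single INFO access line with the 404 in it,
so operators can count user errors without enabling DEBUG and without any
WARN/ERROR noise.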


Best,
-jay

 On the other hand, this error doesn't generally require any kind of

operator investigation or indicate any actual failure of the service, so
perhaps it is excessive to log it at WARN or ERROR.

Please provide feedback to help us resolve this dispute if you feel you can!

Thanks,
markwash


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th - 3pm EST

2014-09-15 Thread Richard Jones
This is a great idea, thanks!

On 16 September 2014 08:56, Sean Dague s...@dague.net wrote:

 A few of us have decided to pull together a regular (cadence to be
 determined) video series taking on deep dives inside of OpenStack,
 looking at code, explaining why things work that way, and fielding
 questions from anyone interested.

 For lack of a better title, I've declared it OpenStack Bootstrapping Hour.

 Episode 0 - Mock best practices will kick off this Friday, Sept 19th,
 from 3pm - 4pm EST. Our experts for this will be Jay Pipes and Dan
 Smith. It will be done as a Google Hangout on Air, which means there
 will be a live youtube stream while it's on, and a recorded youtube
 video that's publicly accessible once we're done.

 We'll be using an etherpad during the broadcast to provide links to the
 content people are looking at, as well as capture questions. That will
 be our backchannel, and audience participation forum, with the advantage
 that it creates a nice concise document at the end of the broadcast that
 pairs well with the video. (Also: the tech test showed that while code
 examples are perfectly viewable in the final video, during the
 live stream they are a little hard to read, etherpad links will help
 people follow along at home).

 Assuming this turns out to be useful, we're thinking about lots of other
 deep dives. The intent is that these are in-depth dives. We as a
 community have learned so many things over the last 4 years, but as
 OpenStack has gotten so large, being familiar with more than a narrow
 slice is hard. This is hopefully a part of the solution to address that.
 As I've told others, if nothing else, I'm looking forward to learning a
 ton in the process.

 Final links for the hangout + etherpad will be posted a little later in
 the week. Mostly wanted to make people aware it was coming.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0020] Disassociating floating IPs does not terminate NAT connections with Neutron L3 agent

2014-09-15 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Disassociating floating IPs does not terminate NAT connections with
Neutron L3 agent
- ---

### Summary ###
Every virtual instance is automatically assigned a private IP address.
You may optionally assign public IP addresses to instances. OpenStack
uses the term floating IP to refer to an IP address (typically
public) that can be dynamically added to a running virtual instance.
The Neutron L3 agent uses Network Address Translation (NAT) to assign
floating IPs to virtual instances. Floating IPs can be dynamically
released from a running virtual instance but any active connections are
not terminated with this release as expected when using the Neutron L3
agent.

### Affected Services / Software ###
Neutron, Icehouse, Havana, Grizzly, Folsom

### Discussion ###
When creating a virtual instance, a floating IP address is not
allocated by default. After a virtual instance is created, a user can
explicitly associate a floating IP address to that instance. Users can
create connections to the virtual instance using this floating IP
address. Also, this floating IP address can be disassociated from any
running instance without shutting that instance down.

If a user initiates a connection using the floating IP address, this
connection remains alive and accessible even after the floating IP
address is released from that instance. This potentially violates
restrictive policies which are only being applied to new connections.
These policies are ignored for pre-existing connections and the virtual
instance remains accessible from the public network.

This issue is only known to affect Neutron when using the L3 agent.
Nova networking is not affected.

### Recommended Actions ###
There is unfortunately no easy way to detect which connections were
made over a floating IP address from a virtual instance, as the NAT is
performed at the Neutron router. The only safe way of terminating all
connections made over a floating IP address is to terminate the virtual
instance itself.

The following recommendations should be followed when using the Neutron
L3 agent:

- - Only attach a floating IP address to a virtual instance when that
instance should be accessible from networks outside the cloud.
- - Terminate or stop the instance along with disassociating the floating
IP address to ensure that all connections are closed.

The Neutron development team plans to address this issue in a future
version of Neutron.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0020
Original LaunchPad Bug : https://bugs.launchpad.net/neutron/+bug/1334926
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUF3r6AAoJEJa+6E7Ri+EVo+AH/i4GhZsFD3OJWlasq+XxkqqO
W7g/6YQuKgRndl63UjnWAfpvJCA8Bl1msryb2K0tTZpDByVpgupPAf6+/NMZXvCT
37YF236Ig/a/iLNjAdHRNHzq8Bhxe7tIikm1ICUH+Hyhob7soBlAC52lEJz9cFwb
Hazo2K0jjt4TEyxAae06KsIuOV/n+tO7ginYxxv2g8DkhKik5PMi4x8j//DYFz92
+SwPvUKeWiZ3JmD1M84Yj4VgPxah6fKDtCYKdTdcv7pYJGlcac8DTXbJkoFVd6H/
v+XbBGWjg7+M7WlZJmDlC2XfWLVKBsREs3BAN/hagE6aKAyImT/gfyT0WxLpVIU=
=Gk3u
-END PGP SIGNATURE-
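
For operators who want to script the recommended action above, a rough
sketch using python-novaclient (illustrative only; the credentials, endpoint
and client version string below are placeholders, and the exact calls may
differ between releases):

    from novaclient import client as nova_client

    # Placeholder credentials and endpoint for the target cloud.
    nova = nova_client.Client("2", "admin", "admin-password",
                              "admin-tenant", "http://keystone:5000/v2.0")

    server = nova.servers.get("INSTANCE_UUID")
    # Disassociate the floating IP, then stop the instance so that any NAT
    # connections already established over that address are actually torn
    # down, as recommended in the notice above.
    server.remove_floating_ip("203.0.113.10")
    server.stop()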

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th - 3pm EST

2014-09-15 Thread Rochelle.RochelleGrober
+1000
This is *great*.  Not only for newbies, but refreshers, learning different 
approaches, putting faces to the signatures, etc.  And Mock best practices is a 
brilliant starting place for developers.

I'd like to vote for a few others:
- Development environment (different ones: PyCharm, Eclipse, IDE for Docs, etc.)
- Tracking down a bug: log searching, back tracing, etc.
- Fixing a bug:  From assigning in Launchpad through clone, fix, git review, 
etc.
- Writing an integrated test: setup, data recording/collection/clean tear down.

Sorry to have such a big wish list, but for people who learn experientially, 
this will be immensely useful.

--Rocky

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Monday, September 15, 2014 3:56 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th 
- 3pm EST

A few of us have decided to pull together a regular (cadence to be
determined) video series taking on deep dives inside of OpenStack,
looking at code, explaining why things work that way, and fielding
questions from anyone interested.

For lack of a better title, I've declared it OpenStack Bootstrapping Hour.

Episode 0 - Mock best practices will kick off this Friday, Sept 19th,
from 3pm - 4pm EST. Our experts for this will be Jay Pipes and Dan
Smith. It will be done as a Google Hangout on Air, which means there
will be a live youtube stream while it's on, and a recorded youtube
video that's publicly accessible once we're done.

We'll be using an etherpad during the broadcast to provide links to the
content people are looking at, as well as capture questions. That will
be our backchannel, and audience participation forum, with the advantage
that it creates a nice concise document at the end of the broadcast that
pairs well with the video. (Also: the tech test showed that while code
examples are perfectly viewable during in the final video, during the
live stream they are a little hard to read, etherpad links will help
people follow along at home).

Assuming this turns out to be useful, we're thinking about lots of other
deep dives. The intent is that these are in-depth dives. We as a
community have learned so many things over the last 4 years, but as
OpenStack has gotten so large, being familiar with more than a narrow
slice is hard. This is hopefully a part of the solution to address that.
As I've told others, if nothing else, I'm looking forward to learning a
ton in the process.

Final links for the hangout + etherpad will be posted a little later in
the week. Mostly wanted to make people aware it was coming.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >