[openstack-dev] [Manila]

2014-09-05 Thread Jyoti Ranjan
Which of file system appliances are supported as of today? We are thinking
to integrate with our cloud.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-05 Thread Kyle Mestery
On Fri, Sep 5, 2014 at 9:54 AM, Robert Kukura kuk...@noironetworks.com wrote:
 Kyle,

 Please consider an FFE for
 https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding.
 This was discussed extensively at Wednesday's ML2 meeting, where the
 consensus was that it would be valuable to get this into Juno if possible.
 The patches have had core reviews from Armando, Akihiro, and yourself.
 Updates to the three patches addressing the remaining review issues will be
 posted today, along with an update to the spec to bring it in line with the
 implementation.

It's on the list for inclusion as an FFE. I hope to chat with ttx soon
about these and we'll decide which ones get FFEs at that point.

Thanks,
Kyle

 -Bob


 On 9/3/14, 8:17 AM, Kyle Mestery wrote:

 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security

 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-05 Thread Thierry Carrez
Kekane, Abhishek wrote:
 [...]
 If we have this feature in glance then we can able to use features like
 reload glance configuration file without restart, graceful shutdown etc.
 
 Also it will use common code like other OpenStack projects nova,
 keystone, cinder does.

I think it makes a lot of sense but it also has a lot of documentation
consequences. If it was ready to merge and had the necessary reviews
piled up I would +1 this but the way it stands now (and given Glance's
current review velocity) I'm more leaning towards -1.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila]

2014-09-05 Thread Stefano Maffulli

Hi Jyoti

This is the wrong email list: we use openstack-dev only to discuss 
future development of OpenStack project. Use the General mailing list 
or the one for Operators (check http://lists.openstack.org). 
Alternatively search for answers (and if you don't find any, ask 
questions) on https://ask.openstack.org

Cheers,
Stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][FFE] glance_store switch-over and random access to image data

2014-09-05 Thread Thierry Carrez
Flavio Percoco wrote:
 Greetings,
 
 I'd like to request a FFE for 2 features I've been working on during
 Juno which, unfortunately, haven been delayed for different reasons
 during this time.
 [...]

I would be inclined to give both a chance, but they really need to merge
quickly, and the current Glance review velocity is not exactly feeding
my hopes. +0 as far as I'm concerned, and definitely -1 if it takes more
than one week.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Kevin L. Mitchell
On Fri, 2014-09-05 at 10:26 +0100, Daniel P. Berrange wrote:
  2. Removal of drivers other than the reference implementation for each
  project could be the healthiest option
  a. Requires transparent, public, automated 3'rd party CI
  b. Requires a TRUE plugin architecture and mentality
  c. Requires a stable and well defined API
 
 As mentioned in the original mail I don't want to see a situation where
 we end up with some drivers in tree and others out of tree as it sets up
 bad dynamics within the project. Those out of tree will always have the
 impression of being second class citizens and thus there will be constant
 pressure to accept drivers back into tree. The so called 'reference'
 driver that stayed in tree would also continue to be penalized in the
 way it is today, and so its development would be disadvantaged compared
 to the out of tree drivers.

I have one quibble with the notion of not even one driver in core: I
think it is probably useful to include a dummy, do-nothing driver that
can be used for in-tree functional tests and as an example to point
those interested in writing a driver.  Then, the second-class citizen
is the one actually in the tree :)  Beyond that, I agree with this
proposal: it has never made sense to me that *all* drivers live in the
tree, and it actually offends my sense of organization to have the tree
so cluttered; we split functions when they get too big, we split modules
when they get too big, and we create subdirectories when packages get
too big, so why not split repos when they get too big?
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] default allow security group

2014-09-05 Thread Monty Taylor

Hi!

I've decided that as I have problems with OpenStack while using it in 
the service of Infra, I'm going to just start spamming the list.


Please make something like this:

neutron security-group-create default --allow-every-damn-thing

Right now, to make security groups get the hell out of our way because 
they do not provide us any value because we manage our own iptables, it 
takes adding something like 20 rules.


15:24:05  clarkb | one each for ingress and egress udp tcp over 
ipv4 then ipv6 and finaly icmp


That may be great for someone using my-first-server-pony, but for me, I 
know how the internet works, and when I ask for a server, I want it to 
just work.


Now, I know, I know - the DEPLOYER can make decisions blah blah blah.

BS

If OpenStack is going to let my deployer make the absolutely assinine 
decision that all of my network traffic should be blocked by default, it 
should give me, the USER, a get out of jail free card.


kthxbai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Ken'ichi Ohmichi
2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
 Hi,

 I'd like to ask for a FFE for the 3 patchsets that implement quotas for 
 server groups.

 Server groups (which landed in Icehouse) provides a really useful 
 anti-affinity filter for scheduling that a lot of customers woudl like to 
 use, but without some form of quota control to limit the amount of 
 anti-affinity its impossible to enable it as a feature in a public cloud.

 The code itself is pretty simple - the number of files touched is a 
 side-effect of having three V2 APIs that report quota information and the 
 need to protect the change in V2 via yet another extension.

 https://review.openstack.org/#/c/104957/
 https://review.openstack.org/#/c/116073/
 https://review.openstack.org/#/c/116079/

I am happy to sponsor this work.

Thanks
Ken'ichi ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][FFE] Requesting exception for Swift trust authentication blueprint

2014-09-05 Thread Trevor McKay
Not sure how this is done, but I'm a core member for Sahara, and I
hereby sponsor it.

On Fri, 2014-09-05 at 09:57 -0400, Michael McCune wrote:
 hey folks,
 
 I am requesting an exception for the Swift trust authentication blueprint[1]. 
 This blueprint addresses a security bug in Sahara and represents a 
 significant move towards increased security for Sahara clusters. There are 
 several reviews underway[2] with 1 or 2 more starting today or monday.
 
 This feature is initially implemented as optional and as such will have 
 minimal impact on current user deployments. By default it is disabled and 
 requires no additional configuration or management from the end user.
 
 My feeling is that there has been vigorous debate and discussion surrounding 
 the implementation of this blueprint and there is consensus among the team 
 that these changes are needed. The code reviews for the bulk of the work have 
 been positive thus far and I have confidence these patches will be accepted 
 within the next week.
 
 thanks for considering this exception,
 mike
 
 
 [1]: 
 https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication
 [2]: 
 https://review.openstack.org/#/q/status:open+topic:bp/edp-swift-trust-authentication,n,z
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Lucas Alvares Gomes
 I look at what we do with Ironic testing current as a guide here.
 We have tempest job that runs against Nova, that validates changes
 to nova don't break the separate Ironic git repo. So my thought
 is that all our current tempest jobs would simply work in that
 way. IOW changes to so called nova common would run jobs that
 validate the change against all the virt driver git repos. I think
 this kind of setup is pretty much mandatory for split repos to be
 viable, because I don't want to see us loose testing coverage in
 this proposed change.

Thanks Daniel for raising it this problem.

Yeah I think that what we did with Ironic while the driver is* out of
the Nova tree serves as a good example. I also think that having
drivers out of the tree is possible, making the tests run against the
nova-common and assert things didn't break is no problem. But as you
described before the process of code submission was quite painful and
required a lot of effort and coordination from the Ironic and Nova
teams, we would need to improve that.

Another problem we will have in splitting the drivers out is that
classic limitation of launchpad blueprints, we can't track tasks
across multiple projects. (This will change once Storyboard is
completed I guess).

But that's all a long-term solution. In the short term I don't have
see any real solution yet, this thing about asking companies/projects
that has a driver in Nova to help with reviews is not so bad IMO. I've
started reviewing code in Nova today and will continue doing that,
maybe aiming for core so that we can speed up the future reviews to
the Ironic driver.

Now, I let me throw a crazy idea here into the mix (it might be stupid, but):

Maybe Nova is doing much more than it should, deprecating the
baremetal and network part and splitting the scheduler out of the
project helps a lot. But, and if other parts were splitted as well,
like managing flavors, creating the instances etc... And then Nova can
be the thing that knows how to talk/manage hypervisors only and won't
have to deal with crazy cases like the Ironic where we try make real
machines looks  feel like VMs to fit into Nova, because that's
painful and I think we are going to have many limitations if we
continue to do that (I believe the same may happen with the Docker
driver).

So if we have another project on top of Nova, Ironic and
$CONTAINER_PROJECT_NAME** that abstract all the rest and only talks to
Nova when a VM is going to be deployed or Ironic when a Baremetal
machine is going to be deployed, etc... Maybe then Nova will be
considerable small and can keep all drivers in tree (hypervisor
drivers only, no Docker or Ironic).

* was tempted to write 'was' there :)
** A new project that will know how to handle the containers case.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 10:25:09AM -0500, Kevin L. Mitchell wrote:
 On Fri, 2014-09-05 at 10:26 +0100, Daniel P. Berrange wrote:
   2. Removal of drivers other than the reference implementation for each
   project could be the healthiest option
   a. Requires transparent, public, automated 3'rd party CI
   b. Requires a TRUE plugin architecture and mentality
   c. Requires a stable and well defined API
  
  As mentioned in the original mail I don't want to see a situation where
  we end up with some drivers in tree and others out of tree as it sets up
  bad dynamics within the project. Those out of tree will always have the
  impression of being second class citizens and thus there will be constant
  pressure to accept drivers back into tree. The so called 'reference'
  driver that stayed in tree would also continue to be penalized in the
  way it is today, and so its development would be disadvantaged compared
  to the out of tree drivers.
 
 I have one quibble with the notion of not even one driver in core: I
 think it is probably useful to include a dummy, do-nothing driver that
 can be used for in-tree functional tests and as an example to point
 those interested in writing a driver.  Then, the second-class citizen
 is the one actually in the tree :)  Beyond that, I agree with this
 proposal: it has never made sense to me that *all* drivers live in the
 tree, and it actually offends my sense of organization to have the tree
 so cluttered; we split functions when they get too big, we split modules
 when they get too big, and we create subdirectories when packages get
 too big, so why not split repos when they get too big?

Oh sure, having a fake virt driver in tree is fine and indeed desirable
for the reasons you mention. I was exclusively thinking about the real
virt drivers in my earlier statement.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Juno-3 released

2014-09-05 Thread Ruslan Kamaldinov
I'm glad to announce Juno-3 release of Murano:
https://launchpad.net/murano/+milestone/juno-3

We've implemented 8 blueprints and closed 50 bugs. Thanks a lot to
everyone who worked on this milestone. All these efforts helped to
significantly improve quality and test-coverage of the project!

Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Asselin, Ramy
-1 from me (non-cinder core)

It very nice to see you're making progress. I, personally, was very confused 
about voting. 
Here's my understanding: Voting: it is the ability to provide an official +1 
-1 vote in the gerrit system.

I don't see a stable history [1]. Before requesting voting, you should enable 
your system on the cinder project itself. 
Initially, you should disable ALL gerrit comments, i.e. run in silent mode, per 
request from cinder PTL [2]. Once stable there, you can enable gerrit comments. 
At this point, everyone can see pass/fail comments with a vote=0.
Once stable there on real patches, you can request voting again, where the 
pass/fail would vote +1/-1.

Ramy
[1] http://38.111.159.9:8080/job/Coraid_CI/35/console
[2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/043876.html


-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, September 05, 2014 7:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Infra][Cinder] Coraid CI system

+1 from me (Cinder core)

On 5 September 2014 15:09, Mykola Grygoriev mgrygor...@mirantis.com wrote:
 Hi,

 My name is Mykola Grygoriev and I'm engineer who currently working on 
 deploying 3d party CI for Сoraid Сinder driver.

 Following instructions on

 http://ci.openstack.org/third_party.html#requesting-a-service-account

 asking for adding gerrit CI account (coraid-ci) to the Voting 
 Third-Party CI Gerrit group.



 We have already added description of Coraid CI system to wiki page - 
 https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI

 We used openstack-dev/sandbox project to test current CI 
 infrastructure with OpenStack Gerrit system. Please find our history there.

 Please have a look to results of Coraid CI system. it currently takes 
 updates from openstack/cinder project:
 http://38.111.159.9:8080/job/Coraid_CI/32/
 http://38.111.159.9:8080/job/Coraid_CI/33/

 Thank you in advance.

 --
 Best regards,
 Mykola Grygoriev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Daniel P. Berrange
On Fri, Sep 05, 2014 at 04:20:14PM +0100, John Garbutt wrote:
 On 5 September 2014 13:59, Nikola Đipanov ndipa...@redhat.com wrote:
  Since this did not get an 'Approved' as of yet, I want to make sure that
  this is not because the number of sponsors. 2 core members have already
  sponsored it, and as per [1] cores can sponsor their own FFEs so that's 3.
 
 While I am no fan of that idea, this was already in the gate, so 2
 cores should be more than enough.
 
 Mikal has said I could approve FFEs in his absence, but given the
 conflict in the thread, I want to leave this approval to him :)
 
 I know its 10 patches, but I think we can try resolve the discussion
 on the thread, and look to approve this on Monday morning, and still
 make it before the deadline. Do shout up if that seems impossible
 and/or stupid.

If you want to delegate to Mikal for casting vote I think that
is not the end of the world. After all, few of us are going to
be seriously working over the weekend, so approving FFE tonight
vs monday morning austrailia time isn't a significant difference.
I assume you'll ping him to ensure he sees it on monday as a high
priority item.

As a general point though, I'm wary of letting the FFE proposal
be a place where we re-open the design discussions on any of the
FFE requests. These patches have been under active review and were
all approved to merge with no -2-severity objections. If it had
not been for the huge backlog this would all be in tree already
and we'd be focusing on testing it and ironing out any bugs we
identify, which is ultimately where our focus needs to be now.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][FFE] Requesting exception for Swift trust authentication blueprint

2014-09-05 Thread Thierry Carrez
Smaller review teams don't really need to line up core sponsors as much
as Nova does. As long as Sergey and myself are fine with it, you can go
for it. I'm +1 on this one becauise it's actually a security bug we need
to plug before release.

Trevor McKay wrote:
 Not sure how this is done, but I'm a core member for Sahara, and I
 hereby sponsor it.
 
 On Fri, 2014-09-05 at 09:57 -0400, Michael McCune wrote:
 hey folks,

 I am requesting an exception for the Swift trust authentication 
 blueprint[1]. This blueprint addresses a security bug in Sahara and 
 represents a significant move towards increased security for Sahara 
 clusters. There are several reviews underway[2] with 1 or 2 more starting 
 today or monday.

 This feature is initially implemented as optional and as such will have 
 minimal impact on current user deployments. By default it is disabled and 
 requires no additional configuration or management from the end user.

 My feeling is that there has been vigorous debate and discussion surrounding 
 the implementation of this blueprint and there is consensus among the team 
 that these changes are needed. The code reviews for the bulk of the work 
 have been positive thus far and I have confidence these patches will be 
 accepted within the next week.

 thanks for considering this exception,
 mike


 [1]: 
 https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-authentication
 [2]: 
 https://review.openstack.org/#/q/status:open+topic:bp/edp-swift-trust-authentication,n,z

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] default allow security group

2014-09-05 Thread Miguel Angel Ajo Pelayo

I believe your request matches this, and I agree
it'd be something good

https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group

And also, the fact that we have hardcoded default 
security group settings. It would be good to have 
a system wide default security group settings.

https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L122





- Original Message -
 Hi!
 
 I've decided that as I have problems with OpenStack while using it in
 the service of Infra, I'm going to just start spamming the list.
 
 Please make something like this:
 
 neutron security-group-create default --allow-every-damn-thing
 
 Right now, to make security groups get the hell out of our way because
 they do not provide us any value because we manage our own iptables, it
 takes adding something like 20 rules.
 
 15:24:05  clarkb | one each for ingress and egress udp tcp over
 ipv4 then ipv6 and finaly icmp
 
 That may be great for someone using my-first-server-pony, but for me, I
 know how the internet works, and when I ask for a server, I want it to
 just work.
 
 Now, I know, I know - the DEPLOYER can make decisions blah blah blah.
 
 BS
 
 If OpenStack is going to let my deployer make the absolutely assinine
 decision that all of my network traffic should be blocked by default, it
 should give me, the USER, a get out of jail free card.
 
 kthxbai
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][feature freeze exception] Proposal for using Launcher/ProcessLauncher for launching services

2014-09-05 Thread Anne Gentle
On Wed, Sep 3, 2014 at 2:07 PM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi  Erno,

 I agree that we must document what all config parameters will be reloaded
 after SIGHUP signal is processed, that's the reason why we have added
 DocImpact tag to patch https://review.openstack.org/#/c/117988/. We will
 test what parameters are reloaded and report them to the Doc team.



The use of a DocImpact flag does not mean a doc person will take the
assignment. The Glance team takes responsibility for any documentation
needed for changes.

If it really is just configuration options that change, we have an
automation script that generates new, changed, and deprecated configuration
options and pulls out the descriptions from the code itself. However you
may want more description about operating the Image Service in this way so
an additional section would help people understand.

From reading the code comments it looks like another approach is preferred,
so my describing the doc process is just so that others on the list know
how DocImpact works.

Anne


  *Our use case:*


  We want to use SIGHUP signal to reload filesystem store related config
 parameters namely filesystem_store_datadir and
 filesystem_store_datadirs which are very crucial in the production
 environment especially for people using NFS. In case, the filesystem is
 approaching at full capacity, administrator can add more storage and
 configured it via the above parameters which will be taken into effect upon
 sending SIGHUP signal. Secondly, most of the OpenStack services uses
 service framework and it does handle reloading of configuration files via
 SIGHUP signal which glance cannot without this patch.


  Thanks  Regards,


  Abhishek Kekane

  --
 *From:* Kekane, Abhishek
 *Sent:* Wednesday, September 03, 2014 9:39 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* RE: [openstack-dev] [glance][feature freeze exception]
 Proposal for using Launcher/ProcessLauncher for launching services

   Hi All,



 Please give your support me for applying the  freeze exception for using
 oslo-incubator service framework in glance, based on the following
 blueprint:



 https://blueprints.launchpad.net/glance/+spec/use-common-service-framework



 I have ensured that after making these changes everything is working
 smoothly.



 I have done the functional testing for following three scenarios:

 1.   Enabled SSL and checked requests are processed by the Api
 service before and after SIGHUP signal

 2.   Disabled SSL and  checked requests are processed by the Api
 service before and after SIGHUP signal

 3.   I have also ensured reloading of the parameters like
 ilesystem_store_datadir, filesystem_store_datadirs are  working effectively
 after sending the SIGHUP signal.



 To test 1st and 2nd I have created a python script which will send
 multiple requests to glance at a time and added a chron job to send a
 SIGHUP signal to the parent process.

 I have tested above script for 1 hour and confirmed every request has been
 processed successfully.



 Please consider this feature to be a part of Juno release.







 Thanks  Regards,



 Abhishek Kekane





 *From:* Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
 *Sent:* 02 September 2014 19:11
 *To:* OpenStack Development Mailing List (
 openstack-dev@lists.openstack.org)
 *Subject:* [openstack-dev] [glance][feature freeze exception] Proposal
 for using Launcher/ProcessLauncher for launching services



 Hi All,



 I'd like to ask for a feature freeze exception for using oslo-incubator
 service framework in glance, based on the following blueprint:



 https://blueprints.launchpad.net/glance/+spec/use-common-service-framework





 The code to implement this feature is under review at present.



 1. Sync oslo-incubator service module in glance:
 https://review.openstack.org/#/c/117135/2

 2. Use Launcher/ProcessLauncher in glance:
 https://review.openstack.org/#/c/117988/





 If we have this feature in glance then we can able to use features like
 reload glance configuration file without restart, graceful shutdown etc.

 Also it will use common code like other OpenStack projects nova, keystone,
 cinder does.





 We are ready to address all the concerns of the community if they have any.





 Thanks  Regards,



 Abhishek Kekane


 __
 Disclaimer:This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding

 __
 Disclaimer:This email and any attachments are sent in 

Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-09-05 Thread Matthew Treinish
On Fri, Sep 05, 2014 at 09:42:17AM +1200, Steve Baker wrote:
 On 05/09/14 04:51, Matthew Treinish wrote:
  On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:
  On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:
  On 08/29/2014 05:15 PM, Zane Bitter wrote:
  On 29/08/14 14:27, Jay Pipes wrote:
  On 08/26/2014 10:14 AM, Zane Bitter wrote:
  Steve Baker has started the process of moving Heat tests out of the
  Tempest repository and into the Heat repository, and we're looking for
  some guidance on how they should be packaged in a consistent way.
  Apparently there are a few projects already packaging functional tests
  in the package projectname.tests.functional (alongside
  projectname.tests.unit for the unit tests).
 
  That strikes me as odd in our context, because while the unit tests run
  against the code in the package in which they are embedded, the
  functional tests run against some entirely different code - whatever
  OpenStack cloud you give it the auth URL and credentials for. So these
  tests run from the outside, just like their ancestors in Tempest do.
 
  There's all kinds of potential confusion here for users and packagers.
  None of it is fatal and all of it can be worked around, but if we
  refrain from doing the thing that makes zero conceptual sense then 
  there
  will be no problem to work around :)
 
  I suspect from reading the previous thread about In-tree functional
  test vision that we may actually be dealing with three categories of
  test here rather than two:
 
  * Unit tests that run against the package they are embedded in
  * Functional tests that run against the package they are embedded in
  * Integration tests that run against a specified cloud
 
  i.e. the tests we are now trying to add to Heat might be qualitatively
  different from the projectname.tests.functional suites that already
  exist in a few projects. Perhaps someone from Neutron and/or Swift can
  confirm?
 
  I'd like to propose that tests of the third type get their own 
  top-level
  package with a name of the form projectname-integrationtests (second
  choice: projectname-tempest on the principle that they're essentially
  plugins for Tempest). How would people feel about standardising that
  across OpenStack?
  By its nature, Heat is one of the only projects that would have
  integration tests of this nature. For Nova, there are some functional
  tests in nova/tests/integrated/ (yeah, badly named, I know) that are
  tests of the REST API endpoints and running service daemons (the things
  that are RPC endpoints), with a bunch of stuff faked out (like RPC
  comms, image services, authentication and the hypervisor layer itself).
  So, the integrated tests in Nova are really not testing integration
  with other projects, but rather integration of the subsystems and
  processes inside Nova.
 
  I'd support a policy that true integration tests -- tests that test the
  interaction between multiple real OpenStack service endpoints -- be left
  entirely to Tempest. Functional tests that test interaction between
  internal daemons and processes to a project should go into
  /$project/tests/functional/.
 
  For Heat, I believe tests that rely on faked-out other OpenStack
  services but stress the interaction between internal Heat
  daemons/processes should be in /heat/tests/functional/ and any tests the
  rely on working, real OpenStack service endpoints should be in Tempest.
  Well, the problem with that is that last time I checked there was
  exactly one Heat scenario test in Tempest because tempest-core doesn't
  have the bandwidth to merge all (any?) of the other ones folks submitted.
 
  So we're moving them to openstack/heat for the pure practical reason
  that it's the only way to get test coverage at all, rather than concerns
  about overloading the gate or theories about the best venue for
  cross-project integration testing.
  Hmm, speaking of passive aggressivity...
 
  Where can I see a discussion of the Heat integration tests with Tempest QA
  folks? If you give me some background on what efforts have been made 
  already
  and what is remaining to be reviewed/merged/worked on, then I can try to 
  get
  some resources dedicated to helping here.
  We recieved some fairly strong criticism from sdague[1] earlier this year,
  at which point we were  already actively working on improving test coverage
  by writing new tests for tempest.
 
  Since then, several folks, myself included, commited very significant
  amounts of additional effort to writing more tests for tempest, with some
  success.
 
  Ultimately the review latency and overhead involved in constantly rebasing
  changes between infrequent reviews has resulted in slow progress and
  significant frustration for those attempting to contribute new test cases.
 
  It's been clear for a while that tempest-core have significant bandwidth
  issues, as well as not necessarily always having the specific domain
  expertise to 

Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Chris Dent

On Fri, 5 Sep 2014, Monty Taylor wrote:

The tl;dr is it's like tox, except it uses docker instead of virtualenv - 
which means we can express all of our requirements, not just pip ones.


Oh thank god[1].

Seriously.

jogo started a thread about what matters for kilo and I was going
to respond with (amongst other things) get containers into the testing
scene. Seems you're way ahead of me. Docker's caching could be a
_huge_ win here.

[1] https://www.youtube.com/watch?v=om5rbtudzrg

across the project. Luckily, docker itself does an EXCELLENT job at handling 
caching and reuse - so I think we can have a set of containers that something 
in infra (waves hands) publishes to dockerhub, like:


 infra/py27
 infra/py26


I'm assuming these would get rebuilt regularly (every time global
requirements and friends are updated) on some sort of automated
hook?

Thoughts? Anybody wanna hack on it with me? I think it could wind up being a 
pretty useful tool for folks outside of OpenStack too if we get it right.


Given availability (currently an unknown) I'd like to help with this.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Sean Dague
On 09/05/2014 12:05 PM, Chris Dent wrote:
 On Fri, 5 Sep 2014, Monty Taylor wrote:
 
 The tl;dr is it's like tox, except it uses docker instead of
 virtualenv - which means we can express all of our requirements, not
 just pip ones.
 
 Oh thank god[1].
 
 Seriously.
 
 jogo started a thread about what matters for kilo and I was going
 to respond with (amongst other things) get containers into the testing
 scene. Seems you're way ahead of me. Docker's caching could be a
 _huge_ win here.
 
 [1] https://www.youtube.com/watch?v=om5rbtudzrg
 
 across the project. Luckily, docker itself does an EXCELLENT job at
 handling caching and reuse - so I think we can have a set of
 containers that something in infra (waves hands) publishes to
 dockerhub, like:

  infra/py27
  infra/py26
 
 I'm assuming these would get rebuilt regularly (every time global
 requirements and friends are updated) on some sort of automated
 hook?
 
 Thoughts? Anybody wanna hack on it with me? I think it could wind up
 being a pretty useful tool for folks outside of OpenStack too if we
 get it right.
 
 Given availability (currently an unknown) I'd like to help with this.

I think all this is very cool, I'd say if we're going to put it in
gerrit do stackforge instead of openstack/ namespace, because it will be
useful beyond us.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] default allow security group

2014-09-05 Thread Dean Troyer
On Fri, Sep 5, 2014 at 10:27 AM, Monty Taylor mord...@inaugust.com wrote:

 I've decided that as I have problems with OpenStack while using it in the
 service of Infra, I'm going to just start spamming the list.


User CLI/API feedback!


 neutron security-group-create default --allow-every-damn-thing


You mean like this?  https://review.openstack.org/#/c/119407/

dt

*Disclaimer: For demonstration purposes on nova-network only; the views
expressed here may not be those of the OpenStack Foundation, it's member
companies or lackeys; in case of duplicates, ties will be awarded; your
mileage may vary; allow 4 to 6 weeks for delivery; any resemblance to
functional code, living or dead, is unintentional and purely coincidental;
representations of this code may be freely reused without the express
written consent of the Commissioner of the National Football League.

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FFE] [nova] Barbican key manager wrapper

2014-09-05 Thread Coffman, Joel M.
-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: Friday, September 05, 2014 8:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [FFE] [nova] Barbican key manager wrapper



On 09/05/2014 08:11 AM, Sean Dague wrote:

 On 09/05/2014 07:51 AM, Daniel P. Berrange wrote:

 On Thu, Sep 04, 2014 at 05:19:45PM +, Coffman, Joel M. wrote:

 We request a feature freeze exception be granted to merge this code [3], 
 which is really a shim between the existing key manager interface in Nova 
 and python-barbicanclient, into Nova [4]. The acceptance of this feature 
 will improve the security of cloud users and operators who use the Cinder 
 volume encryption feature [1], which is currently limited to a single, 
 static encryption key for volumes. Cinder has already merged a similar 
 feature [5] following the review of several patch revisions; not accepting 
 the feature in Nova creates a disparity with Cinder in regards to the 
 management of encryption keys.

[snip]



There is a real issue in the current patch which I -1ed on Wed around the way 
requirements are pulled in.



If you are in FFE there really is an expectation that patches are respun 
quickly on feedback. So if this isn't addressed shortly, I'm removing my 
sponsorship here.



That feedback has been addressed - sorry for the delay.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Day, Phil
The corresponding Tempest change is also ready to roll (thanks to Ken'inci):  
https://review.openstack.org/#/c/112474/1   so its kind of just a question of 
getting the sequence right.

Phil


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 05 September 2014 17:05
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] FFE server-group-quotas
 
 On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
  2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
  Hi,
 
  I'd like to ask for a FFE for the 3 patchsets that implement quotas for
 server groups.
 
  Server groups (which landed in Icehouse) provides a really useful anti-
 affinity filter for scheduling that a lot of customers woudl like to use, but
 without some form of quota control to limit the amount of anti-affinity its
 impossible to enable it as a feature in a public cloud.
 
  The code itself is pretty simple - the number of files touched is a side-
 effect of having three V2 APIs that report quota information and the need to
 protect the change in V2 via yet another extension.
 
  https://review.openstack.org/#/c/104957/
  https://review.openstack.org/#/c/116073/
  https://review.openstack.org/#/c/116079/
 
  I am happy to sponsor this work.
 
  Thanks
  Ken'ichi ohmichi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 These look like they are also all blocked by Tempest because it's changing
 return chunks. How does one propose to resolve that, as I don't think there
 is an agreed path up there for to get this into a passing state from my 
 reading
 of the reviews.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Dugger, Donald D
Well, I and I believe a few others feel a slightly higher sense of urgency 
about splitting out the scheduler but I don't want to hijack this thread for 
that debate.  Fair warning, I intend to start a new thread where we can talk 
specifically about the scheduler split, I'm afraid we're in the situation where 
we're all in agreement but everyone has a different view of what that agreement 
is.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, September 5, 2014 8:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out 
virt drivers

On 09/05/2014 06:29 AM, John Garbutt wrote:
 Scheduler: I think we need to split out the scheduler with a similar 
 level of urgency. We keep blocking features on the split, because we 
 know we don't have the review bandwidth to deal with them. Right now I 
 am talking about a compute related scheduler in the compute program, 
 that might evolve to worry about other services at a later date.

-1

Without first cleaning up the interfaces around resource tracking, claim 
creation and processing, and the communication interfaces between the 
nova-conductor, nova-scheduler, and nova-compute.

I see no urgency at all in splitting out the scheduler. The cleanup of the 
interfaces around the resource tracker and scheduler has great priority, 
though, IMO.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslotest 1.1.0.0a2 released

2014-09-05 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslotest 1.1.0.0a2, the 
latest version of the oslo library with test base classes and fixtures.

This release includes:

* Add fixture for mock.patch.multiple
* Ensure that mock.patch.stopall is called last
* Remove differences between Python 2.x and 3.x versions
* Require six
* Add documentation for running oslo_debug_heler.sh
* Restructure oslotest docs
* Add pdb support to tox with debug helper shell script
* Updated from global requirements
* Cleaning up index.rst file
* Add known issue about time.time mocking
* Updated from global requirements
* Add API documentation
* Moving to use the mock module found in Python3

Of particular importance are the Python 2/3 changes related to making oslotest 
usable by projects that gate on python 3. We are now packaging version 
independent universal wheels and installing mox3 but not mox by default. If 
your project uses oslotest and mox, you may need to update your 
test-requirements.txt to ensure mox is installed. Alternately, you can use the 
fixtures in oslotest, which now use mox3 instead.

If you encounter problems with this release today, please report them in 
#openstack-oslo for quick turn-around.

After today, please file bugs in the launchpad bug tracker: 
https://bugs.launchpad.net/oslotest

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Dean Troyer
On Fri, Sep 5, 2014 at 4:27 AM, Thierry Carrez thie...@openstack.org
wrote:

 Tim Bell wrote:
  The one concern I have with a small core is that there is not an easy
 way to assess the maturity of a project on stackforge. The stackforge
 projects may be missing packaging, Red Hat testing, puppet modules,
 install/admin documentation etc. Thus, I need to have some indication that
 a project is deployable before looking at it with my user community to see
 if it meets a need that is sustainable.
 
  Do you see the optional layer services being blessed / validated in
 some way and therefore being easy to identify ?

 Yes, I think whatever exact shape this takes, it should convey some
 assertion of stability to be able to distinguish itself from random
 projects. Some way of saying this is good and mature, even if it's not
 in the inner circle.

 Being in The integrated release has been seen as a sign of stability
 forever, while it was only ensuring integration with other projects
 and OpenStack processes. We are getting better at requiring maturity
 there, but if we set up layers, we'll have to get even better at that.


The layers are (or originally were) a purely technical organization,
intentionally avoiding association with defcore and other groupings, and on
reflection, maturity too.  The problem that repeatedly bubbles up is that
people (mostly outside the community) want a simple tag for maturity or
blessedness and have been using the integrated/incubated status for that.
 Maturity has nothing to do with technical relationships between projects
(required/optional layers).

The good and mature blessing should be an independent attribute that is
set on projects as a result of nomination by TC members or PTLs or existing
core members or whatever trusted group we choose.  I'd say for starters
that anything from Stackforge that we use in integrated/incubated projects
is on the short list for that status, as it already has that implicitly by
our use.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-09-05 Thread David Kranz

On 09/05/2014 12:10 PM, Matthew Treinish wrote:

On Fri, Sep 05, 2014 at 09:42:17AM +1200, Steve Baker wrote:

On 05/09/14 04:51, Matthew Treinish wrote:

On Thu, Sep 04, 2014 at 04:32:53PM +0100, Steven Hardy wrote:

On Thu, Sep 04, 2014 at 10:45:59AM -0400, Jay Pipes wrote:

On 08/29/2014 05:15 PM, Zane Bitter wrote:

On 29/08/14 14:27, Jay Pipes wrote:

On 08/26/2014 10:14 AM, Zane Bitter wrote:

Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking for
some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional tests
in the package projectname.tests.functional (alongside
projectname.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests run
against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So these
tests run from the outside, just like their ancestors in Tempest do.

There's all kinds of potential confusion here for users and packagers.
None of it is fatal and all of it can be worked around, but if we
refrain from doing the thing that makes zero conceptual sense then there
will be no problem to work around :)

I suspect from reading the previous thread about In-tree functional
test vision that we may actually be dealing with three categories of
test here rather than two:

* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be qualitatively
different from the projectname.tests.functional suites that already
exist in a few projects. Perhaps someone from Neutron and/or Swift can
confirm?

I'd like to propose that tests of the third type get their own top-level
package with a name of the form projectname-integrationtests (second
choice: projectname-tempest on the principle that they're essentially
plugins for Tempest). How would people feel about standardising that
across OpenStack?

By its nature, Heat is one of the only projects that would have
integration tests of this nature. For Nova, there are some functional
tests in nova/tests/integrated/ (yeah, badly named, I know) that are
tests of the REST API endpoints and running service daemons (the things
that are RPC endpoints), with a bunch of stuff faked out (like RPC
comms, image services, authentication and the hypervisor layer itself).
So, the integrated tests in Nova are really not testing integration
with other projects, but rather integration of the subsystems and
processes inside Nova.

I'd support a policy that true integration tests -- tests that test the
interaction between multiple real OpenStack service endpoints -- be left
entirely to Tempest. Functional tests that test interaction between
internal daemons and processes to a project should go into
/$project/tests/functional/.

For Heat, I believe tests that rely on faked-out other OpenStack
services but stress the interaction between internal Heat
daemons/processes should be in /heat/tests/functional/ and any tests the
rely on working, real OpenStack service endpoints should be in Tempest.

Well, the problem with that is that last time I checked there was
exactly one Heat scenario test in Tempest because tempest-core doesn't
have the bandwidth to merge all (any?) of the other ones folks submitted.

So we're moving them to openstack/heat for the pure practical reason
that it's the only way to get test coverage at all, rather than concerns
about overloading the gate or theories about the best venue for
cross-project integration testing.

Hmm, speaking of passive aggressivity...

Where can I see a discussion of the Heat integration tests with Tempest QA
folks? If you give me some background on what efforts have been made already
and what is remaining to be reviewed/merged/worked on, then I can try to get
some resources dedicated to helping here.

We recieved some fairly strong criticism from sdague[1] earlier this year,
at which point we were  already actively working on improving test coverage
by writing new tests for tempest.

Since then, several folks, myself included, commited very significant
amounts of additional effort to writing more tests for tempest, with some
success.

Ultimately the review latency and overhead involved in constantly rebasing
changes between infrequent reviews has resulted in slow progress and
significant frustration for those attempting to contribute new test cases.

It's been clear for a while that tempest-core have significant bandwidth
issues, as well as not necessarily always having the specific domain
expertise to thoroughly review some tests related to project-specific
behavior or functionality.

So I view this as actually a breakdown 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Chris Friesen

On 09/05/2014 03:52 AM, Daniel P. Berrange wrote:



So my biggest fear with a model where each team had their own full
Nova tree and did large pull requests, is that we'd suffer major
pain during the merging of large pull requests, especially if any
of the merges touched common code. It could make the pull requests
take a really long time to get accepted into the primary repo.

By constrast with split out git repos per virt driver code, we will
only ever have 1 stage of code review for each patch. Changes to
common code would go straight to main nova common repo and so get
reviewed by the experts there without delay, avoiding the 2nd stage
of review from merge requests.


Why treat things differently?  It seems to me that even in the first 
scenario you could still send common code changes straight to the main 
nova repo.  Then the pulls from the virt repo would literally only touch 
the virt code in the common repo.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][FFE] glance_store switch-over and random access to image data

2014-09-05 Thread Mark Washenberger
I'm +1 on the FFE for both of these branches.


On Fri, Sep 5, 2014 at 8:51 AM, Flavio Percoco fla...@redhat.com wrote:

 On 09/05/2014 05:20 PM, Thierry Carrez wrote:
  Flavio Percoco wrote:
  Greetings,
 
  I'd like to request a FFE for 2 features I've been working on during
  Juno which, unfortunately, haven been delayed for different reasons
  during this time.
  [...]
 
  I would be inclined to give both a chance, but they really need to merge
  quickly, and the current Glance review velocity is not exactly feeding
  my hopes. +0 as far as I'm concerned, and definitely -1 if it takes more
  than one week.
 

 Agreed.

 Both patches are passing all tests. They just need to be reviewed.

 Flavio

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Stable Havana 2013.2.4 preparation

2014-09-05 Thread Alan Pevec
Hi all,

as planned[1] stable-maint team is going to prepare the final stable
Havana release 2013.2.4, now that Juno-3 was released.
Proposed release date is Sep 18, with the code freeze on stable/havana
branches week before on Sep 11.
Stable-maint members: please review open backports [2] taking into
account the fact this is the final release.
Potentially risky changes should be reviewed very closely since there
won't be another release to fix possible regressions.

Cheers,
Alan

[1] 
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fhavana_releases
design summit notes: https://etherpad.openstack.org/p/StableIcehouse

[2] 
https://review.openstack.org/#/q/status:open+AND+branch:stable/havana+AND+(project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer),n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Request for J3 Feature Freeze Exception

2014-09-05 Thread David Pineau
Hello dear cinder stackers,

Recently, the feature freeze was acted for J3, while I was frantically
trying to tighten the feedback loop on the driver I was working on for
my company, Scality, in hope of getting merged for J3.

I felt I was really close, a lot of reviews coming in the last few
days, and dealing with the timelag for the feedback, as I am under the
impression that a lot of the cinder team is in the US (I am in west
europe)

So I asked Duncan what could be done, learned about the FFE, and I am
now humbly asking you guys to give us a last chance to get in for
Juno. I was told that if it was possible the last delay would be next
week, and believe me, we're doing everything we can on our side to be
able to meet that.

I had a patch ready to go in answer to the recent reviews I got, but
wanted to get driver cert log okay before I put if up for reviews. The
situation being what it is, I put that up, and am working on the
driver cert, solving last-minute issues.

I'll understand if that is not possible, and I thank you for your
understanding !

-- 
David Pineau, Developer at Scality
IRC handle: joa
Gerrit handle: Joachim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Status update on the python-openstacksdk project

2014-09-05 Thread Doug Hellmann

On Sep 5, 2014, at 10:21 AM, Brian Curtin br...@python.org wrote:

 Hi all,
 
 Between recent IRC meetings and the mid-cycle operators meetup, we've
 heard things ranging from is the SDK project still around to I
 can't wait for this. I'm Brian Curtin from Rackspace and I'd like to
 tell you what the python-openstacksdk [0][1] project has been up to
 lately.
 
 After initial discussions, meetings [2], and a coordination session in
 Atlanta, a group of us decided to kick off a project to offer a
 complete Software Development Kit for those creating and building on
 top of OpenStack. This project aims to offer a one-stop-shop to
 interact with all of the parts of an OpenStack cloud, either writing
 code against a consistent set of APIs, or by using command line tools
 implemented on those APIs [3], with concise documentation and examples
 that end-users can leverage.
 
 From a vendor perspective, it doesn't make sense for all of us to have
 our own SDKs written against the same APIs. Additionally, every
 service having their own client/CLI presents a fragmented view to
 consumers and introduces difficulties once users move beyond
 involvement with one or two services. Beyond the varying dependencies
 and the sheer number of moving parts involved, user experience is not
 as welcoming and great as it should be.
 
 We first built transport and session layers based on python-requests
 and Jamie Lennox's Keystone client authentication plugins (minus
 compatibility cruft). The service resources are represented in a base
 resource class, and we've implemented resources for interacting with
 Identity, Object-Store, Compute, Image, Database, Network, and
 Orchestration APIs. Expanding or adding support for new services is
 straightforward, but we're thinking about the rest of the picture
 before building out too far.
 
 This resource layer may be slightly raw if you're looking at it as a
 consumer, and not likely what you'd want in a full scale application.
 Now that we have these resources exposed to work with, we're looking
 upward to think about how an end-user would want to interact with a
 service. We're also moving downward and looking at what we want to
 provide to command line interfaces, such as easier access to the
 JSON/dicts (as prodded by Dean :).
 
 Overall, we're moving along nicely. While we're thinking about these
 high-level/end-user views, I'd love to know if anyone has any thoughts
 there. For example, what would the ideal interface to your favorite
 service look like? As things are hacked out, we'll share them and
 gather as much input as we can from this community as well as the
 users.
 
 If you're interested in getting involved or have any questions or
 comments, we meet on Tuesdays at 1900 UTC in #openstack-meeting-3, and
 all of us hang out in #openstack-sdks on Freenode.
 
 As for who's involved, we're on stackalytics [4], but recently it has
 been Terry Howe (HP), Jamie Lennox (Red Hat), Dean Troyer (Nebula),
 Steve Lewis (Rackspace), and myself.
 
 Thanks for your time
 
 
 [0] https://github.com/stackforge/python-openstacksdk
 [1] https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK
 [2] http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/
 [3] OpenStackClient is planning to switch to using the Python SDK
 after the interfaces have stabilized.
 [4] 
 http://stackalytics.com/?project_type=stackforgemodule=python-openstacksdkrelease=all

I’m glad to hear the project is still moving along — I didn’t mean to give the 
impression that it might not be, just that I hadn’t been able to follow as 
closely as I’d liked so I wasn’t sure where things stood. I hope to be more 
involved for Kilo.

Doug

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] StackTach.v3 - Screencasts ...

2014-09-05 Thread Sandy Walsh
For those of you playing the home game ... just added four new screencasts to 
the StackTach.v3 playlist. 

These are technical deep dives into the code added over the last week or so, 
with demos. 
For the more complex topics I spend a little time on the background and 
rationale. 

StackTach.v3: Stream debugging  (24:22)
StackTach.v3: Idempotent pipeline processing and debugging (12:16)
StackTach.v3: Quincy  Quince - the REST API (22:56)
StackTach.v3: Klugman the versioned cmdline tool for Quincy (8:46)

https://www.youtube.com/playlist?list=PLmyM48VxCGaW5pPdyFNWCuwVT1bCBV5p3

Please add any comments to the video and I'll try to address them there. 

Next ... the move to StackForge!

Have a great weekend!
-S
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Stephen Balukoff
Hi German,

Thanks for your reply! My responses are in-line below, and of course you
should feel free to counter my counter-points. :)

For anyone else paying attention and interested in expressing a voice here,
we'll probably be voting on this subject at next week's Octavia meeting.


On Thu, Sep 4, 2014 at 9:13 PM, Eichberger, German german.eichber...@hp.com
 wrote:

  Hi,



 Stephen visited us today (the joy of spending some days in SeattleJ) and
 we discussed that  further (and sorry for using VM – not sure what won):


Looks like Amphora won, so I'll start using that terminology below.


  1.   We will only support one driver per controller, e.g. if you
 upgrade a driver you deploy a new controller with the new driver and either
 make him take over existing VMs (minor change) or spin  up new ones (major
 change) but keep the “old” controller in place until it doesn’t serve any
 VMs any longer

Why? I agree with the idea of one back-end type per driver, but why
shouldn't we support more than one driver per controller?

I agree that you probably only want to support one version of each driver
per controller, but it seems to me it shouldn't be that difficult to write
a driver that knows how to speak different versions of back-end amphorae.
Off the top of my head I can think of two ways of doing this:

1. For each new feature or bugfix added, keep track of the minimal version
of the amphora required to use that feature/bugfix. Then, when building
your configuration, as various features are activated in the configuration,
keep a running track of the minimal amphora version required to meet that
configuration. If the configuration version is higher than the version of
the amphora you're going to update, you can pre-emptively return an error
detailing an unsupported configuration due to the back-end amphora being
too old. (What you do with this error-- fail, recycle the amphora,
whatever-- is up to the operator's policy at this point, though I would
probably recommend just recycling the amphora.) If a given user's
configuration never makes use of advanced features later on, there's no
rush to upgrade their amphoras, and new controllers can push configs that
work with the old amphoras indefinitely.

2. If the above sounds too complicated, you can forego that and simply
build the config, try to push it to the amphora, and see if you get an
error returned.  If you do, depending on the nature of the error you may
decide to recycle the amphora or take other actions. As there should never
be a case where you deploy a controller that generates configs with
features that no amphora image can satisfy, re-deploying the amphora with
the latest image should correct this problem.

There are probably other ways to manage this that I'm not thinking of as
well-- these are just the two that occurred to me immediately.

Also, your statement above implies some process around controller upgrades
which hasn't been actually decided yet. It may be that we recommend a
different upgrade path for controllers.


  2.   If we render configuration files on the VM we only support one
 upgrade model (replacing the VM) which might simplify development as
 opposed to the driver model where we need to write code to push out
 configuration changes to all VMs for minor changes + write code to failover
 VMs for major changes

So, you're saying it's a *good* thing that you're forced into upgrading all
your amphoras for even minor changes because having only one upgrade path
should make the code simpler.

For large deployments, I heartily disagree.


  3.   I am afraid that half baked drivers will break the controller
 and I feel it’s easier to shoot VMs with half baked renderers  than the
 controllers.


I defer to Doug's statement on this, and will add the following:

Breaking a controller temporarily does not cause a visible service
interruption for end-users. Amphorae keep processing load-balancer
requests. All it means is that tenants can't make changes to existing load
balanced services until the controllers are repaired.

But blowing away an amphora does create a visible service interruption for
end-users. This is especially bad if you don't notice this until after
you've gone through and updated your fleet of 10,000+ amphorae because your
upgrade process requires you to do so.

Given the choice of scrambling to repair a few hundred broken controllers
while almost all end-users are oblivious to the problem, or scrambling to
repair 10's of thousands of amphorae while service stops for almost all
end-users, I'll take the former.  (The former is a relatively minor note on
a service status page. The latter is an article about your cloud outage on
major tech blogs and a damage-control press-release from your PR
department.)


  4.   The main advantage by using an Octavia format to talk to VMs is
 that we can mix and match VMs with different properties (e.g. nginx,
 haproxy) on the same controller because the implementation 

[openstack-dev] [rally] Parallel scenarios

2014-09-05 Thread Ajay Kalambur (akalambu)
Hi
In Rally is there a way to run parallel scenarios?
I know the scenario can support concurrency but I am talking about running 
multiple such scenarios themselves in parallel
Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Russell Bryant
On 09/05/2014 10:06 AM, Jay Pipes wrote:
 On 09/05/2014 06:29 AM, John Garbutt wrote:
 Scheduler: I think we need to split out the scheduler with a similar
 level of urgency. We keep blocking features on the split, because we
 know we don't have the review bandwidth to deal with them. Right now I
 am talking about a compute related scheduler in the compute program,
 that might evolve to worry about other services at a later date.
 
 -1
 
 Without first cleaning up the interfaces around resource tracking, claim
 creation and processing, and the communication interfaces between the
 nova-conductor, nova-scheduler, and nova-compute.
 
 I see no urgency at all in splitting out the scheduler. The cleanup of
 the interfaces around the resource tracker and scheduler has great
 priority, though, IMO.

I'd just reframe things ... I'd like the work you're referring to here
be treated as an obvious key pre-requisite to a split, and this cleanup
is what should be treated with urgency by those with a vested interest
in getting more autonomy around scheduler development.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Election Season, PTL and TC September-October 2014

2014-09-05 Thread Anita Kuno
PTL Election details:
https://wiki.openstack.org/wiki/PTL_Elections_September/October_2014

TC Election details:
https://wiki.openstack.org/wiki/TC_Elections_October_2014

Please read the stipulations and timelines for candidates and electorate
contained in these wikipages.

There will be an announcement email opening nominations as well as an
announcement email opening the polls.

Be aware, in the PTL elections if the program only has one candidate,
that candidate is acclaimed and there will be no poll. There will only
be a poll if there is more than one candidate stepping forward for a
program's PTL position.

There was a new governance resolution pertaining to election
expectations, be sure you are informed of the information contained
therein:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20140711-election-activities.rst

There will be further announcements posted to the openstack-dev mailing
list as action is required from the electorate or candidates. This email
is for information purposes only.

If you have any questions which you feel affect others please reply to
this email thread. If you have any questions that you wish to discuss in
private please email both myself Anita Kuno (anteaya) email:
ante...@anteaya.info and Tristan Cacqueray (tristanC) email:
tristan.cacque...@enovance.com so that we may address your concerns.

Please Note: There is a chance I may have to step aside from my duties
for personal reasons, should that happen Jeremy Stanley (irc:fungi
email:fu...@yuggoth.org) will take over my duties for the PTL election,
in co-ordination with Tristan and Thierry Carrez (irc: ttx
email:thie...@openstack.org) will perform my duties for the TC election.
The candidates and the electorate should experience no difference in the
execution of the duties should I need to step aside.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] VM/Container Naming Issue

2014-09-05 Thread Jorge Miramontes
Hey guys,

I just noticed that Amphora won the vote. I have several issues with
this.

1) Amphora wasn't in the first list of items to vote on. I'm confused as
to how it ended up in the final round. The fact that it did makes me
feel like the first round of votes were totally disregarded.

2) The first vote was on Wednesday. The final vote was due less that 24
hours after that vote. This did not give me enough time to vote as I was
rather busy and I'm sure other were as well. This is more of a minor
point, however, as I realize we can't wait for the world to vote otherwise
we would get nowhere. The bigger issue is that I was able to vote in the
first round and was fine with the top 5 first round items going to the
final round. As far as I know amphora wasn't in the first round.

3) The word amphora is a very specific type of physical container used in
Greco-Roman times to store a variety of things such as water, wine, and
grain (yay 8th grade classical heritage!). This makes no sense for what we
are trying to name other than the fact that it relates to a container. In
my mind, the words vase and jug should have also been added to the final
round if that's the precedent we want to set. If amphora were in the first
round I would be okay if majority won. I just feel the democratic process
was not followed the way it should have been.

All this said, I don't want to stifle progress so I don't necessarily want
a re-vote. I just want to make sure we are not setting a bad precedent for
voting on future items.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] For people attending the mid-cycle policy summit

2014-09-05 Thread Sean Roberts
We have a couple of requests for all of you planning to attend the 18-19th 
September Policy Mid-cycle summit.  

1. We’re planning on starting with a series of talks describing the state of 
policy (current and possibly future) in different projects.  We've confirmed 
people for talks on the following projects.

Nova
Neutron
Congress 

Are there any other projects interested in giving a talk?  It could just be a 
chalk-talk (on the whiteboard), if that makes it easier.

2. We’re planning to use the talks as level-setting for a discussion/workshop 
on how all our policy efforts might interoperate to better serve OpenStack 
users.  We’d like to drive that discussion by working through one or more use 
cases that require us to all think about OpenStack policy from a holistic point 
of view.  

Examples of the kinds of questions we envision trying to answer:

How would the OpenStack users communicate their policies to OpenStack?  
Can OpenStack always enforce policies?  What about monitoring?  Auditing?
What is the workflow for how OpenStack takes the policies and 
implements/renders them?
What happens if there are conflicts between users?  How are those conflicts 
surfaced/resolved?
What gaps are there?  How do we plug them?  What’s the roadmap?
Below is the start of a use case that we think will do the trick.  Let’s work 
together to refine it over email before the summit, so we can hit the ground 
running.  Please reply (to all) with suggestions/alternatives/etc.

a) Application-developer: My 2-tier PCI app (database tier and web tier) can be 
deployed either for production or for development.  

When deployed for production, it needs 

solid-state storage for the DB tier
all ports but 80 closed on the web tier
no network communication to DB tier except from the web tier
no VM in the DB tier can be deployed on the same hypervisor as another VM in 
the DB tier; same for the web tier
b) Cloud operator.  

Applications deployed for production must have access to the internet.
Applications deployed for production must not be deployed in the DMZ cluster.
Applications deployed for production should scale based on load.
Applications deployed for development should have 1 VM instance per tier.
Every application must use VM images signed by an administrator
 c) Compliance officer

No VM from a PCI app may be located on the same hypervisor as a VM from a 
non-PCI app.

~ sean

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.0.0: A console interface to Gerrit

2014-09-05 Thread Jeremy Stanley
On 2014-09-04 16:17:04 -0700 (-0700), James E. Blair wrote:
[...]
  * Your terminal is an actual terminal -- Gertty works just fine
in 80 columns, but it is also happy to spread out into hundreds
of columns for ideal side-by-side diffing.
 
  * Colors -- you think ANSI escape sequences are a neat idea.
[...]

Well, you already had me with 80 columns, but I actually
do think ANSI escape sequences are a neat idea.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Request for J3 FFE - NetApp: storage pools for scheduler

2014-09-05 Thread Alex Meade
Hi Cinder Folks,

I would like to request a FFE for cinder pools support with the NetApp
drivers[1][2].

We began working on this patch as soon as Winstons pool-aware scheduler
changes[3] became stable and this patch is ready now that Winstons code has
been merged. These changes have been well tested against our storage
controllers and are not very complex. I would appreciate any consideration
for an FFE.

Thanks,

-Alex Meade

[1]
https://blueprints.launchpad.net/cinder/+spec/pool-aware-cinder-scheduler-support-in-netapp-drivers


[2] https://review.openstack.org/#/c/119436/

[3] https://review.openstack.org/#/c/98715/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] VM/Container Naming Issue

2014-09-05 Thread Doug Wiegley
Hi Jorge,

That was totally my bad.  Since we had lukewarm to *no* consensus on the
original list, I had a late inspiration to see if people liked the idea of
using a Roman container name, given our project has a Roman name.
Everyone on IRC was in favor, though that could’ve also just been grasping
at any alternative.

Of the original list, in which people could vote for as many as they
wanted, the final runoff was between items receiving a score of ZERO or
-1.  Not exactly a mandate.

That said, the second runner-up was “appliance”, and I’m not that strongly
for amphora (or amp for short.)  The only thing I’m strongly for is
putting this to bed and writing some code.  If folks want another vote
over the weekend, that’s fine; just holler in IRC.

Thanks,
Doug


On 9/5/14, 1:17 PM, Jorge Miramontes jorge.miramon...@rackspace.com
wrote:

Hey guys,

I just noticed that Amphora won the vote. I have several issues with
this.

1) Amphora wasn't in the first list of items to vote on. I'm confused as
to how it ended up in the final round. The fact that it did makes me
feel like the first round of votes were totally disregarded.

2) The first vote was on Wednesday. The final vote was due less that 24
hours after that vote. This did not give me enough time to vote as I was
rather busy and I'm sure other were as well. This is more of a minor
point, however, as I realize we can't wait for the world to vote otherwise
we would get nowhere. The bigger issue is that I was able to vote in the
first round and was fine with the top 5 first round items going to the
final round. As far as I know amphora wasn't in the first round.

3) The word amphora is a very specific type of physical container used in
Greco-Roman times to store a variety of things such as water, wine, and
grain (yay 8th grade classical heritage!). This makes no sense for what we
are trying to name other than the fact that it relates to a container. In
my mind, the words vase and jug should have also been added to the final
round if that's the precedent we want to set. If amphora were in the first
round I would be okay if majority won. I just feel the democratic process
was not followed the way it should have been.

All this said, I don't want to stifle progress so I don't necessarily want
a re-vote. I just want to make sure we are not setting a bad precedent for
voting on future items.


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-05 Thread Tim Bell

If I take a recent case we had, we’re looking for ways to simplify backup for 
our users. They should not have to write crons to do snapshots but leave that 
to a service.

Raksha seems promising… a wiki page on openstack.org, code in github etc.

How can the average deployer know whether a stackforge is


a.  An early prototype which has completed (such as some of the early LBaaS 
packages)

b.  A project which has lost its initial steam and further investment is 
not foreseen

c.  A reliable project where there has not been a significant need to 
change recently but is a good long term bet

The end result is that deployers hold back waiting for an indication that this 
is more than a prototype in whatever form that would take. This means a missed 
opportunity as if something is interesting to many deployers, that could create 
the community needed to keep a project going.

The statement of “this is good and mature, even if it's not in the inner 
circle is exactly what is needed. When I offer something to my end-users, 
there is an implied promise of this (and a corresponding credibility issue if 
that is not met).

Tim



From: Dean Troyer [mailto:dtro...@gmail.com]
Sent: 05 September 2014 19:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the 
TC meeting

On Fri, Sep 5, 2014 at 4:27 AM, Thierry Carrez 
thie...@openstack.orgmailto:thie...@openstack.org wrote:
Tim Bell wrote:
 The one concern I have with a small core is that there is not an easy way to 
 assess the maturity of a project on stackforge. The stackforge projects may 
 be missing packaging, Red Hat testing, puppet modules, install/admin 
 documentation etc. Thus, I need to have some indication that a project is 
 deployable before looking at it with my user community to see if it meets a 
 need that is sustainable.

 Do you see the optional layer services being blessed / validated in some 
 way and therefore being easy to identify ?
Yes, I think whatever exact shape this takes, it should convey some
assertion of stability to be able to distinguish itself from random
projects. Some way of saying this is good and mature, even if it's not
in the inner circle.

Being in The integrated release has been seen as a sign of stability
forever, while it was only ensuring integration with other projects
and OpenStack processes. We are getting better at requiring maturity
there, but if we set up layers, we'll have to get even better at that.

The layers are (or originally were) a purely technical organization, 
intentionally avoiding association with defcore and other groupings, and on 
reflection, maturity too.  The problem that repeatedly bubbles up is that 
people (mostly outside the community) want a simple tag for maturity or 
blessedness and have been using the integrated/incubated status for that.  
Maturity has nothing to do with technical relationships between projects 
(required/optional layers).

The good and mature blessing should be an independent attribute that is set 
on projects as a result of nomination by TC members or PTLs or existing core 
members or whatever trusted group we choose.  I'd say for starters that 
anything from Stackforge that we use in integrated/incubated projects is on the 
short list for that status, as it already has that implicitly by our use.

dt

--

Dean Troyer
dtro...@gmail.commailto:dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Jay Pipes

On 09/05/2014 03:01 PM, Russell Bryant wrote:

On 09/05/2014 10:06 AM, Jay Pipes wrote:

On 09/05/2014 06:29 AM, John Garbutt wrote:

Scheduler: I think we need to split out the scheduler with a similar
level of urgency. We keep blocking features on the split, because we
know we don't have the review bandwidth to deal with them. Right now I
am talking about a compute related scheduler in the compute program,
that might evolve to worry about other services at a later date.


-1

Without first cleaning up the interfaces around resource tracking, claim
creation and processing, and the communication interfaces between the
nova-conductor, nova-scheduler, and nova-compute.

I see no urgency at all in splitting out the scheduler. The cleanup of
the interfaces around the resource tracker and scheduler has great
priority, though, IMO.


I'd just reframe things ... I'd like the work you're referring to here
be treated as an obvious key pre-requisite to a split, and this cleanup
is what should be treated with urgency by those with a vested interest
in getting more autonomy around scheduler development.


Sure, that's a perfectly gentle way of putting it :)

Thanks!
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0026] Unrestricted write permission to config files can allow code execution

2014-09-05 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Unrestricted write permission to config files can allow code execution
- ---

### Summary ###
In numerous places throughout OpenStack projects, variables are read
directly from configuration files and used to construct statements
which are executed with the privileges of the OpenStack service.  Since
configuration files are trusted, the input is not checked or sanitized.
If a malicious user is able to write to these files, they may be able
to execute arbitrary code as the OpenStack service.

### Affected Services / Software ###
Nova / All versions, Trove / Juno, possibly others

### Discussion ###
Some OpenStack services rely on operating system commands to perform
certain actions.  In some cases these commands are created by appending
input from configuration files to a specified command, and passing the
complete command directly to the operating system shell to execute.
For example:

- --- begin example example.py snippet ---
  command='ls -al ' + config.DIRECTORY
  subprocess.Popen(command, shell=True)
- --- end example example.py snippet ---

In this case, if config.DIRECTORY is set to something benign like
'/opt' the code behaves as expected.  If, on the other hand, an
attacker is able to set config.DIRECTORY to something malicious such as
'/opt ; rm -rf /etc', the shell will execute both 'ls -al /opt' and 'rm
- -rf /etc'.  When called with shell=True, the shell will blindly execute
anything passed to it.  Code with the potential for shell injection
vulnerabilities has been identified in the above mentioned services and
versions, but vulnerabilities are possible in other services as well.

Please see the links at the bottom for a couple of examples in Nova and
Trove.

### Recommended Actions ###
Ensure permissions for configuration files across all OpenStack
services are set so that only the owner user can read/write to them.
In cases where other processes or users may have write access to
configuration files, ensure that all settings are sanitized and
validated.

Additionally the principle of least privilege should always be observed
- - files should be protected with the most restrictive permissions
possible.  Other serious security issues, such as the exposure of
plaintext credentials, can result from permissions which allow
malicious users to view sensitive data (read access).

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0026
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1343657
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Shell Injection:

https://docs.python.org/2/library/subprocess.html#frequently-used-arguments
Additional LaunchPad Bugs:
https://bugs.launchpad.net/trove/+bug/1349939
https://bugs.launchpad.net/nova/+bug/1192971
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUCho0AAoJEJa+6E7Ri+EV/+MH/0Eoy8lIT5geQ541HJ/RTsn1
MVqXRvJK1wH/+OJaNKqvPjbn5ig/2t4IFdbnCpRzRJ2OxMpsX8zoSzBdYzYi7gAi
E1NczYbONk4JNn3fc4tzobWrq9hDWLgO5U57IhiAMjm6Q9vdsWK0SUqy5ZTZT/bG
+xgf+muriBBC0pdUKMpjaQdXeVJlTctix+RhZjOuGace4ioS4Dyn/ND4YIr+bRmk
cQqEgoPKHjFq+IoppNY1OgOJNs9FzUjOCtrUFBqyNrCmt34eSGs+z39Jh5Uf2ym6
W04+yw7zbdGwVcCVReZ+UYTDH+mPnrHDCdppTHe4M8eqe6e0eieL3dxlaBMP4lc=
=wxy3
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-05 Thread Andreas Jaeger
On 09/05/2014 04:43 PM, Anita Kuno wrote:
 On 09/05/2014 10:33 AM, Andreas Jaeger wrote:
 Hi Mykola,
 On 09/05/2014 04:09 PM, Mykola Grygoriev wrote:
 Hi,

 My name is Mykola Grygoriev and I'm engineer who currently working on
 deploying 3d party CI for Сoraid Сinder driver.

 Great, thanks!

 Following instructions on

 http://ci.openstack.org/third_party.html#requesting-a-service-account

 asking for adding gerrit CI account (coraid-ci) to the Voting
 Third-Party CI Gerrit group
 https://review.openstack.org/#/admin/groups/91,members.

 There's a dedicated mailing list for these requests, please reread
 http://ci.openstack.org/third_party.html#requesting-a-service-account

 and resend your email to the third-party-requests mailing lists,

 Andreas

 Actually Andreas, he is following these instructions:
 http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system
 
 He is asking for the community to give him feedback about whether his
 system is ready to have voting permissions. It just seems odd because he
 is the first one actually following the instructions.

Oops ;(

Sorry, Mykola,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Jay Pipes

On 09/05/2014 11:20 AM, John Garbutt wrote:

On 5 September 2014 13:59, Nikola Đipanov ndipa...@redhat.com wrote:

Since this did not get an 'Approved' as of yet, I want to make sure that
this is not because the number of sponsors. 2 core members have already
sponsored it, and as per [1] cores can sponsor their own FFEs so that's 3.


While I am no fan of that idea, this was already in the gate, so 2
cores should be more than enough.


I am currently reviewing the final patch series in this and am willing 
to be the third sponsor. I've reviewed most of the NUMA-related patches 
from Dan and Nikola in the past few months so I'm pretty familiar with 
the work, as well as the difficulties Nikola has run into in the 
scheduler code regarding adding this functionality.


-jay


Mikal has said I could approve FFEs in his absence, but given the
conflict in the thread, I want to leave this approval to him :)

I know its 10 patches, but I think we can try resolve the discussion
on the thread, and look to approve this on Monday morning, and still
make it before the deadline. Do shout up if that seems impossible
and/or stupid.

Thanks,
John


[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044669.html

On 09/04/2014 01:58 PM, Nikola Đipanov wrote:

Hi team,

I am requesting the exception for the feature from the subject (find
specs at [1] and outstanding changes at [2]).

Some reasons why we may want to grant it:

First of all all patches have been approved in time and just lost the
gate race.

Rejecting it makes little sense really, as it has been commented on by a
good chunk of the core team, most of the invasive stuff (db migrations
for example) has already merged, and the few parts that may seem
contentious have either been discussed and agreed upon [3], or can
easily be addressed in subsequent bug fixes.

It would be very beneficial to merge it so that we actually get real
testing on the feature ASAP (scheduling features are not tested in the
gate so we need to rely on downstream/3rd party/user testing for those).

Thanks,

Nikola

[1]
http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-driver-numa-placement.rst
[2]
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-numa-placement,n,z
[3] https://review.openstack.org/#/c/111782/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-05 Thread Matt Riedemann



On 9/5/2014 5:10 AM, Thierry Carrez wrote:

Hi everyone,

We just hit feature freeze[1], so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze[2], so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Finally, this is also DepFreeze[3], so you should avoid adding new
dependencies (bumping oslo or openstack client libraries is OK until
RC1). If you have a new dependency to add, raise a thread on
openstack-dev about it.

The juno-3 development milestone was tagged, it contains more than 135
features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
(not even counting the Oslo libraries in the mix). You can find the full
list of new features and fixed bugs, as well as tarball downloads, at:

https://launchpad.net/keystone/juno/juno-3
https://launchpad.net/glance/juno/juno-3
https://launchpad.net/nova/juno/juno-3
https://launchpad.net/horizon/juno/juno-3
https://launchpad.net/neutron/juno/juno-3
https://launchpad.net/cinder/juno/juno-3
https://launchpad.net/ceilometer/juno/juno-3
https://launchpad.net/heat/juno/juno-3
https://launchpad.net/trove/juno/juno-3
https://launchpad.net/sahara/juno/juno-3

Many thanks to all the PTLs and release management liaisons who made us
reach this important milestone in the Juno development cycle. Thanks in
particular to John Garbutt, who keeps on doing an amazing job at the
impossible task of keeping the Nova ship straight in troubled waters
while we head toward the Juno release port.

Regards,

[1] https://wiki.openstack.org/wiki/FeatureFreeze
[2] https://wiki.openstack.org/wiki/StringFreeze
[3] https://wiki.openstack.org/wiki/DepFreeze



I should probably know this, but at least I'm asking first. :)

Here is an example of a new translatable user-facing error message [1].

From the StringFreeze wiki, I'm not sure if this is small or large.

Would a compromise to get this in be to drop the _() so it's just a 
string and not a message?


Maybe I should just shut-up and email the openstack-i18n mailing list [2].

[1] https://review.openstack.org/#/c/118535/
[2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-05 Thread Joe Gordon
On Fri, Sep 5, 2014 at 4:05 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 09/04/2014 10:25 PM, Solly Ross wrote:
  Anyway, I think it would be useful to have some sort of page where people
  could say I'm an SME in X, ask me for reviews and then patch
 submitters could go
  and say, oh, I need an someone to review my patch about storage
 backends, let me
  ask sross.
 

 This is a good point - I've been thinking along similar lines that we
 really could have a huge win in terms of the review experience by
 building a tool (maybe a social network looking one :)) that relates
 reviews to people being able to do them, visualizes reviewer karma and
 other things that can help make the code submissions and reviews more
 human friendly.

 Dan seems to dismiss the idea of improved tooling as something that can
 get us only thus far, but I am not convinced. However - this will
 require even more manpower and we are already ridiculously short on that
 so...


I have previously toyed with idea of making such a tool, and if someone
else wants to work on it I would be happy to help.



 N.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature Freeze exception for juno-slaveification

2014-09-05 Thread Joe Gordon
On Fri, Sep 5, 2014 at 9:11 AM, Mike Wilson geekinu...@gmail.com wrote:

 Hi all,

 I am requesting an exception for the juno-slaveification blueprint. There
 is a single outstanding patch [1] which has already been approved before,
 but needed to be re-spun due to gate failures which then necessitated a
 rebase. For those not familiar with the work, the spec[2] can shed some
 more light on the scope of work for Juno.

 All the other patches from this blueprint have merged, the only remaining
 patch really just needs a +W as it has been extensively reviewed and
 already approved previously. This may be an easy candidate since Andrew
 Laski, Jay Pipes and Dan Smith have reviewed and +2'd this already.


I am happy to sponsor this, as this is the last patch needed to finish a BP
and the patch was approved once bit needed a rebase over some objects
changes. The change from the approved version is pretty minor.



 Thanks,

 Mike Wilson

 [1] https://review.openstack.org/#/c/103064/
 [2]
 http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/juno-slaveification.rst


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][FFE] make ssh host key policy configurable

2014-09-05 Thread Jay Bryant
Not sure if the patch to make the ssh strict policy setting configurable
needed an official ffe.  It merged after the tag so I wanted to cover my
bases.

This is my request.

Thanks!
Jay

https://review.openstack.org/114336
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mirror changes

2014-09-05 Thread Monty Taylor

Hey all!

A few of quick notes about PyPI mirrors and the gate.

Firstly - today we just rolled out per-cloud-region mirrors. Hopefully 
this eliminate issues we have from time to time with connections to PyPI 
timing out. We have a selector script running on node creation, so nodes 
in, say, Rackspace DFW, will be configured to hit the DFW mirror.


Second - we've removed the old partial mirror. It's been deprecated for 
a while now. This was http://pypi.openstack.org/openstack. If you start 
having 404 errors trying to get to that URL, it's on purpose. :)


Finally - we may remove pypi.openstack.org/simple as well, since we now 
have per-region mirrors and as PyPI itself has a CDN, so we shouldn't 
need to be in the business of hosting general purpose mirrors for 
people. If anyone is MASSIVELY opposed to that for some reason, please 
let us know.


Thanks!
Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-05 Thread Jay Bryant
Matt,

I don't think that is the right solution.

If the string changes I think the only problem is it won't be translated if
it is thrown.   That is better than breaking the coding standard imho.

Jay
On Sep 5, 2014 3:30 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



 On 9/5/2014 5:10 AM, Thierry Carrez wrote:

 Hi everyone,

 We just hit feature freeze[1], so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.

 This is also string freeze[2], so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.

 Finally, this is also DepFreeze[3], so you should avoid adding new
 dependencies (bumping oslo or openstack client libraries is OK until
 RC1). If you have a new dependency to add, raise a thread on
 openstack-dev about it.

 The juno-3 development milestone was tagged, it contains more than 135
 features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
 (not even counting the Oslo libraries in the mix). You can find the full
 list of new features and fixed bugs, as well as tarball downloads, at:

 https://launchpad.net/keystone/juno/juno-3
 https://launchpad.net/glance/juno/juno-3
 https://launchpad.net/nova/juno/juno-3
 https://launchpad.net/horizon/juno/juno-3
 https://launchpad.net/neutron/juno/juno-3
 https://launchpad.net/cinder/juno/juno-3
 https://launchpad.net/ceilometer/juno/juno-3
 https://launchpad.net/heat/juno/juno-3
 https://launchpad.net/trove/juno/juno-3
 https://launchpad.net/sahara/juno/juno-3

 Many thanks to all the PTLs and release management liaisons who made us
 reach this important milestone in the Juno development cycle. Thanks in
 particular to John Garbutt, who keeps on doing an amazing job at the
 impossible task of keeping the Nova ship straight in troubled waters
 while we head toward the Juno release port.

 Regards,

 [1] https://wiki.openstack.org/wiki/FeatureFreeze
 [2] https://wiki.openstack.org/wiki/StringFreeze
 [3] https://wiki.openstack.org/wiki/DepFreeze


 I should probably know this, but at least I'm asking first. :)

 Here is an example of a new translatable user-facing error message [1].

 From the StringFreeze wiki, I'm not sure if this is small or large.

 Would a compromise to get this in be to drop the _() so it's just a string
 and not a message?

 Maybe I should just shut-up and email the openstack-i18n mailing list [2].

 [1] https://review.openstack.org/#/c/118535/
 [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature Freeze exception for juno-slaveification

2014-09-05 Thread Dan Smith
 All the other patches from this blueprint have merged, the only
 remaining patch really just needs a +W as it has been extensively
 reviewed and already approved previously. This may be an easy
 candidate since Andrew Laski, Jay Pipes and Dan Smith have reviewed
 and +2'd this already.
 
 I am happy to sponsor this, as this is the last patch needed to finish a
 BP and the patch was approved once bit needed a rebase over some objects
 changes. The change from the approved version is pretty minor.

Yeah, this was approved before the deadline, just failed in the gate,
and is the last patch of a maintenance blueprint. It's very low risk and
lets us mark off another thing.

I'll sponsor this one too.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature Freeze exception for juno-slaveification

2014-09-05 Thread Jay Pipes

I am also happy to sponsor it. I've already reviewed the patches...

On 09/05/2014 04:34 PM, Joe Gordon wrote:




On Fri, Sep 5, 2014 at 9:11 AM, Mike Wilson geekinu...@gmail.com
mailto:geekinu...@gmail.com wrote:

Hi all,

I am requesting an exception for the juno-slaveification blueprint.
There is a single outstanding patch [1] which has already been
approved before, but needed to be re-spun due to gate failures which
then necessitated a rebase. For those not familiar with the work,
the spec[2] can shed some more light on the scope of work for Juno.

All the other patches from this blueprint have merged, the only
remaining patch really just needs a +W as it has been extensively
reviewed and already approved previously. This may be an easy
candidate since Andrew Laski, Jay Pipes and Dan Smith have reviewed
and +2'd this already.


I am happy to sponsor this, as this is the last patch needed to finish a
BP and the patch was approved once bit needed a rebase over some objects
changes. The change from the approved version is pretty minor.


Thanks,

Mike Wilson

[1] https://review.openstack.org/#/c/103064/
[2]

http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/juno-slaveification.rst


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces Patch

2014-09-05 Thread Matt Dietz
Thirding sponsorship.

I didn¹t review as much as the other two, but I helped merge a couple of
the patches. Agreed with Jay otherwise; we¹re almost there, let¹s finish
it.

-Original Message-
From: Jay Pipes jaypi...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Friday, September 5, 2014 at 9:56 AM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces
Patch

On 09/05/2014 09:57 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 01:49:20PM +, Andrew Melton wrote:
 Hey Devs,

 I'd like to request a feature freeze exception for:
https://review.openstack.org/#/c/94915/

 This feature is the final patch set for the User Namespace BP
 
(https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces
).
 This is an important feature for libvirt-lxc because in greatly
increases
 the security of running libvirt-lxc based containers. The code for this
 feature has been up for a couple months now and has had plenty of time
 to go through review. The code itself is solid by now and functionally
 it hasn't changed much in the last month or so. Lastly, the only thing
 holding up the patch from merging this week was a multitude of bugs in
 the gate.

 Since we've already merged 9 out of the 10 total patches for this
feature,
 it is a no brainer to merge this last one.

 I sponsor it.

Me as well. I've already reviewed most of the patches.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][FFE] Support of Cinder QOS Specs in Horizon

2014-09-05 Thread Hagarty, Richard (ESSN Storage MSDU)
Hello,

I am requesting an exception for the Horizon Cinder QOS Specs blueprint[1]. 
The blueprint addresses adding QOS Spec management in Horizon and completely 
covers the Cinder QOS CLI commands.

There are 4 patches that implement this blueprint, the first of which has 
already landed in Juno.

The remaining 3 patches are functionally complete and have had several positive 
reviews from both Horizon and Cinder team members.

Here is what is covered in the 4 patches:

Patch 1  [2] - Displays a table of all defined QOS Specs.
Patch 2  [3] - Adds ability to edit values of already defined QOS Specs.
Patch 3 [4] - Adds ability to create and delete QOS Specs, as well as create 
and delete values associated with a QOS Spec.
Patch 4 [5] - Adds ability to associate a QOS Spec with a Volume Type.

My feeling is that all of the patches need to be implemented in order for this 
to be truly useful. Without all of the patches, the user will still be forced 
to use the Cinder CLI.

Are there any Horizon core members who would be willing to sponsor this 
exception?


Thanks for your consideration,

Richard Hagarty, Developer at Hewlett-Packard

IRC handle: rhagarty


[1]: https://blueprints.launchpad.net/horizon/+spec/cinder-qos-specs
[2]: https://review.openstack.org/#/c/111408
[3]: https://review.openstack.org/#/c/112369/
[4]: https://review.openstack.org/#/c/113038/
[5]: https://review.openstack.org/#/c/113571/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces Patch

2014-09-05 Thread Michael Still
Approved.

Michael

On Fri, Sep 5, 2014 at 3:46 PM, Matt Dietz matt.di...@rackspace.com wrote:
 Thirding sponsorship.

 I didn¹t review as much as the other two, but I helped merge a couple of
 the patches. Agreed with Jay otherwise; we¹re almost there, let¹s finish
 it.

 -Original Message-
 From: Jay Pipes jaypi...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Friday, September 5, 2014 at 9:56 AM
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] [FFE] Final Libvirt User Namespaces
 Patch

On 09/05/2014 09:57 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 01:49:20PM +, Andrew Melton wrote:
 Hey Devs,

 I'd like to request a feature freeze exception for:
https://review.openstack.org/#/c/94915/

 This feature is the final patch set for the User Namespace BP

(https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces
).
 This is an important feature for libvirt-lxc because in greatly
increases
 the security of running libvirt-lxc based containers. The code for this
 feature has been up for a couple months now and has had plenty of time
 to go through review. The code itself is solid by now and functionally
 it hasn't changed much in the last month or so. Lastly, the only thing
 holding up the patch from merging this week was a multitude of bugs in
 the gate.

 Since we've already merged 9 out of the 10 total patches for this
feature,
 it is a no brainer to merge this last one.

 I sponsor it.

Me as well. I've already reviewed most of the patches.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno

2014-09-05 Thread Solly Ross
Well, I'm definitely down to do some work on something like that
(especially since the original quote was from me ;-)).  Perhaps we
should split this off into a separate thread and have some design/feature
discussions once the mad rush to get Juno out the door dies down?

Best Regards,
Solly

- Original Message -
 From: Joe Gordon joe.gord...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, September 5, 2014 4:30:57 PM
 Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for Juno
 
 
 
 
 On Fri, Sep 5, 2014 at 4:05 AM, Nikola Đipanov  ndipa...@redhat.com  wrote:
 
 
 On 09/04/2014 10:25 PM, Solly Ross wrote:
  Anyway, I think it would be useful to have some sort of page where people
  could say I'm an SME in X, ask me for reviews and then patch submitters
  could go
  and say, oh, I need an someone to review my patch about storage backends,
  let me
  ask sross.
  
 
 This is a good point - I've been thinking along similar lines that we
 really could have a huge win in terms of the review experience by
 building a tool (maybe a social network looking one :)) that relates
 reviews to people being able to do them, visualizes reviewer karma and
 other things that can help make the code submissions and reviews more
 human friendly.
 
 Dan seems to dismiss the idea of improved tooling as something that can
 get us only thus far, but I am not convinced. However - this will
 require even more manpower and we are already ridiculously short on that
 so...
 
 I have previously toyed with idea of making such a tool, and if someone else
 wants to work on it I would be happy to help.
 
 
 
 N.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature Freeze exception for juno-slaveification

2014-09-05 Thread Michael Still
Approved.

Michael

On Fri, Sep 5, 2014 at 3:43 PM, Jay Pipes jaypi...@gmail.com wrote:
 I am also happy to sponsor it. I've already reviewed the patches...

 On 09/05/2014 04:34 PM, Joe Gordon wrote:




 On Fri, Sep 5, 2014 at 9:11 AM, Mike Wilson geekinu...@gmail.com
 mailto:geekinu...@gmail.com wrote:

 Hi all,

 I am requesting an exception for the juno-slaveification blueprint.
 There is a single outstanding patch [1] which has already been
 approved before, but needed to be re-spun due to gate failures which
 then necessitated a rebase. For those not familiar with the work,
 the spec[2] can shed some more light on the scope of work for Juno.

 All the other patches from this blueprint have merged, the only
 remaining patch really just needs a +W as it has been extensively
 reviewed and already approved previously. This may be an easy
 candidate since Andrew Laski, Jay Pipes and Dan Smith have reviewed
 and +2'd this already.


 I am happy to sponsor this, as this is the last patch needed to finish a
 BP and the patch was approved once bit needed a rebase over some objects
 changes. The change from the approved version is pretty minor.


 Thanks,

 Mike Wilson

 [1] https://review.openstack.org/#/c/103064/
 [2]

 http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/juno-slaveification.rst


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] List of granted FFEs

2014-09-05 Thread Michael Still
Hi,

I've built this handy dandy list of granted FFEs, because searching
email to find out what is approved is horrible. It would be good if
people with approved FFEs could check their thing is listed here:

https://etherpad.openstack.org/p/juno-nova-approved-ffes

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][FFE] Feature freeze exception for virt-driver-numa-placement

2014-09-05 Thread Michael Still
For better or for worse we have already merged about half of the
patches for this series, so I think stopping now because of concerns
about CI is pretty arbitrary. I do think Sean's point about scheduler
tests outside of tempest is valid though and I'd like to see it
reflected in the review comments on the relevant patch(es).

On the CI front it is true that we have features not covered by CI in
the gate now (live migration for example), but it is also true that
they are some of our least reliable features. I see this as an
important aspect of the need to pay down tech debt, and I'd like to
see us have a more serious go at doing that in Kilo than we managed in
Juno.

This has three sponsors, so I am therefore approving it.

Michael

On Fri, Sep 5, 2014 at 3:23 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 09/05/2014 11:20 AM, John Garbutt wrote:

 On 5 September 2014 13:59, Nikola Đipanov ndipa...@redhat.com wrote:

 Since this did not get an 'Approved' as of yet, I want to make sure that
 this is not because the number of sponsors. 2 core members have already
 sponsored it, and as per [1] cores can sponsor their own FFEs so that's
 3.


 While I am no fan of that idea, this was already in the gate, so 2
 cores should be more than enough.


 I am currently reviewing the final patch series in this and am willing to be
 the third sponsor. I've reviewed most of the NUMA-related patches from Dan
 and Nikola in the past few months so I'm pretty familiar with the work, as
 well as the difficulties Nikola has run into in the scheduler code regarding
 adding this functionality.

 -jay


 Mikal has said I could approve FFEs in his absence, but given the
 conflict in the thread, I want to leave this approval to him :)

 I know its 10 patches, but I think we can try resolve the discussion
 on the thread, and look to approve this on Monday morning, and still
 make it before the deadline. Do shout up if that seems impossible
 and/or stupid.

 Thanks,
 John

 [1]

 http://lists.openstack.org/pipermail/openstack-dev/2014-September/044669.html

 On 09/04/2014 01:58 PM, Nikola Đipanov wrote:

 Hi team,

 I am requesting the exception for the feature from the subject (find
 specs at [1] and outstanding changes at [2]).

 Some reasons why we may want to grant it:

 First of all all patches have been approved in time and just lost the
 gate race.

 Rejecting it makes little sense really, as it has been commented on by a
 good chunk of the core team, most of the invasive stuff (db migrations
 for example) has already merged, and the few parts that may seem
 contentious have either been discussed and agreed upon [3], or can
 easily be addressed in subsequent bug fixes.

 It would be very beneficial to merge it so that we actually get real
 testing on the feature ASAP (scheduling features are not tested in the
 gate so we need to rely on downstream/3rd party/user testing for those).

 Thanks,

 Nikola

 [1]

 http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-driver-numa-placement.rst
 [2]

 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-numa-placement,n,z
 [3] https://review.openstack.org/#/c/111782/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Eichberger, German
Hi Stephen,

I think this is a good discussion to have and will make it more clear why we 
chose a specific design. I also believe by having this discussion we will make 
the design stronger.  I am still a little bit confused what the 
driver/controller/amphora agent roles are. In my driver-less design we don’t 
have to worry about the driver which most likely in haproxy’s case will be 
split to some degree between controller and amphora device.

So let’s try to sum up what we want a controller to do:

-  Provision new amphora devices

-  Monitor/Manage health

-  Gather stats

-  Manage/Perform configuration changes

The driver as described would be:

-  Render configuration changes in a specific format, e.g. haproxy

Amphora Device:

-  Communicate with the driver/controller to make things happen

So as Doug pointed out I can make a very thin driver which basically passes 
everything through to the Amphora Device or on the other hand of the spectrum I 
can make a very thick driver which manages all aspects from the amphora life 
cycle to whatever (aka kitchen sink). I know we are going for uttermost 
flexibility but I believe:

-  With building an haproxy centric controller we don’t really know 
which things should be controller/which thing should be driver. So my shortcut 
is not to build a driver at all ☺

-  The more flexibility increases complexity and makes it confusing for 
people to develop components. Should this concern go into the controller, the 
driver, or the amphora VM? Two of them? Three of them? Limiting choices makes 
it simpler to achieve that.

HPs worry is that by creating the potential to run multiple (version of 
drivers) drivers, on multiple versions of controllers, on multiple versions of 
amphora devices creates a headache for testing. For example does the version 
4.1 haproxy driver work with the cersion 4.2 controller on an 4.0 amphora 
device? Which compatibility matrix do we need to build/test? Limiting one 
driver to one controller can help with making that manageable.

Thanks,
German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, September 05, 2014 10:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] Question about where to render haproxy 
configurations

Hi German,

Thanks for your reply! My responses are in-line below, and of course you should 
feel free to counter my counter-points. :)

For anyone else paying attention and interested in expressing a voice here, 
we'll probably be voting on this subject at next week's Octavia meeting.

On Thu, Sep 4, 2014 at 9:13 PM, Eichberger, German 
german.eichber...@hp.commailto:german.eichber...@hp.com wrote:
Hi,

Stephen visited us today (the joy of spending some days in Seattle☺) and we 
discussed that  further (and sorry for using VM – not sure what won):

Looks like Amphora won, so I'll start using that terminology below.


1.   We will only support one driver per controller, e.g. if you upgrade a 
driver you deploy a new controller with the new driver and either make him take 
over existing VMs (minor change) or spin  up new ones (major change) but keep 
the “old” controller in place until it doesn’t serve any VMs any longer
Why? I agree with the idea of one back-end type per driver, but why shouldn't 
we support more than one driver per controller?

I agree that you probably only want to support one version of each driver per 
controller, but it seems to me it shouldn't be that difficult to write a driver 
that knows how to speak different versions of back-end amphorae. Off the top of 
my head I can think of two ways of doing this:

1. For each new feature or bugfix added, keep track of the minimal version of 
the amphora required to use that feature/bugfix. Then, when building your 
configuration, as various features are activated in the configuration, keep a 
running track of the minimal amphora version required to meet that 
configuration. If the configuration version is higher than the version of the 
amphora you're going to update, you can pre-emptively return an error detailing 
an unsupported configuration due to the back-end amphora being too old. (What 
you do with this error-- fail, recycle the amphora, whatever-- is up to the 
operator's policy at this point, though I would probably recommend just 
recycling the amphora.) If a given user's configuration never makes use of 
advanced features later on, there's no rush to upgrade their amphoras, and new 
controllers can push configs that work with the old amphoras indefinitely.

2. If the above sounds too complicated, you can forego that and simply build 
the config, try to push it to the amphora, and see if you get an error 
returned.  If you do, depending on the nature of the error you may decide to 
recycle the amphora or take other actions. As there should never be a case 
where you deploy a controller that generates 

[openstack-dev] [Glance] Where are the docs updated

2014-09-05 Thread Brian Rosmaita
The Glance where are the docs? wiki page has been updated:
  https://wiki.openstack.org/wiki/Glance-where-are-the-docs

By the way, the Docs Team is having a bug-squash day on 9th September 2014 if 
you feel like writing prose instead of python:
  https://wiki.openstack.org/wiki/Documentation/BugDay

cheers,
brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] DefCore Community Meetings 9/10 9/11 AGENDA (Lighthouse.7)

2014-09-05 Thread Stefano Maffulli
The DefCore project is moving forward and needs more and more eyes on
it. The next meetings are on Sept 9 and 10, with the same agenda to
facilitate global access.

I'm sharing the details below. All members of OpenStack ecosystem should
follow this process closely as it is going to define what an OpenStack
cloud will have to look like before it can be called OpenStack.

Get familiar with Rob Hirschfeld's series on DefCore, starting from
http://robhirschfeld.com/2014/09/02/defcore-process-flow/

DEFCORE COMMUNITY MEETINGS 9/10  9/11 AGENDA (Lighthouse.7)


# Agenda

1. Review DefCore status and history
2. Review  Discuss Designated Sections proposal


# Community Meetings about Designated Sections


## Meeting 1 9/10 @ 6 pm PT
* Join the meeting: https://join.me/564-677-790
* Other international numbers available:
   https://join.me/intphone/564677790/0


## Meeting 2 9/11 @ 8 am PT

* Join the meeting: https://join.me/638-268-423
* Other international numbers available:
  https://join.me/intphone/638268423/0

* By phone:
   - United States - Hartford, CT   +1.860.970.0010
   - United States - Los Angeles, CA   +1.213.226.1066
   - United States - Thousand Oaks, CA   +1.805.309.5900
   - Access Code   [see meeting]

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][FFE] make ssh host key policy configurable

2014-09-05 Thread John Griffith
On Fri, Sep 5, 2014 at 2:37 PM, Jay Bryant jsbry...@electronicjungle.net
wrote:

 Not sure if the patch to make the ssh strict policy setting configurable
 needed an official ffe.  It merged after the tag so I wanted to cover my
 bases.

 This is my request.

 Thanks!
 Jay

 https://review.openstack.org/114336

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ​It was already merged so it's done now, no need to worry about it.

Thanks,
John​
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Michael Still
So, this one is looking for one more core. Any takers?

Michael

On Sat, Sep 6, 2014 at 2:20 AM, Day, Phil philip@hp.com wrote:
 The corresponding Tempest change is also ready to roll (thanks to Ken'inci):  
 https://review.openstack.org/#/c/112474/1   so its kind of just a question of 
 getting the sequence right.

 Phil


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 05 September 2014 17:05
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] FFE server-group-quotas

 On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
  2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
  Hi,
 
  I'd like to ask for a FFE for the 3 patchsets that implement quotas for
 server groups.
 
  Server groups (which landed in Icehouse) provides a really useful anti-
 affinity filter for scheduling that a lot of customers woudl like to use, but
 without some form of quota control to limit the amount of anti-affinity its
 impossible to enable it as a feature in a public cloud.
 
  The code itself is pretty simple - the number of files touched is a side-
 effect of having three V2 APIs that report quota information and the need to
 protect the change in V2 via yet another extension.
 
  https://review.openstack.org/#/c/104957/
  https://review.openstack.org/#/c/116073/
  https://review.openstack.org/#/c/116079/
 
  I am happy to sponsor this work.
 
  Thanks
  Ken'ichi ohmichi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 These look like they are also all blocked by Tempest because it's changing
 return chunks. How does one propose to resolve that, as I don't think there
 is an agreed path up there for to get this into a passing state from my 
 reading
 of the reviews.

   -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Parallel scenarios

2014-09-05 Thread Mikhail Dubov
Hi Ajay,

Rally as of now does not support launching different scenarios in parallel;
implementing this functionality is, however, one of the major points in our
roadmap (see this blueprint
https://blueprints.launchpad.net/rally/+spec/benchmark-runners-at-large-scale),
since being able to produce e.g. parallel load from different servers is
really important for benchmarking at scale.

Best regards,
Mikhail Dubov

Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov


On Fri, Sep 5, 2014 at 10:09 PM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:

  Hi
 In Rally is there a way to run parallel scenarios?
 I know the scenario can support concurrency but I am talking about running
 multiple such scenarios themselves in parallel
 Ajay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] FFE tracking

2014-09-05 Thread John Griffith
Hey Everyone,

I was trying to just use Launchpad to track these but there's been requests
for an etherpad similar to what Mikal put together.  So here it is [1]

Thanks,
John

[1]: https://etherpad.openstack.org/p/juno-cinder-approved-ffes
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Sean Dague
I looked at the Tempest change, it should be landable. I'll sign up for
review on these, as the rest of the patches are pretty straight forward.

-Sean

On 09/05/2014 06:32 PM, Michael Still wrote:
 So, this one is looking for one more core. Any takers?
 
 Michael
 
 On Sat, Sep 6, 2014 at 2:20 AM, Day, Phil philip@hp.com wrote:
 The corresponding Tempest change is also ready to roll (thanks to Ken'inci): 
  https://review.openstack.org/#/c/112474/1   so its kind of just a question 
 of getting the sequence right.

 Phil


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 05 September 2014 17:05
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] FFE server-group-quotas

 On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
 2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
 Hi,

 I'd like to ask for a FFE for the 3 patchsets that implement quotas for
 server groups.

 Server groups (which landed in Icehouse) provides a really useful anti-
 affinity filter for scheduling that a lot of customers woudl like to use, 
 but
 without some form of quota control to limit the amount of anti-affinity its
 impossible to enable it as a feature in a public cloud.

 The code itself is pretty simple - the number of files touched is a side-
 effect of having three V2 APIs that report quota information and the need to
 protect the change in V2 via yet another extension.

 https://review.openstack.org/#/c/104957/
 https://review.openstack.org/#/c/116073/
 https://review.openstack.org/#/c/116079/

 I am happy to sponsor this work.

 Thanks
 Ken'ichi ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 These look like they are also all blocked by Tempest because it's changing
 return chunks. How does one propose to resolve that, as I don't think there
 is an agreed path up there for to get this into a passing state from my 
 reading
 of the reviews.

   -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][FFE] Support of Cinder QOS Specs in Horizon

2014-09-05 Thread Cheng, Lin Hua (Cloud Services)
Hi Richard,

I¹ll review your changes. You still have to get one more core to agree to
review the patches though. Good luck!

-Lin


From: Hagarty, Richard (ESSN Storage MSDU) richard.haga...@hp.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Friday, September 5, 2014 at 3:09 PM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gershon, Dave dave.gers...@hp.com, Hagarty, Richard (ESSN Storage
MSDU) richard.haga...@hp.com
Subject: [openstack-dev] [Horizon][FFE] Support of Cinder QOS Specs in
Horizon

Hello,
 
I am requesting an exception for the Horizon ³Cinder QOS Specs²
blueprint[1]. The blueprint addresses adding QOS Spec management in Horizon
and completely covers the Cinder QOS CLI commands.
 
There are 4 patches that implement this blueprint, the first of which has
already landed in Juno.
 
The remaining 3 patches are functionally complete and have had several
positive reviews from both Horizon and Cinder team members.
 
Here is what is covered in the 4 patches:
 
Patch 1  [2] ­ Displays a table of all defined QOS Specs.
Patch 2  [3] ­ Adds ability to edit values of already defined QOS Specs.
Patch 3 [4] ­ Adds ability to create and delete QOS Specs, as well as create
and delete values associated with a QOS Spec.
Patch 4 [5] ­ Adds ability to associate a QOS Spec with a Volume Type.
 
My feeling is that all of the patches need to be implemented in order for
this to be truly useful. Without all of the patches, the user will still be
forced to use the Cinder CLI.
 
Are there any Horizon core members who would be willing to sponsor this
exception? 
 
 
Thanks for your consideration,
Richard Hagarty, Developer at Hewlett-Packard
IRC handle: rhagarty
 
 
[1]: https://blueprints.launchpad.net/horizon/+spec/cinder-qos-specs
[2]: https://review.openstack.org/#/c/111408
[3]: https://review.openstack.org/#/c/112369/
[4]: https://review.openstack.org/#/c/113038/
[5]: https://review.openstack.org/#/c/113571/
 
 




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-05 Thread Joe Cropper
+1 to what Jay said.

I’m not sure whether the string freeze applies to bugs, but the defect that 
Matt mentioned (for which I authored the fix) adds a string, albeit to fix a 
bug.  Hoping it’s more desirable to have an untranslated correct message than a 
translated incorrect message.  :-)

- Joe
On Sep 5, 2014, at 3:41 PM, Jay Bryant jsbry...@electronicjungle.net wrote:

 Matt,
 
 I don't think that is the right solution.
 
 If the string changes I think the only problem is it won't be translated if 
 it is thrown.   That is better than breaking the coding standard imho.
 
 Jay
 
 On Sep 5, 2014 3:30 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 On 9/5/2014 5:10 AM, Thierry Carrez wrote:
 Hi everyone,
 
 We just hit feature freeze[1], so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.
 
 This is also string freeze[2], so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.
 
 Finally, this is also DepFreeze[3], so you should avoid adding new
 dependencies (bumping oslo or openstack client libraries is OK until
 RC1). If you have a new dependency to add, raise a thread on
 openstack-dev about it.
 
 The juno-3 development milestone was tagged, it contains more than 135
 features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
 (not even counting the Oslo libraries in the mix). You can find the full
 list of new features and fixed bugs, as well as tarball downloads, at:
 
 https://launchpad.net/keystone/juno/juno-3
 https://launchpad.net/glance/juno/juno-3
 https://launchpad.net/nova/juno/juno-3
 https://launchpad.net/horizon/juno/juno-3
 https://launchpad.net/neutron/juno/juno-3
 https://launchpad.net/cinder/juno/juno-3
 https://launchpad.net/ceilometer/juno/juno-3
 https://launchpad.net/heat/juno/juno-3
 https://launchpad.net/trove/juno/juno-3
 https://launchpad.net/sahara/juno/juno-3
 
 Many thanks to all the PTLs and release management liaisons who made us
 reach this important milestone in the Juno development cycle. Thanks in
 particular to John Garbutt, who keeps on doing an amazing job at the
 impossible task of keeping the Nova ship straight in troubled waters
 while we head toward the Juno release port.
 
 Regards,
 
 [1] https://wiki.openstack.org/wiki/FeatureFreeze
 [2] https://wiki.openstack.org/wiki/StringFreeze
 [3] https://wiki.openstack.org/wiki/DepFreeze
 
 
 I should probably know this, but at least I'm asking first. :)
 
 Here is an example of a new translatable user-facing error message [1].
 
 From the StringFreeze wiki, I'm not sure if this is small or large.
 
 Would a compromise to get this in be to drop the _() so it's just a string 
 and not a message?
 
 Maybe I should just shut-up and email the openstack-i18n mailing list [2].
 
 [1] https://review.openstack.org/#/c/118535/
 [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Question about where to render haproxy configurations

2014-09-05 Thread Stephen Balukoff
Hi German,

Responses in-line:


On Fri, Sep 5, 2014 at 2:31 PM, Eichberger, German german.eichber...@hp.com
 wrote:

  Hi Stephen,



 I think this is a good discussion to have and will make it more clear why
 we chose a specific design. I also believe by having this discussion we
 will make the design stronger.  I am still a little bit confused what the
 driver/controller/amphora agent roles are. In my driver-less design we
 don’t have to worry about the driver which most likely in haproxy’s case
 will be split to some degree between controller and amphora device.


Yep, I agree that a good technical debate like this can help both to get
many people's points of view and can help determine the technical merit of
one design over another. I appreciate your vigorous participation in this
process. :)

So, the purpose of the controller / driver / amphora and the
responsibilities they have are somewhat laid out in the Octavia v0.5
component design document, but it's also possible that there weren't enough
specifics in that document to answer the concerns brought up in this
thread. So, to that end in my mind, I see things like the following:

The controller:
* Is responsible for concerns of the Octavia system as a whole, including
the intelligence around interfacing with the networking, virtualization,
and other layers necessary to set up the amphorae on the network and
getting them configured.
* Will rarely, if ever, talk directly to the end-systems or -services (like
Neutron, Nova, etc.). Instead it goes through a clean driver interface
for each of these.
* The controller has direct access to the database where state is stored.
* Must load at least one driver, may load several drivers and choose
between them based on configuration logic (ex. flavors, config file, etc.)

The driver:
* Handles all communication to or from the amphorae
* Is loaded by the controller (ie. its interface with the controller is a
base class, associated methods, etc. It's objects and code, not a RESTful
API.)
* Speaks amphora-specific protocols on the back-end. In the case of the
reference haproxy amphora, this will most likely be in the form of a
RESTful API with an agent on the amp, as well as (probably) HMAC-signed UDP
health, status and stats messages from the amp to the driver.

The amphora:
* Does the actual load balancing
* Is managed by the controller through the driver.
* Should be as dumb as possible.
* Comes in different types, based on the software in the amphora image.
(Though all amps of a given type should be managed by the same driver.)
Types might include haproxy, nginx, haproxy + nginx, 3rd party
vendor X, etc.
* Should never have direct access to the Octavia database, and therefore
attempt to be as stateless as possible, as far as configuration is
concerned.

To be honest, our current product does not have a driver layer per se,
since we only interface with one type of back-end. However, we still render
our haproxy configs in the controller. :)




 So let’s try to sum up what we want a controller to do:

 -  Provision new amphora devices

 -  Monitor/Manage health

 -  Gather stats

 -  Manage/Perform configuration changes



 The driver as described would be:

 -  Render configuration changes in a specific format, e.g. haproxy



 Amphora Device:

 -  Communicate with the driver/controller to make things happen



 So as Doug pointed out I can make a very thin driver which basically
 passes everything through to the Amphora Device or on the other hand of the
 spectrum I can make a very thick driver which manages all aspects from the
 amphora life cycle to whatever (aka kitchen sink). I know we are going for
 uttermost flexibility but I believe:


So, I'm not sure it's fair to characterize the driver I'm suggesting as
very thick. If you get right down to it, I'm pretty sure the only major
thing we disagree on here is where the haproxy configuration is rendered:
 Just before it's sent over the wire to the amphora, or just after it's
JSON-equivalent is received over the wire from the controller.


  -  With building an haproxy centric controller we don’t really
 know which things should be controller/which thing should be driver. So my
 shortcut is not to build a driver at all J

So, I've become more convinced that having a driver layer there is going to
be important if we want to support 3rd party vendors creating their own
amphorae at all (which I think we do). It's also going to be important if
we want to be able to support other versions of open-source amphorae (or
experimental versions prior to pushing out to a wider user-base, etc.)

Also, I think: Making ourselves use a driver here also helps keep
interfaces clean. This helps us avoid spaghetti code and makes things more
maintainable in the long run.

  -  The more flexibility increases complexity and makes it
 confusing for people to develop components. Should this concern go into the
 

Re: [openstack-dev] [Horizon][Keystone] Steps toward Kerberos and Federation

2014-09-05 Thread Adam Young

On 09/05/2014 11:28 AM, Marco Fargetta wrote:

I understand the general idea and the motivations but I am not sure
about the implementation. Even with a SPA you still need to provide
credentials and manage tokens for the authentication/authorisation in
a way not too much different from the current implementation.

Additionally, this might have some implications with the security since
you have to open the services to accept connection from clients everywhere.

Is there some schema/spec/pad/whatever describing this new approach for
Horizon and the other services?
I think several people are starting to discuss things along these lines, 
most under the Horizon heading.  I'm just trying to contribute the 
Keystone piece.


You are quite right about the security implications.  One potential 
approach is to use a Proxy for services and hide any behavior or APIs 
you don't want exposed to the outside world.  But in general, the 
services themselves should be hardened enough to be exposed directly to 
the public internet.  That is  the design of OpenStack.





Cheers,
Marco


On Fri, Sep 05, 2014 at 10:14:00AM -0400, Adam Young wrote:

On 09/05/2014 04:49 AM, Marco Fargetta wrote:

Hi,

I am wondering if the solution I was trying to sketch with the spec
https://review.openstack.org/#/c/96867/13; is not easier to implement
and manage then the steps highlated till n.2. Maybe, the spec is not
yet there and should be improved (I will abandon or move to Kilo as
Marek suggest) but the overall schema I think it is better then try to
complicate the communication between Horizon and Keystone, IMHO.

That is a very well written, detailed spec.  I'm impressed.

The S4U2Proxy/Step one stuff will be ready to go as soon as I drop
off the Net for a while and clean up my patches.  But that doesn't
address the Federation issue.

The Javascript approach is, I think, simpler and better than using
OAUTH2 as you specify, as it is the direction that Horizon is going
anyway: A single page App, Javascript driven,  talking direct to all
the Remote Services.

I want to limit the number of services that get tokens.  I want to
limit the scope of those tokens as much as possible.  Keeping
passwords out of Horizon is just the first step to this goal.


As I see it Keystone tokens and the OAUTH* protocols are both ways
of doing distributed single sign on and delegation of privileges.
However,  Keystone specifies data that is relevant to OpenStack, and
OAUTH is necessarily format agnostic.  Using a different mechanism
punts on the hard decisions and rewrites the easy ones.

Yes, I wish we had started with OAUTH way back a couple years ago,
but I can't say it is so compelling that we should do it now.


Step 3 is a different story and it needs more evaluation of the
possible scenarios opened.

Cheers,
Marco

On Thu, Sep 04, 2014 at 05:37:38PM -0400, Adam Young wrote:

While the Keystone team has made pretty good strides toward
Federation for getting a Keystone token, we do not yet have a
complete story for Horizon.  The same is true about Kerberos.  I've
been working on this, and I want to inform the people that are
interested in the approach, as well as get some feedback.

My first priority has been Kerberos.  I have a proof of concept of
this working, but the amount of hacking I had to
Django-OpenStack-Auth (DOA) made me shudder:  its fairly ugly.  A
few discussions today have moved things such that I think I can
clean up the approach.

Phase 1.  DOA should be able to tell whether to use password or
Kerberos in order to get a token from Keystone based on an variable
set by the Apache web server;  mod_auth_kerb will set

 request.META['KRB5CCNAME']

only in the kerberos case.  If it gets this variable, DOA will only
do Kerberos.  If it does not, it will only do password.  There will
be no fallback from Kerberos to password;  this is enforced by
mod_auth_kerb, not something we can easily hack around in Django.

That gets us Kerberos, but not Federation. Most of the code changes
are common with what follows after:

Phase 1.5.  Add an optional field  to the password auth page that
allows a user to log in with a token instead of userid/password.
This can be a hidden field by default if we really want.  DOA now
needs to be able to validate a token.  Since Horizon only cares
about the hashed version of the tokens anyway, we only to Online
lookup, not PKI token validation.  In the successful response,
Keystone will to return the properly hashed (SHA256 for example) of
the PKI tokens for Horizon to cache.

Phase 2.  Use AJAX to get a token from Keystone instead of sending
the credentials to the client.  Then pass that token to Horizon in
order to login. This implies that Keystone has set up CORS support.
This will open the door for Federation.  While it will provide a
different path to Kerberos than the stage 1, I think both are
valuable approaches and will serve different use cases.  This

Phase 3.  Never send your token to Horizon.  In this 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread James Bottomley

On Fri, 2014-09-05 at 08:02 -0400, Sean Dague wrote:
 On 09/05/2014 07:40 AM, Daniel P. Berrange wrote:
  On Fri, Sep 05, 2014 at 07:12:37AM -0400, Sean Dague wrote:
  On 09/05/2014 06:40 AM, Nikola Đipanov wrote:
  A handy example of this I can think of is the currently granted FFE for
  serial consoles - consider how much of the code went into the common
  part vs. the libvirt specific part, I would say the ratio is very close
  to 1 if not even in favour of the common part (current 4 outstanding
  patches are all for core, and out of the 5 merged - only one of them was
  purely libvirt specific, assuming virt/ will live in nova-common).
 
  Joe asked a similar question elsewhere on the thread.
 
  Once again - I am not against doing it - what I am saying is that we
  need to look into this closer as it may not be as big of a win from the
  number of changes needed per feature as we may think.
 
  Just some things to think about with regards to the whole idea, by no
  means exhaustive.
 
  So maybe the better question is: what are the top sources of technical
  debt in Nova that we need to address? And if we did, everyone would be
  more sane, and feel less burnt.
 
  Maybe the drivers are the worst debt, and jettisoning them makes them
  someone else's problem, so that helps some. I'm not entirely convinced
  right now.
 
  I think Cells represents a lot of debt right now. It doesn't fully work
  with the rest of Nova, and produces a ton of extra code paths special
  cased for the cells path.
 
  The Scheduler has a ton of debt as has been pointed out by the efforts
  in and around Gannt. The focus has been on the split, but realistically
  I'm with Jay is that we should focus on the debt, and exposing a REST
  interface in Nova.
 
  What about the Nova objects transition? That continues to be slow
  because it's basically Dan (with a few other helpers from time to time).
  Would it be helpful if we did an all hands on deck transition of the
  rest of Nova for K1 and just get it done? Would be nice to have the bulk
  of Nova core working on one thing like this and actually be in shared
  context with everyone else for a while.
  
  I think the idea that we can tell everyone in Nova what they should
  focus on for a cycle, or more generally, is doomed to failure. This
  isn't a closed source company controlled project where you can dictate
  what everyones priority must be. We must accept that rely on all our
  contributors good will in voluntarily giving their time  resource to
  the projct, to scratch whatever itch they have in the project. We have
  to encourage them to want to work nova and demonstrate that we value
  whatever form of contributor they choose to make. If we have technical
  debt that we think is important to address we need to illustrate /
  show people why they should care about helping. If they none the less
  decide that work isn't for them, we can't just cast them aside and/or
  ignore their contributions, while we get on with other things. This
  is why I think it is important that we split up nova to allow each
  are to self-organize around what they consider to be priorities in
  their area of interest / motivation. Not enabling that is going to
  to continue to kill our community
 
 I'm getting tired of the reprieve that because we are an Open Source
 project declaring priorities is pointless, because it's not. I would say
 it's actually the exception that a developer wakes up in the morning and
 says I completely disregard what anyone else thinks is important in
 this project, this is what I'm going to do today. Because if that's how
 they felt they wouldn't choose to be part of a community, they would
 just go do their own thing. Lone wolfs by definition don't form
 communities.

Actually, I don't think this analysis is accurate.  Some people are
simply interested in small aspects of a project.  It's the scratch your
own itch part of open source.  The thing which makes itch scratchers
not lone wolfs is the desire to go the extra mile to make what they've
done useful to the community.  If they never do this, they likely have a
forked repo with only their changes (and are the epitome of a lone
wolf).  If you scratch your own itch and make the effort to get it
upstream, you're assisting the community (even if that's the only piece
of code you do) and that assistance makes you (at least for a time) part
of the community.

A community doesn't necessarily require continuity from all its
elements.  It requires continuity from some (the core, if you will), but
it also allows for contributions from people who only have one or two
things they need doing.  For OpenStack to convert its users into its
contributors, it is going to have to embrace this, because they likely
only need a couple of things fixing, so they'll pop into the community,
fix what they need fixing and then go back to being users again.

Some projects, the linux kernel in particular, deliberately don't
enforce 

Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread Nathanael Burton
Daniel,

Thanks for the well thought out and thorough proposal to help Nova.

As an OpenStack operator/developer since Cactus time, it has definitely
gotten harder and harder to get fixes in Nova for small bugs that we find
running at scale with production systems. This forces us to maintain more
and more custom patches in-house (or for longer periods of time).  The huge
amount of time necessary to shepherd patches through review discourages
additional devs from contributing patches because of the amount of time
investment required.

I believe whatever we can do to improve the ability to fix technical debt
within Nova and both keep and grow the non-core contributors of Nova would
be greatly beneficial.

Thanks!

Nate
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-05 Thread James Bottomley
On Fri, 2014-09-05 at 14:14 +0200, Thierry Carrez wrote:
 Daniel P. Berrange wrote:
  For a long time I've use the LKML 'subsystem maintainers' model as the
  reference point for ideas. In a more LKML like model, each virt team
  (or other subsystem team) would have their own separate GIT repo with
  a complete Nova codebase, where they did they day to day code submissions,
  reviews and merges. Periodically the primary subsystem maintainer would
  submit a large pull / merge requests to the overall Nova maintainer.
  The $1,000,000 question in such a model is what kind of code review
  happens during the big pull requests to integrate subsystem trees. 
 
 Please note that the Kernel subsystem model is actually a trust tree
 based on 20 years of trust building. OpenStack is only 4 years old, so
 it's difficult to apply the same model as-is.

That's true but not entirely accurate.  The kernel maintainership is a
trust tree, but not every person in that tree has been in the position
for 20 years.  We have one or two who have (Dave Miller, net maintainer,
for instance), but we have some newcomers: Sarah Sharp has only been on
USB3.0 for a year.  People pass in and out of the maintainer tree all
the time.

In many ways, the Open Stack core model is also a trust tree (you elect
people to the core and support their nominations because you trust them
to do the required job).  It's not a 1 for 1 conversion, but it should
be possible to derive the trust you need from the model you already
have, should you wish to make OpenStack function more like the Linux
Kernel.

Essentially Daniel's proposal boils down to making the trust boundaries
align with separated community interests to get more scaling in the
model.  This is very similar to the way the kernel operates: most
maintainers only have expertise in their own areas.  We have a few
people with broad reach, like Andrew and Linus, but by and large most
people settle down in a much smaller area.  However, you don't have to
follow the kernel model to get this to happen, you just have to identify
the natural interest boundaries of the contributors and align around
them (provided they have enough mass to form their own community).

James



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][FFE] Use Libvirt Storage Pools (Partial)

2014-09-05 Thread Solly Ross
Hi,

I would like to request a feature freeze exception for the first part of the
use-libvirt-storage-pools blueprint [1].

Overview


The patches in question [2] would entail adding support for libvirt storage 
pools,
but neither making them default nor adding in automatic transitioning [3] from 
the
legacy backends to the storage pool backends.  This would allow new deployments
and/or adventurous existing deployments to switch to libvirt storage pools,
while adding an extra release before the automated transitioning and switch
to default (as per the blueprint) is made.

Risk


While the risk is not negligible, it is relatively minor, since the major 
changes
are behind a configuration option ([libvirt] use_storage_pools).  
Additionally,
since the major changes are enabled by default [4], there is little danger of 
breakage
in existing deployments

Benefits


The plan according to the original blueprint was to enable automatic 
transitioning of
legacy backends to the new backend, and setting the storage pools to default to 
enabled.
However, this would likely not be a good idea for feature freeze, as things 
like that
can cause complicated and odd bugs (although we all hope our code has no bugs, 
it's more
or less inevitable that bugs pop up ;-)).

However, adding support for using the storage pool backend without enabling it 
by default
or adding in automatic transitioning in Juno would give an extra cycle to 
ensure that any
bugs get reported before the storage pool backend is made default.  Then, the 
rest of the
spec could proceed as planned, just moved up a relase (i.e. deprecate the 
legacy backends
in Kilo, and remove in L, assuming, of course, that the spec is re-approved).

[1] https://review.openstack.org/#/c/86947/7
[2] https://review.openstack.org/#/c/113058/, 
https://review.openstack.org/#/c/113059/, 
https://review.openstack.org/#/c/113060/
[3] The patch for enabling automatic transitioning is up on Gerrit.
However, there are a couple of flaws in its implementation (not bugs, per 
se,
but things I would like to fix none-the-less).  However, this patch is not 
needed
for the previous patches to function properly, as they stand on their own.
[4] The only significant change not behind a configuration option is the change 
that
makes the image cache use a file-based storage pool to manage its images.  
Since the
pool is file-based and automatically created in the same location as the 
current image
cache, there is little risk associated.


Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE server-group-quotas

2014-09-05 Thread Michael Still
Thus, it get approved and added to the etherpad of doom.

Michael

On Sat, Sep 6, 2014 at 8:52 AM, Sean Dague s...@dague.net wrote:
 I looked at the Tempest change, it should be landable. I'll sign up for
 review on these, as the rest of the patches are pretty straight forward.

 -Sean

 On 09/05/2014 06:32 PM, Michael Still wrote:
 So, this one is looking for one more core. Any takers?

 Michael

 On Sat, Sep 6, 2014 at 2:20 AM, Day, Phil philip@hp.com wrote:
 The corresponding Tempest change is also ready to roll (thanks to 
 Ken'inci):  https://review.openstack.org/#/c/112474/1   so its kind of just 
 a question of getting the sequence right.

 Phil


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 05 September 2014 17:05
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] FFE server-group-quotas

 On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
 2014-09-05 21:56 GMT+09:00 Day, Phil philip@hp.com:
 Hi,

 I'd like to ask for a FFE for the 3 patchsets that implement quotas for
 server groups.

 Server groups (which landed in Icehouse) provides a really useful anti-
 affinity filter for scheduling that a lot of customers woudl like to use, 
 but
 without some form of quota control to limit the amount of anti-affinity its
 impossible to enable it as a feature in a public cloud.

 The code itself is pretty simple - the number of files touched is a side-
 effect of having three V2 APIs that report quota information and the need 
 to
 protect the change in V2 via yet another extension.

 https://review.openstack.org/#/c/104957/
 https://review.openstack.org/#/c/116073/
 https://review.openstack.org/#/c/116079/

 I am happy to sponsor this work.

 Thanks
 Ken'ichi ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 These look like they are also all blocked by Tempest because it's changing
 return chunks. How does one propose to resolve that, as I don't think there
 is an agreed path up there for to get this into a passing state from my 
 reading
 of the reviews.

   -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-05 Thread Amit Das
I too agree this can be useful beyond openstack.

It will great if one of you (experts) can explain in more details how the
python virtual environment is heavyweight than docker containers.

I am just a user of devstack without the nitty gritty details of its inner
workings.
However, I can say that sometimes my Ubuntu VM that has devstack  tempest
tests running exhaust the entire CPU (make the machine unusable) after the
tests are run.


Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/


On Fri, Sep 5, 2014 at 9:44 PM, Sean Dague s...@dague.net wrote:

 On 09/05/2014 12:05 PM, Chris Dent wrote:
  On Fri, 5 Sep 2014, Monty Taylor wrote:
 
  The tl;dr is it's like tox, except it uses docker instead of
  virtualenv - which means we can express all of our requirements, not
  just pip ones.
 
  Oh thank god[1].
 
  Seriously.
 
  jogo started a thread about what matters for kilo and I was going
  to respond with (amongst other things) get containers into the testing
  scene. Seems you're way ahead of me. Docker's caching could be a
  _huge_ win here.
 
  [1] https://www.youtube.com/watch?v=om5rbtudzrg
 
  across the project. Luckily, docker itself does an EXCELLENT job at
  handling caching and reuse - so I think we can have a set of
  containers that something in infra (waves hands) publishes to
  dockerhub, like:
 
   infra/py27
   infra/py26
 
  I'm assuming these would get rebuilt regularly (every time global
  requirements and friends are updated) on some sort of automated
  hook?
 
  Thoughts? Anybody wanna hack on it with me? I think it could wind up
  being a pretty useful tool for folks outside of OpenStack too if we
  get it right.
 
  Given availability (currently an unknown) I'd like to help with this.

 I think all this is very cool, I'd say if we're going to put it in
 gerrit do stackforge instead of openstack/ namespace, because it will be
 useful beyond us.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [feature freeze exception] FFE for libvirt-disk-discard-option

2014-09-05 Thread Michael Still
I will be the third here. Approved and added to the etherpad of doom.

Michael

On Fri, Sep 5, 2014 at 10:07 PM, Sean Dague s...@dague.net wrote:
 On 09/05/2014 07:42 AM, Daniel P. Berrange wrote:
 On Fri, Sep 05, 2014 at 06:28:55AM +, Bohai (ricky) wrote:
 Hi,

 I'd like to ask for a feature freeze exception for blueprint 
 libvirt-disk-discard-option.
 https://review.openstack.org/#/c/112977/

 approved spec:
 https://review.openstack.org/#/c/85556/

 blueprint was approved, but its status was changed to Pending Approval 
 because of FF.
 https://blueprints.launchpad.net/nova/+spec/libvirt-disk-discard-option

 The patch has got a +2 from the core and pretty close to merge, but FF came.

 ACK, I'll sponsor this.

 It is such a simple  useful patch it is madness to reject it on a process
 technicality.

 Regards,
 Daniel


 +1, just reviewed and +2ed it. Seems straight forward. Consider me a
 sponsor.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][TripleO] Identifying root disk of server deployed by TripleO

2014-09-05 Thread Jyoti Ranjan
I deployed a server having three drives using TripleO. How can I know which
disk is root disk? I need this information as I have a utility which will
format all drives except root disk. Utility is being used for some solution
we are developing using machines deployed by TripleO.

Regards,
Jyoti Ranjan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


<    1   2