Re: [openstack-dev] [horizon][keystone]

2015-02-06 Thread Adam Young

On 02/04/2015 03:54 PM, Thai Q Tran wrote:

Hi all,

I have been helping with the websso effort and wanted to get some 
feedback.
Basically, users are presented with a login screen where they can 
select: credentials, default protocol, or discovery service.

If the user selects credentials, it works exactly the same way it works today.
If the user selects default protocol or discovery service, they can choose
to be redirected to those pages.


Keep in mind that this is a prototype; early feedback will be good.
Here are the relevant patches:
https://review.openstack.org/#/c/136177/
https://review.openstack.org/#/c/136178/
https://review.openstack.org/#/c/151842/

I have attached the files and present them below:




Replace the dropdown with a specific link for each protocol type:

SAML and OpenID are the only real contenders at the moment, and we are
unlikely to have so many that they clutter up the page.


Thanks for doing this.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Ryan Moe
Dmitriy,

Thank you for the excellent run-down of the CLI commands. I assume this
will make its way into the developer documentation? I would like to know if
you could point me to more information about the inner workings of granular
deployment. Currently it's challenging to debug issues related to granular
deployment.

As an example, there is a bug [0] where tasks appear to be run in the wrong
order based on which combination of roles exists in the environment.
However, it's not clear how to determine what decides which tasks to run
and when (is it astute, fuel-library, etc.), where the data comes from,
and so on.

Again, thanks for your (and everybody else's) work on granular deployment.
This is an awesome feature.

[0] https://bugs.launchpad.net/fuel/+bug/1411660

-Ryan

On Fri, Feb 6, 2015 at 6:37 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hello folks,

 Not long ago we added the necessary commands to the fuel client to work
 with the granular deployment configuration and API.

 As you may know, the configuration is stored in fuel-library and uploaded
 into the database during bootstrap of the fuel master. If you want to
 change or add some tasks right on the master node, just add tasks.yaml and
 the appropriate manifests in the folder for the release that you are
 interested in. Then apply this command:

  fuel rel --sync-deployment-tasks --dir /etc/puppet
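
 For orientation, a tasks.yaml entry is roughly of this shape (an
 illustrative sketch only -- the exact schema is defined in fuel-library,
 and the task id, role list and manifest path here are made up):

  - id: netconfig
    type: puppet
    role: [controller, compute]
    requires: [hiera]
    parameters:
      puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
      puppet_modules: /etc/puppet/modules
      timeout: 3600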

 You may also want to overwrite the deployment tasks for any specific
 release/cluster with the following commands:

  fuel rel --rel <id> --deployment-tasks --download
  fuel rel --rel <id> --deployment-tasks --upload

  fuel env --env <id> --deployment-tasks --download
  fuel env --env <id> --deployment-tasks --upload

 Once this is done, you will be able to run a customized graph of tasks.

 The most basic command:

  fuel node --node 1,2,3 --tasks upload_repos netconfig

 The developer needs to specify the nodes that should be used in the
 deployment and the task ids. The order in which the tasks are provided
 doesn't matter; it is computed from the dependencies specified in the
 database. It is also very important to understand that if a task is mapped
 to the role controller, but the node where you want to apply that task
 doesn't have this role, it won't be executed.

 Skipping tasks:

  fuel node --node 1,2,3 --skip netconfig hiera

 The tasks provided with this parameter will be skipped during graph
 traversal in nailgun. The main question is: should we also skip other tasks
 that have the provided tasks as dependencies? In my opinion we can leave
 this flag as simple as it is and use the following commands for smarter
 traversal.

 Specify start and end nodes in the graph:

  fuel node --node 1,2,3 --end netconfig

 This will deploy everything up to and including the netconfig task. That
 is: all tasks that we consider pre_deployment (key generation, rsyncing
 manifests, time sync, uploading repos), plus such tasks as hiera setup,
 globals computation and maybe some other basic preparatory tasks.

  fuel node --node 1,2,3 --start netconfig

 This starts from netconfig (including netconfig itself) and deploys all the
 remaining tasks, including the tasks we consider post_deployment.
 For example, if one wants to execute only netconfig's successors:

  fuel node --node 1,2,3 --start netconfig --skip netconfig

 And the user will be able to use start and end at the same time:

  fuel node --node 1,2,3 --start netconfig --end upload_cirros

 Nailgun will build the path that includes only the tasks necessary to join
 these two points. Note that the start flag is not merged yet, but I think
 it will be by Monday.
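
 Conceptually, this start/end handling is a subgraph computation over the
 task dependency graph. A rough sketch of the idea in Python with networkx
 (not the actual nailgun code; see deployment_graph.py in fuel-web for the
 real implementation):

  import networkx as nx

  g = nx.DiGraph()
  # an edge A -> B means "A must run before B"
  g.add_edges_from([('hiera', 'globals'), ('globals', 'netconfig'),
                    ('netconfig', 'upload_cirros')])

  def tasks_between(graph, start, end):
      # tasks reachable from start that can also reach end, inclusive
      forward = nx.descendants(graph, start) | {start}
      backward = nx.ancestors(graph, end) | {end}
      return forward & backward

  print(tasks_between(g, 'netconfig', 'upload_cirros'))
  # {'netconfig', 'upload_cirros'}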

 We are also working on deployment graph visualization. It will be static
 (I mean there is no progress tracking of any kind),
 but it will help a lot in understanding what is going to be executed.

 Thank you for reading. I would like to hear more thoughts about this, and
 will answer any questions.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone]

2015-02-06 Thread Adam Young

On 02/05/2015 04:20 AM, Anton Zemlyanov wrote:

Hi,

I guess Credentials means login and password. I have no idea what 
Default Protocol or Discovery Service are.

The proposed UI is rather embarrassing.
No, it is not.  It is a rapid prototyping technique to get things to fail 
fast, and to get feedback from the community.  It would be embarrassing 
if this were made the final design with no review.


Please focus on constructive criticism. We want to encourage 
participation, and not belittle people attempting to make things better.





Anton

On Thu, Feb 5, 2015 at 12:54 AM, Thai Q Tran tqt...@us.ibm.com 
mailto:tqt...@us.ibm.com wrote:


Hi all,

I have been helping with the websso effort and wanted to get some
feedback.
Basically, users are presented with a login screen where they can
select: credentials, default protocol, or discovery service.
If the user selects credentials, it works exactly the same way it
works today.
If the user selects default protocol or discovery service, they can
choose to be redirected to those pages.

Keep in mind that this is a prototype; early feedback will be good.
Here are the relevant patches:
https://review.openstack.org/#/c/136177/
https://review.openstack.org/#/c/136178/
https://review.openstack.org/#/c/151842/

I have attached the files and present them below:





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-06 Thread Mike Perez
On 15:51 Fri 06 Feb , Nilesh P Bhosale wrote:
<snip>
 I understand this is as per design, but am curious to understand the logic
 behind this.
<snip>
 Why not allow deletion of volumes from the CG, at least when there are no
 dependent snapshots?

From the review [1], this is because allowing a volume that's part of
a consistency group to be deleted is error prone for both the user and the
storage backend. It assumes the storage backend will register the volume as
no longer being part of the consistency group. It also assumes the user is
keeping track of what's part of a consistency group.

 With the current implementation, the only way to delete the volume is to
 delete the complete CG, deleting all the volumes in it, which I feel is
 not right.

The plan in Kilo is to allow adding/removing volumes from a consistency group
[2][3]. The user now has to explicitly remove the volume from a consistency
group, which in my opinion is better than doing it implicitly on delete.
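
As a sketch of the intended flow once the Kilo update [2][3] lands (the
exact CLI names follow the proposed spec and may still change):

    # remove the volume from its consistency group, then delete it
    cinder consisgroup-update <cg-id> --remove-volumes <volume-id>
    cinder delete <volume-id>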

I'm open to rediscussing this issue with vendors and seeing about making sure
things in the backend get cleaned up properly, but I think this solution
helps prevent the issue for both users and backends.

[1] - https://review.openstack.org/#/c/149095/
[2] - 
https://blueprints.launchpad.net/cinder/+spec/consistency-groups-kilo-update
[3] - https://review.openstack.org/#/c/144561/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone] SSO

2015-02-06 Thread Tim Bell

From the sound of things, we're not actually talking about SSO. If we were, 
we would not be talking about the design of a login screen.

An SSO application such as Horizon would not have a login page. If the user 
were already logged in through the corporate/organisation SSO page, nothing 
would appear before the standard Horizon page.

We strongly advise our user community that any web page asking for 
credentials which is not the CERN standard SSO page is not authorised. Our 
SSO also supports Google/Twitter/Eduroam etc. logins. Some of these will be 
refused for OpenStack login, so that having a Twitter account alone does not 
get you access to CERN's cloud resources (but this is an authorisation 
rather than an authentication problem).

Is there really a use case for a site where there is SSO from a corporate 
perspective but no federated-login SSO capability? I don't have a 
fundamental problem with the approach, but we should position it with respect 
to the use case, which is that I log in in the morning and all applications 
I use (cloud and all) are able to recognise that.

Tim


From: Adam Young [mailto:ayo...@redhat.com]
Sent: 06 February 2015 19:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon][keystone]

On 02/04/2015 03:54 PM, Thai Q Tran wrote:
Hi all,

I have been helping with the websso effort and wanted to get some feedback.
Basically, users are presented with a login screen where they can select: 
credentials, default protocol, or discovery service.
If the user selects credentials, it works exactly the same way it works today.
If the user selects default protocol or discovery service, they can choose to 
be redirected to those pages.

Keep in mind that this is a prototype; early feedback will be good.
Here are the relevant patches:
https://review.openstack.org/#/c/136177/
https://review.openstack.org/#/c/136178/
https://review.openstack.org/#/c/151842/

I have attached the files and present them below:



Replace the dropdown with a specific link for each protocol type:

SAML and OpenID are the only real contenders at the moment, and we are 
unlikely to have so many that they clutter up the page.

Thanks for doing this.









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Dmitriy Shulyak
 Thank you for the excellent run-down of the CLI commands. I assume this
 will make its way into the developer documentation? I would like to know if
 you could point me to more information about the inner workings of granular
 deployment. Currently it's challenging to debug issues related to granular
 deployment.


All tasks that are in the scope of a role are serialized right into the
deployment configuration that is consumed by astute, so they can be traced
in the logs (nailgun or astute) or in the astute.yaml that is stored on the
node itself. Here is what it looks like [0].
Some other internals are described in the spec:
https://review.openstack.org/#/c/113491/.

For a developer it makes sense to get familiar with the networkx data
structures [1], and then dive into debugging [2].
But that is not an option for day-to-day usage, and the UX will be improved
by the graph visualizer [3].

One more option that could improve understanding is a human-readable
planner. For example, it could output something like this:

 fuel deployment plan --start hiera --end netconfig

   Manifest hiera.pp will be executed on nodes [1,2,3]
   Manifest netconfig will be executed on nodes [1,2]

But I am not sure whether this thing is really necessary; the dependencies
are trivial in comparison to puppet, and I hope it will take very little
time to understand how things work :)

As an example there is a bug [0] where tasks appear to be run in the wrong
 order based on which combination of roles exist in the environment.
 However, it's not clear how to determine what decides which tasks to run
 and when (is it astute, fuel-library, etc.), where the data comes from.
 etc.


As for the bug - it may be a duplicate for
https://launchpad.net/bugs/1417579, which was fixed by
https://review.openstack.org/#/c/152511/

[0] http://paste.openstack.org/show/168298/
[1]
http://networkx.github.io/documentation/latest/tutorial/tutorial.html#directed-graphs
[2]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_graph.py#L29
[3] https://review.openstack.org/#/c/152434/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-06 Thread yang, xing
As Mike said, allowing deletion of a single volume from a CG is error prone.  
A user could be deleting a single volume without knowing that it is part of a 
CG.  The new Modify CG feature for Kilo allows you to remove a volume from a 
CG and then delete it as a separate operation.  When a user removes a volume 
from a CG, at least he/she is making a conscious decision, knowing that the 
volume is currently part of the CG.

Thanks,
Xing


-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Friday, February 06, 2015 1:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

On 15:51 Fri 06 Feb , Nilesh P Bhosale wrote:
<snip>
 I understand this is as per design, but am curious to understand the logic 
 behind this.
<snip>
 Why not allow deletion of volumes from the CG, at least when there are 
 no dependent snapshots?

From the review [1], this is because allowing a volume that's part of a 
consistency group to be deleted is error prone for both the user and the 
storage backend. It assumes the storage backend will register the volume as 
no longer being part of the consistency group. It also assumes the user is 
keeping track of what's part of a consistency group.

 With the current implementation, the only way to delete the volume is to 
 delete the complete CG, deleting all the volumes in it, which I feel 
 is not right.

The plan in Kilo is to allow adding/removing volumes from a consistency group 
[2][3]. The user now has to explicitly remove the volume from a consistency 
group, which in my opinion is better than doing it implicitly on delete.

I'm open to rediscussing this issue with vendors and seeing about making sure 
things in the backend get cleaned up properly, but I think this solution 
helps prevent the issue for both users and backends.

[1] - https://review.openstack.org/#/c/149095/
[2] - 
https://blueprints.launchpad.net/cinder/+spec/consistency-groups-kilo-update
[3] - https://review.openstack.org/#/c/144561/

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [nova] FFE request for passing capabilities in the flavor to ironic

2015-02-06 Thread Hsu, Wan-Yen
Hi,

I would like to ask for a feature freeze exception for passing capabilities
in the flavor to Ironic:

https://blueprints.launchpad.net/nova/+spec/pass-flavor-capabilities-to-ironic-virt-driver

Addressed by: https://review.openstack.org/136104
(Pass on the capabilities in the flavor to the ironic)

Addressed by: https://review.openstack.org/141012
(Pass on the capabilities to instance_info)

Several Ironic Kilo features, including secure boot, trusted boot, and local
boot support with partition images, depend on this feature.  It also has an
impact on the Ironic vendor drivers' hardware property introspection feature.

The code changes to support this spec in the Nova ironic virt driver are very
small: only 31 lines of code (including comments) in
nova/virt/ironic/patcher.py, and 22 lines of code in test_patcher.py.

Please consider approving this FFE.  Thanks!

Regards,
wanyen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] how to use a devstack external plugin in gate testing

2015-02-06 Thread Sean Dague
For those that didn't notice, on the Devstack team we've started to push
back on new in-tree support for all the features. That's intentional.
We've got an external plugin interface now -
http://docs.openstack.org/developer/devstack/plugins.html#externally-hosted-plugins,
and have a few projects like the ec2api and glusterfs that are
successfully using it. Our future direction is to do more of this -
https://review.openstack.org/#/c/150789/

The question people ask a lot is 'but, how do I do a gate job with the
external plugin?'.

Starting with the stackforge/ec2api we have an example up on how to do
that: https://review.openstack.org/#/c/153659/

The important bits are as follows:

1. the git repo that you have your external plugin in *must* be in
gerrit. stackforge is fine, but it has to be hosted in the OpenStack
infrastructure.

2. The job needs to add your PROJECT to the projects list, i.e.:

export PROJECTS="stackforge/ec2-api $PROJECTS"

3. The job needs to add a DEVSTACK_LOCAL_CONFIG line for the plugin
enablement:

export DEVSTACK_LOCAL_CONFIG="enable_plugin ec2-api git://git.openstack.org/stackforge/ec2-api"

Beyond that you can define your devstack job however you like. It can
test with Tempest. It can instead use a post_test_hook for functional
testing. Whatever is appropriate for your project.
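
For local testing outside the gate, the equivalent devstack configuration
is an enable_plugin line in local.conf (per the plugin docs linked above):

    [[local|localrc]]
    enable_plugin ec2-api git://git.openstack.org/stackforge/ec2-api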

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat][Mistral] Use and adoption of YAQL

2015-02-06 Thread Dmitri Zimine
Stan, Alex, Renat: 

Should we migrate to YAQL 1.0 now, and stop using the initial one? What’s the 
delta? 

Still short on the docs :) but I understand they’re coming up. 
https://github.com/stackforge/yaql/tree/master/doc/source

Cheers, Dmitri. 

On Jan 16, 2015, at 6:46 AM, Stan Lagun sla...@mirantis.com wrote:

 Dmitri,
 
 we are working hard towards a stable YAQL 1.0, which is expected to be released 
 during the Kilo cycle. It is going to have proper documentation and high unit 
 test coverage, which can also serve as a documentation source. YAQL has 
 already migrated to StackForge and adopted the OpenStack development process 
 and tools, but the work is still in progress. Any help from the Mistral team 
 and/or other YAQL adopters is appreciated.
 
 
 
 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis
 
 
 On Thu, Jan 15, 2015 at 10:54 PM, Dmitri Zimine dzim...@stackstorm.com 
 wrote:
 Folks, 
 
 We use YAQL in Mistral for referencing variables, expressing conditions, etc. 
 Murano is using it extensively, and I saw the Heat folks thought of using it 
 at least once :) Maybe others...
 
 We are learning that YAQL is incredibly powerful compared to alternatives like 
 the Jinja2 templates used in salt / ansible. 
 
 But with the lack of documentation, it has become one of the adoption blockers 
 for Mistral (we got very vocal user feedback on this).
 
 This is pretty much all the docs I can offer our users on YAQL so far. Not 
 much. 
 http://yaql.readthedocs.org/en/latest/
 https://github.com/ativelkov/yaql/blob/master/README.rst
 https://murano.readthedocs.org/en/latest/murano_pl/murano_pl.html#yaql
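 
 For readers who have not seen it, a minimal YAQL expression of the kind 
 Mistral evaluates looks like this (illustrative only; the data and field 
 names are made up):
 
   $.vms.where($.state = 'active').select($.name)
 
 i.e. filter a collection in the evaluation context and project a field.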
 
 Are there any plans to fix it? 
 
 Is there interest from other projects in using YAQL? 
 
 Cheers, 
 DZ. 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Alessandro Pilotti
Hi all,

I’d like to ask for an FFE for the Hyper-V Rescue feature.

Patch:  https://review.openstack.org/#/c/127159/
Blueprint:  https://blueprints.launchpad.net/nova/+spec/hyper-v-rescue

It’s a feature parity blueprint with no impact outside of the Hyper-V driver.

It already received a +2 (currently lost through the usual various rebases).

The blueprint priority was set to medium before the K-2 freeze.

Drivers have been heavily penalized by the Juno and Kilo release cycle
prioritization. It’d be great if we could have at least this feature parity
patch released upstream, considering that the majority of this cycle's Hyper-V
blueprints, already fully implemented, will be postponed to L.

Thanks,

Alessandro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.versionedobjects repository is ready for pre-import review

2015-02-06 Thread Ben Nemec
Overall looks good to me and the unit tests are passing locally.  I'm
wondering about some of the stuff that was left commented out without a
FIXME, and I left a couple of comments about them; I'm mostly assuming
they were just things commented out for testing that weren't removed later.

-Ben

On 02/02/2015 03:59 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 2, 2015, at 04:33 PM, Doug Hellmann wrote:
 I’ve prepared a copy of nova.objects as oslo_versionedobjects in
 https://github.com/dhellmann/oslo.versionedobjects-import. The script to
 create the repository is part of the update to the spec in
 https://review.openstack.org/15.

 Please look over the code so you are familiar with it. Dan and I have
 already talked about the need to rewrite the tests that depend on nova’s
 service code, so those are set to skip for now. We’ll need to do some
 work to make the lib compatible with python 3, so I’ll make sure the
 project-config patch does not enable those tests, yet.

 Please post comments on the code here on the list in case I end up
 needing to rebuild that import repository.

 I’ll give everyone a few days before removing the WIP flag from the infra
 change to import this new repository
 (https://review.openstack.org/151792).
 
 I filed bugs for a few known issues that we'll need to work on before
 the first release: https://bugs.launchpad.net/oslo.versionedobjects
 
 Doug
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-cinderclient] Return request ID to caller

2015-02-06 Thread Joe Gordon
On Thu, Feb 5, 2015 at 11:24 PM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi Devs,



 This change is not backward compatible, and in order not to break the
 OpenStack services which are using cinder-client, we first need to make
 provision in these consumer services to handle the cinder-client return
 type change.

 To make this cinder-client change backward compatible we need to make
 changes in the consumers of cinder-client, as in this patch:
 https://review.openstack.org/#/c/152820/



 Also, for backward compatibility, we can make the changes suggested by Gary
 W. Smith on the cinder-spec: https://review.openstack.org/#/c/132161/6/.

 As per his suggestion, we add one new optional kwarg 'return_req_id' to the
 cinder-client API methods; when it is True, cinder-client returns the tuple,
 but when False (the default) it returns the current value (i.e. only the
 response body).



 For example, the cinder-client 'get' method would look like:



 def _get(self, url, response_key=None, return_req_id=False):
     resp, body = self.api.client.get(url)

     if response_key:
         body = self.resource_class(self, body[response_key],
                                    loaded=True)
     else:
         body = self.resource_class(self, body, loaded=True)

     if return_req_id:
         # return tuple containing headers and body
         return (resp.headers, body)

     return body





 If we want headers from cinder-client then we need to pass the kwarg
 'return_req_id' as True from the caller. For example, from nova we would
 call the cinder-client get method as:

     resp_header, volume = cinderclient(context).volumes.get(
         volume_id, return_req_id=True)

 With this optional 'return_req_id' kwarg we do not need to make changes in
 the services which are using cinder-client, so the change is backward
 compatible.


Maintaining backwards compatibility is very important. Making return_req_id
optional sounds like a good solution going forward.




 Could you please give your suggestion on this approach.



 Thanks,



 Abhishek





 *From:* Joe Gordon [mailto:joe.gord...@gmail.com]
 *Sent:* 05 February 2015 22:50
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [python-cinderclient] Return request ID to
 caller







 On Wed, Feb 4, 2015 at 11:23 PM, Malawade, Abhijeet 
 abhijeet.malaw...@nttdata.com wrote:

 Hi,



 I have submitted a patch for cinder-client [1] to 'Return tuple containing
 header and body from client' instead of just the response.

 Also cinder spec for the same is under review [2].



 This change will break OpenStack services which are using cinder-client.
 In order not to break those services, we first need to make changes in those
 projects to check the return type of cinder-client. We are working on these
 cinder-client return-type check changes in OpenStack services like nova,
 glance_store, heat, trove, manila, etc.

 We have already submitted patch for same against nova :
 https://review.openstack.org/#/c/152820/



 [1] https://review.openstack.org/#/c/152075/

 [2] https://review.openstack.org/#/c/132161/



 This sounds like a backwards-incompatible change to the python client
 that will break downstream consumers of python-cinderclient. This change
 should be done in a way that allows us to deprecate the old usage without
 breaking it right away.





 I want to seek early feedback from the community members on the above
 patches, so please give your thoughts on the same.



 Thanks,

 Abhijeet Malawade


 __
 Disclaimer: This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-06 Thread Joe Gordon
Before releasing a new python-novaclient we should make sure novaclient is
capped on stable branches so we don't break the world yet again.

On Fri, Feb 6, 2015 at 8:17 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

 We haven't done a release of python-novaclient in a while (2.20.0 was
 released on 2014-09-20, before the Juno release).

 It looks like there are some important feature adds and bug fixes on
 master so we should do a release, specifically to pick up the change for
 keystone v3 support [1].

 So can this be done now, or should this wait until closer to the Kilo
 release? (Library releases are cheap, so I don't see why we'd wait.)

 [1] https://review.openstack.org/#/c/105900/

 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception Request for Quiesce boot from volume instances

2015-02-06 Thread Tomoki Sekiyama
Hello,

I'd like to request a feature freeze exception for the change
  https://review.openstack.org/#/c/138795/ .

This patch makes live volume-boot instance snapshots consistent by
quiescing instances before snapshotting. Quiescing for image-boot
instances is already merged in the libvirt driver, and this is the
complementary part for volume-boot instances.
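
For background on the mechanism: the libvirt driver quiesces a guest through
the QEMU guest agent. Conceptually it is the following (a sketch with
libvirt-python, not the actual nova code; it assumes the guest agent is
running inside the instance):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    dom.fsFreeze()      # flush and freeze guest filesystems via the agent
    try:
        pass            # take the volume/image snapshot here
    finally:
        dom.fsThaw()    # always unfreeze, even if the snapshot fails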


Nikola Dipanov and Daniel Berrange actively reviewed the patch, and I hope
it is ready now (+1 from Nikola, with a comment that he is waiting for the
FFE process at this point, so no +2s yet).
Please consider approving this FFE.


Best regards,
Tomoki Sekiyama


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Volume Replication and Migration bug triage ...

2015-02-06 Thread Jay S. Bryant

All,

In discussion with Mike Perez earlier this week the following bugs were 
highlighted in Volume Migration and Volume Replication.  IBM is focusing 
on investigating and resolving these bugs.


I will be putting out updates as we progress towards resolution of these 
issues.


Replication:

https://bugs.launchpad.net/cinder/+bug/1390001 -- Investigated by Tao 
and found to be Invalid

https://bugs.launchpad.net/cinder/+bug/1384040 -- assigned to Tao
https://bugs.launchpad.net/cinder/+bug/1372292 -- assigned to Tao
https://bugs.launchpad.net/cinder/+bug/1370311 -- Requires multi-pool 
scheduler awareness and replica promote supporting multiple pools.  BP 
opened: 
https://blueprints.launchpad.net/cinder/+spec/storwize-support-muli-pool-within-one-backend-relative-features
https://bugs.launchpad.net/cinder/+bug/1383524 -- Currently assigned to 
Ronen with updates from Avishay.  Have a question in to Avishay to see 
if he can keep investigating.


Migration:

https://bugs.launchpad.net/cinder/+bug/1404013 -- Fix released for this.
https://bugs.launchpad.net/cinder/+bug/1403916 -- Question out to the 
reporter to see if this is still an issue. (LVM)
https://bugs.launchpad.net/cinder/+bug/1403912 -- Question out to the 
reporter to see if this is still an issue. (LVM)

https://bugs.launchpad.net/cinder/+bug/1403904 -- Marked Invalid
https://bugs.launchpad.net/cinder/+bug/1391179 --  Assigned to Alon Marx 
as this is an issue that was seen on XIV.
https://bugs.launchpad.net/cinder/+bug/1283313 -- Avishay was looking 
into this.  Asked if he still is doing so.
https://bugs.launchpad.net/cinder/+bug/1255957 -- Currently marked 
incomplete.  May warrant further investigation.

https://bugs.launchpad.net/cinder/+bug/1391172 -- Fix released
https://bugs.launchpad.net/cinder/+bug/1403902 -- A number of patches 
have been proposed around this one.  Will follow up to understand if it 
is still a problem.
https://bugs.launchpad.net/cinder/+bug/1255622 -- John was the last one 
looking at this.  Appears to work in some situations.

https://bugs.launchpad.net/cinder/+bug/1398177 -- Assigned to Vincent Hou
https://bugs.launchpad.net/cinder/+bug/1308822 -- Assigned to Tao or Li Min.
https://bugs.launchpad.net/cinder/+bug/1278289 -- Assigned to Jay. Will 
investigate
https://bugs.launchpad.net/cinder/+bug/1308315 -- Tao is investigating 
but hasn't been able to recreate.


While triaging all the issues I updated the ones for migration with a 
'migration' tag and updated the replication ones with a 'replication' tag.


If you have any questions/concerns about this, please let me know. 
Otherwise we will work towards cleaning these up.


Thanks!
Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception Request (Use libvirt storage pools)

2015-02-06 Thread Solly Ross
Hi,

I would like to request a non-priority feature freeze exception for the 
Use libvirt storage pools blueprint [1].

The blueprint introduces a new image backend type that uses libvirt storage 
pools,
and is designed to supersede several of the existing image backends for Nova.
Using libvirt storage pools simplifies both the maintenance of existing code
and the introduction of future storage pool types (since we can support
any libvirt storage pool format that supports the createXMLFrom API call).
It also paves the way for potentially using the storage pool API to assist
with SSH-less migration of disks (not part of this blueprint).
The blueprint also provides a way to migrate disks using legacy backends
to the new backend on cold migrations/resizes, reboots (soft and hard),
and live block migrations.

The code [2] is up and working, and is split into (hopefully) manageable chunks.

Best Regards,
Solly Ross

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
[2] https://review.openstack.org/#/c/152348/ and onward

P.S. I would really like to get this in, since this would be the second time 
that this has been deferred, and it took a good bit of manual rebasing to 
create the Kilo version from the Juno version.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] how to use a devstack external plugin in gate testing

2015-02-06 Thread Kyle Mestery
On Fri, Feb 6, 2015 at 1:36 PM, Sean Dague s...@dague.net wrote:

 For those that didn't notice, on the Devstack team we've started to push
 back on new in-tree support for all the features. That's intentional.
 We've got an external plugin interface now -

 http://docs.openstack.org/developer/devstack/plugins.html#externally-hosted-plugins
 ,
 and have a few projects like the ec2api and glusterfs that are
 successfully using it. Our future direction is to do more of this -
 https://review.openstack.org/#/c/150789/

 The question people ask a lot is 'but, how do I do a gate job with the
 external plugin?'.

 Starting with the stackforge/ec2api we have an example up on how to do
 that: https://review.openstack.org/#/c/153659/

 The important bits are as follows:

 1. the git repo that you have your external plugin in *must* be in
 gerrit. stackforge is fine, but it has to be hosted in the OpenStack
 infrastructure.

 2. The job needs to add your PROJECT to the projects list, i.e.:

 export PROJECTS="stackforge/ec2-api $PROJECTS"

 3. The job needs to add a DEVSTACK_LOCAL_CONFIG line for the plugin
 enablement:

 export DEVSTACK_LOCAL_CONFIG="enable_plugin ec2-api git://git.openstack.org/stackforge/ec2-api"

 Beyond that you can define your devstack job however you like. It can
 test with Tempest. It can instead use a post_test_hook for functional
 testing. Whatever is appropriate for your project.

 This is awesome, Sean! Thanks for the inspiration here. In fact, I just
pushed a series of patches [1] [2] which do the same for the networking-odl
stackforge project.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/153704/
[2] https://review.openstack.org/#/c/153705/

-Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] will the real v2.1/v3 API status please stand up?

2015-02-06 Thread Matt Riedemann
I'm not going to hide it: I don't know what's going on with the v2.1 API 
status, i.e. what are the criteria for that thing dropping its 
'experimental' label?


I wasn't at the mid-cycle meetup for Kilo but even for Juno I'll admit I 
was a bit lost. It's not my fault, I'm more good looks than brains. :)


When I look at approved specs for Kilo, three pop out:

1. https://blueprints.launchpad.net/nova/+spec/v2-on-v3-api

2. https://blueprints.launchpad.net/nova/+spec/api-microversions

3. https://blueprints.launchpad.net/nova/+spec/v3-api-policy

The only one of those that has a dependency in launchpad is the last one 
and it's dependency is on:


https://blueprints.launchpad.net/nova/+spec/nova-v3-api

Which looks like it was replaced by the v2-on-v3-api blueprint.

If I look at the open changes for each, there are a lot:

1. 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/v2-on-v3-api,n,z


2. 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/api-microversions,n,z


3. 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/v3-api-policy,n,z


Do those all need to merge before the v2.1 API is no longer experimental?

Are the, for lack of a better term, 'completion criteria' being tracked 
in an etherpad or wiki page somewhere?  I see stuff in the priorities 
etherpad https://etherpad.openstack.org/p/kilo-nova-priorities-tracking 
but it's not clear to me at a high level what makes v2.1 no longer 
experimental.


Can someone provide that in less than 500 words?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Mid-Cycle Meetup Planning

2015-02-06 Thread Adrian Otto
Team,

Our dates have been set as 2015-03-02 and 2015-03-03.

Wiki (With location, map, calendar links, agenda planning link, and links to 
tickets):
https://wiki.openstack.org/wiki/Magnum/Midcycle

RSVP Tickets:
https://www.eventbrite.com/e/magnum-midcycle-meetup-tickets-15673361446

Please be sure to register on our Eventbrite page above so we will know how 
many to plan for at lunch.

Thanks,

Adrian

On Jan 26, 2015, at 3:49 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Team,
 
 Thanks for participating in the poll. Due to considerable scheduling 
 conflicts, I am expanding the poll to include the following Monday 
 2015-03-02+Tuesday 2015-03-03. Hopefully these alternate dates can get more 
 of us together on the same days.
 
 Please take a moment to respond to the poll a second time to indicate your 
 availability on the newly proposed dates:
 
 http://doodle.com/ddgsptuex5u3394m
 
 Thanks,
 
 Adrian
 
 On Jan 8, 2015, at 2:24 PM, Adrian Otto adrian.o...@rackspace.com wrote:
 
 Team,
 
 If you have been watching the Magnum project you know that things have 
 really taken off recently. At Paris we did not contemplate a Mid-Cycle 
 meet-up but now that we have come this far so quickly, and have such a broad 
 base of participation now, it makes sense to ask if you would like to attend 
 a face-to-face mid-cycle meetup. I propose the following for your 
 consideration:
 
 - Two full days to allow for discussion of Magnum architecture, and 
 implementation of our use cases.
 - Located in San Francisco.
 - Open to using Los Angeles or another west coast city to drive down travel 
 expenses, if that is a concern that may materially impact participation.
 - Dates: February 23+24 or 25+26
 
 If you think you can attend (with 80+% certainty) please indicate your 
 availability on the proposed dates using this poll:
 
 http://doodle.com/ddgsptuex5u3394m
 
 Please also add a comment on the Doodle Poll indicating what Country/US City 
 you will be traveling FROM in order to attend.
 
 I will tabulate the responses, and follow up to this thread. Feel free to 
 respond to this thread to discuss your thoughts about if we should meet, or 
 if there are other locations or times that we should consider.
 
 Thanks,
 
 Adrian
 
 PS: I do recognize that some of our contributors reside in countries that 
 require Visas to travel to the US, and those take a long time to acquire. 
 The reverse is also true. For those of you who can not attend in person, we 
 will explore options for remote participation using teleconferencing 
 technology, IRC, Etherpad, etc for limited portions of the agenda.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need nova-specs core reviews for Scheduler spec

2015-02-06 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/06/2015 12:15 PM, Nikola Đipanov wrote:

 I've left a comment on the spec - basically I don't think this is an
 approach we should take.

Understood. There have been so many back-and-forth changes, with each
making the solution more and more complex, that it is hard to understand
the benefit. Essentially, we were trying to get as close to the eventual
state where the Scheduler knows everything it needs to make its decisions.

 Since I was not at the midcycle, I am sorry the discussions happened so
 close to the FF freeze, and there was not enough time to get broader
 feedback from the community in time.

No worries; it's not like there isn't anything else for you guys to be
reviewing or anything. :)

IAC, I've pushed a new revision of the spec. It's much simpler, with
much less impact, but it doesn't model the 'scheduler knows all' ideal
as much. I'd be interested in your feedback, as well as everyone else's.
I think that this simpler approach will also be much more doable in the
Kilo timeframe.

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iQIcBAEBAgAGBQJU1VpIAAoJEKMgtcocwZqLHeEP/RpivR5YZCdQDe40ZhehsbIE
w2e5Bcb7ZyXQPjg0ZZB4kqU4mwxgLv2Sa3yTBq5Sy/3YXY5Odxbt6svPmMtI+dBt
YqJurNGORtuqnHxbPMZG5p5FLCXLaAmw/da6TpUbcw09ZhkBSq8MtBkAokxIiGsa
aHTnFzjRDgARb69wjW2wpU4N19K69KYtTvEHTXxvNPN6bGD2M9byzox+VEWPrZVd
3kNpYNsulC1FbdXFg3vg41GkOKngyEKeSkDHUEoEs9rKVE0Lg4eYjmFwVl0Mfxdx
YG+kLqjZ9raHYVUH3Ej2orIA8I2NucaNIxUfuWgQmkxdchd3pIB0IW4TARfmU0DN
QQVYWtk6menx+z8I4ZCMlobvOmPWgaETSYxqz8muNeahP+OMWcFGezigl1JufLVS
M3X+aOPeMUArFMciFX/p+rFLKOvUwYLQ9BzM0eBW5QUE/bZMdSHDgaxNDofPVLPF
JjhHVX0kz7z3s4B6hnuCUWQlg1HBjf+tK6LENOMEFH838OtLmXU/fnobC9LyDQxb
rmjCDasJutZuPWRqTWTvOXjNGjoIaierDuiE1Z9z6Va3IcHnGVR61J8w2O7HvfKW
IZ2oSiWWZQyOGCLLgNCEHBhtkGina+vZGA7AzI/7dlZGHM43YcR3BLA0oOVvcefN
+8EZCLYW9lmfUo1B7jog
=kh4g
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Andrew Woodward
On Fri, Feb 6, 2015 at 11:16 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Dmitry,
 thanks for sharing CLI options. I'd like to clarify a few things.

  Also very important to understand that if task is mapped to role
 controller, but node where you want to apply that task doesn't have this
 role - it wont be executed.
 Is there a particular reason why we want to restrict a user to run an
 arbitrary task on a server, even if server doesn't have a role assigned? I
 think we should be flexible here - if I'm hacking something, I'd like to
 run arbitrary things.

 The way I've seen this work so far is that the missing role in the graph simply
won't be executed, not the requested role.


  fuel node --node 1,2,3 --end netconfig
 I would replace --end - --end-on, in order to show that task specified
 will run as well (to avoid ambiguity)

 This is separate question probably about CLI UX, but still - are we Ok
 with missing an action verb, like deploy? So it might be better to have,
 in my opinion:
 fuel deploy --node 1,2,3 --end netconfig

 The provision and deploy tasks are already under node, so it makes some sense
to keep them there unless everything moves.


  For example if one want to execute only netconfig successors:
  fuel node --node 1,2,3 --start netconfig --skip netconfig
 I would come up with shortcut for one task. To me, it would be way better
 to specify comma-separated tasks:

 fuel deploy --node 1,2,3 --task netconfig[,task2]

This already appears to work as:
fuel node --node 1,2,3 --task netconfig compute



Question here: if netconfig depends on other tasks, will those be executed
 prior to netconfig? I want both options, execute with prior deps, and
 execute just one particular task.

  Also we are working on deployment graph visualization
 yes, this would be awesome to have. When I specify --start and --end, I'll
 want to know what is going to be executed in reality, before it gets
 executed. So something like dry-run which shows execution graph would be
 very helpful.

We need to start with a better graph of everything that will run on each
node and in which order; I've yet to see something that renders the whole
graph, including the requested deps on each node. It would be very useful for
debugging.


 As a separate note here, a few question:

1. If particular task fails to execute for some reason, what is the
error handling? Will I be able to see puppet/deployment tool exception
right in the same console, or should I check out some logs? We need to have
perfect UX for errors. Those who will be using CLI to run particular tasks,
will be dealing with errors for 95% of their time.

 The puppet-based tasks run and should show errors the same as legacy
deployments or plugin tasks.


2. I'd love to have some guidance on slave node as well. For instance,
I want to run just netconfig on slave node. How can I do it?

 fuel node --node 1 --tasks netconfig


3. If I stuck with error in task execution, which is in puppet. Can I
modify puppet module on master node, and re-run the task? (assuming that
updated module will be rsynced to slaves under deployment first)

 That's exactly how it works.


 Thanks Dmitry!

 On Sat, Feb 7, 2015 at 12:16 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:


 Thank you for the excellent run-down of the CLI commands. I assume this
 will make its way into the developer documentation? I would like to know if
 you could point me to more information about the inner workings of granular
 deployment. Currently it's challenging to debug issues related to granular
 deployment.


 All tasks that are in scope of role are serialized right into deployment
 configuration that is consumed by astute. So it can be traced in the logs
 (nailgun or astute) or in astute.yaml that is stored on node itself. Here
 is what it looks like [0].
 Some other internals described in spec -
 https://review.openstack.org/#/c/113491/.

 For developer it makes sense to get familiar with networkx data
 structures [1], and then dive into debuging of [2].
 But it is not an option for a day-to-day usage, and UX will be improved
 by graph visualizer [3].

 One more option that can improve understanding is human-readable planner..
 For example it can output smth like this:

  fuel deployment plan --start hiera --end netconfig

Manifest hiera.pp will be executed on nodes [1,2,3]
Manifest netconfig will be executed on nodes [1,2]

 But i am not sure is this thing really necessary, dependencies are
 trivial in comparison to puppet, and i hope it will take very little time to
 understand how things are working :)

 As an example there is a bug [0] where tasks appear to be run in the
 wrong order based on which combination of roles exist in the environment.
 However, it's not clear how to determine what decides which tasks to run
 and when (is it astute, fuel-library, etc.), where the data comes from.
 etc.


 As for the bug - it may be a duplicate for
 

[openstack-dev] BUG in OpenVSwitch Version ovs-vswitchd (Open vSwitch) 1.4.6

2015-02-06 Thread masoom alam
Hi everyone,

Can anyone spot why the following bug appears in OpenStack, leaving all
Neutron services in an unusable state?

To give you an idea of what I was trying:

I tried to assign the IP 173.39.237.0 to a VM, with the CIDR 173.39.236.0/23;
however, OVS gave an error and now all the neutron services are completely
unusable.

2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent
call last):
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
line 1197, in rpc_loop
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent port_info =
self.scan_ports(ports, updated_ports_copy)
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
line 821, in scan_ports
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent
updated_ports.update(self.check_changed_vlans(registered_ports))
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
line 848, in check_changed_vlans
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent port_tags =
self.int_br.get_port_tag_dict()
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/agent/linux/ovs_lib.py, line 394, in
get_port_tag_dict
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent result =
self.run_vsctl(args, check_error=True)
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/agent/linux/ovs_lib.py, line 67, in run_vsctl
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent return
utils.execute(full_args, root_helper=self.root_helper)
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/agent/linux/utils.py, line 75, in execute
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise
RuntimeError(m)
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError:
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo',
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
'ovs-vsctl', '--timeout=10', '--format=json', '--', '--columns=name,tag',
'list', 'Port']
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 1
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: 'Traceback
(most recent call last):\n  File "/usr/local/bin/neutron-rootwrap", line 4,
in <module>\n
 __import__(\'pkg_resources\').require(\'neutron==2013.2.4.dev32\')\n  File
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
3018, in <module>\n    working_set = WorkingSet._build_master()\n  File
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
614, in _build_master\n    return
cls._build_from_requirements(__requires__)\n  File
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
627, in _build_from_requirements\n    dists = ws.resolve(reqs,
Environment())\n  File
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
805, in resolve\n    raise
DistributionNotFound(req)\npkg_resources.DistributionNotFound:
alembic<0.6.4,>=0.4.1\n'
2015-02-04 05:25:06.993 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Mike Scherbakov
Dmitry,
thanks for sharing CLI options. I'd like to clarify a few things.

 Also very important to understand that if task is mapped to role
controller, but node where you want to apply that task doesn't have this
role - it wont be executed.
Is there a particular reason why we want to restrict a user from running an
arbitrary task on a server, even if the server doesn't have a role assigned? I
think we should be flexible here: if I'm hacking something, I'd like to
run arbitrary things.

 fuel node --node 1,2,3 --end netconfig
I would replace --end with --end-on, in order to show that the specified task
will run as well (to avoid ambiguity).

This is a separate question, probably about CLI UX, but still: are we OK with
missing an action verb, like deploy? So it might be better to have, in my
opinion:
fuel deploy --node 1,2,3 --end netconfig

 For example if one want to execute only netconfig successors:
 fuel node --node 1,2,3 --start netconfig --skip netconfig
I would come up with a shortcut for a single task. To me, it would be way
better to specify comma-separated tasks:
 fuel deploy --node 1,2,3 --task netconfig[,task2]

Question here: if netconfig depends on other tasks, will those be executed
prior to netconfig? I want both options: execute with prior deps, and
execute just one particular task.

 Also we are working on deployment graph visualization
yes, this would be awesome to have. When I specify --start and --end, I'll
want to know what is actually going to be executed, before it gets
executed. So something like a dry-run which shows the execution graph would
be very helpful.

As a separate note here, a few question:

   1. If a particular task fails to execute for some reason, what is the
   error handling? Will I be able to see the puppet/deployment tool exception
   right in the same console, or should I check out some logs? We need to have
   perfect UX for errors. Those who will be using the CLI to run particular
   tasks will be dealing with errors 95% of their time.
   2. I'd love to have some guidance on the slave node as well. For instance, I
   want to run just netconfig on a slave node. How can I do it?
   3. What if I get stuck with an error in a task's puppet execution - can I
   modify the puppet module on the master node and re-run the task? (assuming
   that the updated module will be rsynced to slaves under deployment first)

Thanks Dmitry!

On Sat, Feb 7, 2015 at 12:16 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


 Thank you for the excellent run-down of the CLI commands. I assume this
 will make its way into the developer documentation? I would like to know if
 you could point me to more information about the inner workings of granular
 deployment. Currently it's challenging to debug issues related to granular
 deployment.


 All tasks that are in the scope of a role are serialized right into the
 deployment configuration that is consumed by astute. So it can be traced in
 the logs (nailgun or astute) or in the astute.yaml that is stored on the
 node itself. Here is what it looks like [0].
 Some other internals described in spec -
 https://review.openstack.org/#/c/113491/.

 For a developer it makes sense to get familiar with networkx data
 structures [1], and then dive into debugging of [2].
 But it is not an option for day-to-day usage, and UX will be improved by
 the graph visualizer [3].
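
 To poke at the graph by hand, here is a minimal networkx sketch (the task
 ids and edges are made up for illustration, not the real fuel-library
 graph):

 import networkx as nx

 graph = nx.DiGraph()
 # edge A -> B means B depends on A
 graph.add_edges_from([('hiera', 'globals'),
                       ('globals', 'netconfig'),
                       ('netconfig', 'upload_cirros')])

 # everything required to reach netconfig, i.e. what --end netconfig selects
 wanted = nx.ancestors(graph, 'netconfig') | {'netconfig'}
 print(list(nx.topological_sort(graph.subgraph(wanted))))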

 One more option that can improve understanding is a human-readable planner.
 For example it could output something like this:

  fuel deployment plan --start hiera --end netconfig

Manifest hiera.pp will be executed on nodes [1,2,3]
Manifest netconfig will be executed on nodes [1,2]

 But I am not sure this thing is really necessary; the dependencies are
 trivial in comparison to puppet, and I hope it will take very little time
 to understand how things work :)

 As an example there is a bug [0] where tasks appear to be run in the wrong
 order based on which combination of roles exist in the environment.
 However, it's not clear how to determine what decides which tasks to run
 and when (is it astute, fuel-library, etc.), where the data comes from.
 etc.


 As for the bug - it may be a duplicate for
 https://launchpad.net/bugs/1417579, which was fixed by
 https://review.openstack.org/#/c/152511/

 [0] http://paste.openstack.org/show/168298/
 [1]
 http://networkx.github.io/documentation/latest/tutorial/tutorial.html#directed-graphs
 [2]
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_graph.py#L29
 [3] https://review.openstack.org/#/c/152434/





-- 
Mike Scherbakov
#mihgen

Re: [openstack-dev] BUG in OpenVSwitch Version ovs-vswitchd (Open vSwitch) 1.4.6

2015-02-06 Thread James Polley
On Sat, Feb 7, 2015 at 5:09 AM, masoom alam masoom.a...@wanclouds.net
wrote:

 raise DistributionNotFound(req)\npkg_resources.DistributionNotFound:
 alembic<0.6.4,>=0.4.1\n'


It looks like your system is failing to find a version of alembic that
satisfies those requirements.

In your last post on this issue you said you had alembic 0.7.4 installed
already. That doesn't satisfy the requirement for a version < 0.6.4. Perhaps
you need to uninstall that version, or downgrade to something that meets
the requirements?

Alternatively - it seems as though you may have a very old version of
neutron-rootwrap. I don't know much about Neutron so maybe I'm reading this
wrong, but require(\'neutron==2013.2.4.dev32\') suggests old age to me.
Perhaps it would be possible to upgrade your version?

FWIW, I tested in a virtualenv; with just alembic==0.7.4 installed, pip
install -r requirements.txt on a file that contained alembic<0.6.4,>=0.4.1
was able to downgrade my environment from 0.7.4 to 0.6.3. It seems possible
that you could be talking to a PyPI proxy that is missing the older
alembic packages, and that adding those packages to the proxy so they can
be downloaded might resolve your issue.


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-06 Thread Angus Lees
Thanks for the additional details Peter.  This confirms the parts I'd
deduced from the docs I could find, and is useful knowledge.

On Sat Feb 07 2015 at 2:24:23 AM Peter Boros peter.bo...@percona.com
wrote:

 - Like many others said it before me, consistent reads can be achieved
 with wsrep_causal_reads set on in the session.


So the example was two dependent command-line invocations (write followed
by read) that have no way to re-use the same DB session (without
introducing lots of affinity issues that we'd also like to avoid).

Enabling wsrep_causal_reads makes sure the latter read sees the effects of
the earlier write, but comes at the cost of delaying all reads by some
amount depending on the write-load of the galera cluster (if I understand
correctly).  This additional delay was raised as a concern severe enough
not to just go down this path.

Really we don't care about other writes that may have occurred (we always
need to deal with races against other actors); we just want to ensure our
earlier write has taken effect on the galera server where we sent the
second read request.  If we had some way to say "wsrep_delay_until
$first_txid" then we could be sure of read-after-write from a different
DB session and also (in the vast majority of cases) suffer no additional
delay.  An opaque sequencer is a generic concept across many of the
distributed consensus stores I'm familiar with, so this needn't be exposed
as a Galera-only quirk.


Meh, I gather people are bored with the topic at this point.  As I
suggested much earlier, I'd just enable wsrep_causal_reads on the first
request for the session and then move on to some other problem ;)
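
Something like this, as a rough sketch (SQLAlchemy's connect hook; the URL
is a placeholder, and the SET statement assumes a Galera-backed MySQL that
accepts it):

from sqlalchemy import create_engine, event

engine = create_engine('mysql://user:secret@galera-vip/nova')

@event.listens_for(engine, 'connect')
def enable_causal_reads(dbapi_conn, conn_record):
    # give every pooled connection read-your-writes semantics
    cursor = dbapi_conn.cursor()
    cursor.execute('SET SESSION wsrep_causal_reads = ON')
    cursor.close()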

 - Gus


[openstack-dev] [Magnum] Scheduling for Magnum

2015-02-06 Thread Adrian Otto
Magnum Team,

In our initial spec, we addressed the subject of resource scheduling. Our plan 
was to begin with a naive scheduler that places resources on a specified Node 
and can sequentially fill Nodes if one is not specified.

Magnum supports multiple conductor backends[1], one of which is our Kubernetes 
backend. We also have a native Docker backend that we would like to enhance so 
that when pods or containers are created, the target nodes can be selected 
according to user-supplied filters. Some examples of this are:

Constraint, Affinity, Anti-Affinity, Health

We have multiple options for solving this challenge. Here are a few:

1) Cherry pick scheduler code from Nova, which already has a working filter 
scheduler design. 
2) Integrate swarmd to leverage its scheduler[2]. 
3) Wait for Gantt, when the Nova scheduler is moved out of Nova. This is 
expected to happen about a year from now, possibly sooner.
4) Write our own filter scheduler, inspired by Nova.

I suggest that we deserve to have a scheduling implementation for our native 
docker backend before Gantt is ready. It’s unrealistic that the Magnum team 
will be able to accelerate Gantt’s progress, as significant changes must be 
made in Nova for this to happen. The Nova team is best equipped to do this. It 
requires active participation from Nova’s core review team, which has limited 
bandwidth, and other priorities to focus on. I think we unanimously agree that 
we would prefer to use Gantt, if it were available sooner.

I suggest we also rule out option 4, because it amounts to re-inventing the 
wheel.

That leaves us with options 1 and 2 in the short term. The disadvantage of 
either of these approaches is that we will likely need to remove them and 
replace them with Gantt (or a derivative work) once it matures. The advantage 
of option 1 is that python code already exists for this, and we know it works 
well in Nova. We could cherry pick that over, and drop it directly into Magnum. 
The advantage of option 2 is that we leverage the talents of the developers 
working on Swarm, which is better than option 4. New features are likely to 
surface in parallel with our efforts, so we would enjoy the benefits of those 
without expending work in our own project.

So, how do you feel about options 1 and 2? Which do you feel is more suitable 
for Magnum? What other options should we consider that might be better than 
either of these choices?

I have a slight preference for option 2 - integrating with Swarm, but I could 
be persuaded to pick option 1, or something even more brilliant. Please discuss.
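
To ground the discussion, the filter scheduler pattern that options 1 and 4
refer to boils down to a few lines (a toy sketch with made-up names, not
Nova's actual code):

def healthy(node, request):
    return node.get('healthy', False)

def anti_affinity(node, request):
    # reject nodes already hosting a container from the same group
    return request.get('group') not in node.get('groups', set())

def schedule(nodes, request, filters):
    # keep only the nodes that pass every filter, then pick the first
    candidates = [n for n in nodes
                  if all(f(n, request) for f in filters)]
    return candidates[0] if candidates else None

# e.g. schedule(nodes, {'group': 'web'}, [healthy, anti_affinity])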

Thanks,

Adrian

[1] https://github.com/stackforge/magnum/tree/master/magnum/conductor/handlers
[2] https://github.com/docker/swarm/tree/master/scheduler/
[3] https://wiki.openstack.org/wiki/Gantt


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-06 Thread James Bottomley
On Sat, 2015-02-07 at 00:44 +, Adrian Otto wrote:
 Magnum Team,
 
 In our initial spec, we addressed the subject of resource scheduling. Our 
 plan was to begin with a naive scheduler that places resources on a specified 
 Node and can sequentially fill Nodes if one is not specified.
 
 Magnum supports multiple conductor backends[1], one of which is our 
 Kubernetes backend. We also have a native Docker backend that we would like 
 to enhance so that when pods or containers are created, the target nodes can 
 be selected according to user-supplied filters. Some examples of this are:
 
 Constraint, Affinity, Anti-Affinity, Health
 
 We have multiple options for solving this challenge. Here are a few:
 
 1) Cherry pick scheduler code from Nova, which already has a working
 filter scheduler design. 
 2) Integrate swarmd to leverage its scheduler[2]. 
 3) Wait for Gantt, when the Nova scheduler is moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
 4) Write our own filter scheduler, inspired by Nova.
 
 I suggest that we deserve to have a scheduling implementation for our
 native docker backend before Gantt is ready. It’s unrealistic that the
 Magnum team will be able to accelerate Gantt’s progress, as
 significant changes must be made in Nova for this to happen. The Nova
 team is best equipped to do this. It requires active participation
 from Nova’s core review team, which has limited bandwidth, and other
 priorities to focus on. I think we unanimously agree that we would
 prefer to use Gantt, if it were available sooner.
 
 I suggest we also rule out option 4, because it amounts to
 re-inventing the wheel.
 
 That leaves us with options 1 and 2 in the short term. The
 disadvantage of either of these approaches is that we will likely need
 to remove them and replace them with Gantt (or a derivative work) once
 it matures. The advantage of option 1 is that python code already
 exists for this, and we know it works well in Nova. We could cherry
 pick that over, and drop it directly into Magnum. The advantage of
 option 2 is that we leverage the talents of the developers working on
 Swarm, which is better than option 4. New features are likely to
 surface in parallel with our efforts, so we would enjoy the benefits
 of those without expending work in our own project.
 
 So, how do you feel about options 1 and 2? Which do you feel is more
 suitable for Magnum? What other options should we consider that might
 be better than either of these choices?
 
 I have a slight preference for option 2 - integrating with Swarm, but
 I could be persuaded to pick option 1, or something even more
 brilliant. Please discuss.

Got to say that Option 1 looks far preferable.  As you say, we have to
switch to Gantt eventually, so it might end up being an expensive and
difficult retrofit with Option 2.  With Option 1, we look mostly like
the Nova scheduler, so we can let them take the initial hit of doing the
shift to Gantt and slipstream in their wake once the major pain points
are ironed out.

James





Re: [openstack-dev] [openstack][nova] Question on rollback live migration at the destination

2015-02-06 Thread Shuichiro MAKIGAKI

Robert,

Your concern seems to be correct. The bug has already been reported:
https://bugs.launchpad.net/nova/+bug/1284719.
# Oops, 1 year old bug...
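
For reference, the behaviour in question boils down to roughly this (a
paraphrased sketch, not the actual nova code):

def _live_migration_cleanup_flags(is_shared_instance_path):
    # On shared storage (e.g. NFS) nova assumes there is nothing to clean
    # up at the destination, so rollback_live_migration_at_destination()
    # is skipped -- along with VIF unplugging and freeing SR-IOV devices.
    do_cleanup = not is_shared_instance_path
    return do_cleanup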

Regards,
Makkie

On 2015/01/27 3:58, Robert Li (baoli) wrote:

Hi,

I’m looking at rollback_live_migration_at_destination() in
compute/manager.py. If it’s shared storage (such as NFS,
is_shared_instance_path is True), it’s not going to be called since
_live_migration_cleanup_flags() will return False. Can anyone let me
know what’s the reason behind it? So nothing needs to be cleaned up at
the destination in such case? Should VIFs be unplugged, to say the least?

I’m working on the live migration support with SR-IOV macvtap
interfaces. The devices allocated at the destination needs to be freed.

thanks,
Robert




[openstack-dev] [MagnetoDB] Kilo-2 development milestone available

2015-02-06 Thread Ilya Sviridov
Hello everyone,

MagnetoDB Kilo-2 development milestone has been released

https://launchpad.net/magnetodb/kilo/kilo-2

Have a nice day,
Ilya Sviridov


[openstack-dev] [Telco][NFV][infra] Review process of TelcoWG use cases

2015-02-06 Thread Marc Koderer
Hello everyone,

we are currently facing the issue that we don’t know how to proceed with
our telco WG use cases. There are many of them already defined, but
reviewing them via Etherpad doesn’t seem to work.

I suggest reviewing them with the usual OpenStack tooling.
Therefore I uploaded one of them (Session Border Controller) to the
sandbox repo in the Gerrit system:

https://review.openstack.org/#/c/152940/1

I would really like to see how many reviews we can get on it.
If this works out, my idea is the following:

 - we create a project under Stackforge called telcowg-usecases
 - we link blueprints related to the use cases
 - we build a core team and approve/prioritize them

Regards
Marc


Re: [openstack-dev] [Openstack-operators] [Telco][NFV][infra] Review process of TelcoWG use cases

2015-02-06 Thread Steve Gordon
- Original Message -
 From: Anthony Veiga anthony_ve...@cable.comcast.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  On Feb 6, 2015, at 8:17 , Jeremy Stanley fu...@yuggoth.org wrote:
  
  On 2015-02-06 12:11:40 +0100 (+0100), Marc Koderer wrote:
  [...]
  Therefore I uploaded one of them (Session Border Controller) to
  the Gerrit system into the sandbox repo:
  
 https://review.openstack.org/#/c/152940/1
  [...]
  
  This looks a lot like the beginnings of a specification which has
  implications for multiple OpenStack projects. Would proposing a
  cross-project spec in the openstack/openstack-specs repository be an
  appropriate alternative?
 
 It does look like that.  However, the intent here is to allow non-developer
 members of a Telco to provide the use cases they need to accomplish. This way
 the Telco WG can identify gaps and file a proper spec into each of the
 OpenStack projects.

Indeed, what we're trying to do is help the non-developer members of the group 
articulate their use cases and tease them out to a level that is meaningful to 
someone who is not immersed in telecommunications themselves. In this way we 
hope to in turn be able to create meaningful specifications for the actual 
OpenStack projects impacted.

It's possible that some of these will be truly cross-project and therefore head 
to openstack-specs but initial indications seem to be that most will either be 
specific to a project, or cross only a couple of projects (e.g. nova and 
neutron) - I am sure someone will come up with some more exceptions to this 
statement to prove me wrong :).

Thanks,

Steve



Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-06 Thread Zane Bitter

On 03/02/15 14:12, Clint Byrum wrote:

The visible change in making things parallel was minimal. In talking
about convergence, it's become clear that users can and should expect
something radically different when they issue stack updates. I'd love to
say that it can be done to just bind convergence into the old ways, but
doing so would also remove the benefit of having it.

Also allowing resume wasn't a new behavior, it was fixing a bug really
(that state was lost on failed operations). Convergence is a pretty
different beast from the current model,


That's not actually the case for Phase 1; really nothing much should 
change from the user point of view, except that if you issue an update 
before a previous one is finished then you won't get an error back any more.



In any event, I think Angus's comment on the review is correct; we 
actually have two different problems here. One is how to land the code, 
and a config option is indisputably the right choice there: until many, 
many blueprints have landed, the convergence code path will do 
literally nothing at all. There is no conceivable advantage for users in 
opting in to that.


The second question, which we can continue to discuss, is whether to 
allow individual users to opt in/out once operators have enabled the 
convergence code path. I'm not convinced that there is anything 
particularly special about this feature that warrants such a choice more 
than any other feature that we have developed in the past. However, I 
don't think we need to decide until around the time that we're preparing 
to flip the default on. By that time we should have better information 
about the level of stability we're dealing with, and we can get input 
from operators on what kind of additional steps we should take to 
maintain stability in the face of possible regressions.


cheers,
Zane.



Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Donald Stufft

 On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
 As part of oslo.messaging initiative to split up requirements into
 certain list of per messaging driver dependencies
 [...]
 
 I'm curious what the end goal is here... when someone does `pip
 install oslo.messaging` what do you/they expect to get installed?
 Your run-parts style requirements.d plan is sort of
 counter-intuitive to me in that I would expect it to contain
 number-prefixed sublists of requirements which should be processed
 collectively in an alphanumeric sort order, but I get the impression
 this is not the goal of the mechanism (I'll be somewhat relieved if
 you tell me I'm mistaken in that regard).
 
 Taking into account suggestion from Monty Taylor i’m bringing this
 discussion to much wider audience. And the question is: aren’t we
 doing something complex or are there any less complex ways to
 accomplish the initial idea of splitting requirements?
 
 As for taking this to a wider audience we (OpenStack) are already
 venturing into special snowflake territory with PBR, however
 requirements.txt is a convention used at least somewhat outside of
 OpenStack-related Python projects. It might make sense to get input
 from the broader Python packaging community on something like this
 before we end up alienating ourselves from them entirely.

I’m not sure what exactly is trying to be achieved here, but I still assert
that requirements.txt is the wrong place for pbr to be looking and it should
instead look for dependencies specified inside of a setup.cfg.

More on topic, I'm not sure what inner dependencies are, but if what you're
looking for is optional dependencies that are only needed in specific
situations, then you probably want extras, defined like:

setup(
    extras_require={
        "somename": [
            "dep1",
            "dep2",
        ],
    },
)

Then if you do ``pip install myproject[somename]`` it'll include dep1 and dep2
in the list of dependencies, you can also depend on this in other projects
like:

setup(
    install_requires=["myproject[somename]>=1.0"],
)

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




[openstack-dev] [Fuel] CLI api for working with granular deployment model

2015-02-06 Thread Dmitriy Shulyak
Hello folks,

Not long ago we added necessary commands in fuel client to work with
granular deployment configuration and API.

So, you may know that the configuration is stored in fuel-library and
uploaded into the database during bootstrap of the fuel master. If you want
to change/add some tasks right on the master node, just add tasks.yaml and
the appropriate manifests in the folder for the release that you are
interested in. Then apply this command:

 fuel rel --sync-deployment-tasks --dir /etc/puppet

Also you may want to overwrite deployment tasks for any specific
release/cluster with the following commands:

 fuel rel --rel <id> --deployment-tasks --download
 fuel rel --rel <id> --deployment-tasks --upload

 fuel env --env <id> --deployment-tasks --download
 fuel env --env <id> --deployment-tasks --upload

After this is done - you will be able to run customized graph of tasks:

The most basic command:

 fuel node --node 1,2,3 --tasks upload_repos netconfig

The developer will need to specify the nodes that should be used in
deployment and the task ids. The order in which they are provided doesn't
matter; it will be computed from the dependencies specified in the database.
It is also very important to understand that if a task is mapped to the role
"controller", but the node where you want to apply that task doesn't have
this role - it won't be executed.

Skipping of tasks

 fuel node --node 1,2,3 --skip netconfig hiera

The list of tasks provided with this parameter will be skipped during
graph traversal in nailgun.
The main question is - should we also skip other tasks that have the
provided tasks as dependencies?
In my opinion we can leave this flag as simple as it is, and use the
following commands for smarter traversal.

Specify start and end nodes in graph:

 fuel node --node 1,2,3 --end netconfig

Will deploy everything up to the netconfig task, including netconfig. This
covers all tasks that we consider pre_deployment (keys generation, rsync
manifests, sync time, upload repos), and such tasks as hiera setup, globals
computation and maybe some other basic preparatory tasks.

 fuel node --node 1,2,3 --start netconfig

Start from netconfig, including netconfig, and deploy all other tasks,
including the tasks that we consider post_deployment.
For example if one wants to execute only netconfig's successors:

 fuel node --node 1,2,3 --start netconfig --skip netconfig

And user will be able to use start and end at the same time:

 fuel node --node 1,2,3 --start netconfig --end upload_cirros

Nailgun will build a path that includes only the necessary tasks to join
these two points. However, the start flag is not merged yet, but I think it
will be by Monday.
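
Conceptually the path is just the intersection of the start task's
successors and the end task's predecessors; with networkx it could be
computed like this (a sketch, assuming task ids as graph nodes):

import networkx as nx

def tasks_between(graph, start, end):
    # nodes reachable from start that can also reach end, plus both ends
    wanted = ((nx.descendants(graph, start) | {start}) &
              (nx.ancestors(graph, end) | {end}))
    return list(nx.topological_sort(graph.subgraph(wanted)))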

Also we are working on deployment graph visualization. It will be static (I
mean there is no progress tracking of any kind), but it will help a lot to
understand what is going to be executed.

Thank you for reading. I would like to hear more thoughts about this, and
will answer any questions


[openstack-dev] What's Up Doc? Feb 6 2015

2015-02-06 Thread Anne Gentle
News from docland, here we go.

Take a minute to click over to http://docs.openstack.org to see the new
landing page! Refresh to see even more characters added to the docs. :)

The openstackdocstheme, a Sphinx theme that replicates the www.openstack.org
header for docs content pages in RST, is released and ready to go as soon
as it lands in the global requirements [1]. We have Ying Chun Guo and
the i18n team engaged on the translation toolchain, thank you Daisy! I
would like some help on javascript bugs: http://is.gd/2R2XY4 if you are so
inclined.

I would also like help migrating the End User Guide and Admin User Guide to
RST. Please sign up on the wiki at
https://wiki.openstack.org/wiki/Documentation/Migrate. I'm documenting my
findings as I go, and sometimes we will have to revise the text so the
markup works. Case in point, I couldn't get numbered list continuation to
work with [2]. I also cannot get embedded .. note:: directives to work
between numbered list items with [3]. I also found that you can't have line
breaks in something with inline semantic markup like :guilabel:. If you are
an RST/Sphinx wizard, please take a look!

We have a long list of bugs for the Debian Install Guide, resulting in not
being able to launch an instance. If you're interested, please take a look
at the list at [4] and triage. I'm going to dive deeper into the root
problems in the relevant thread for more discussion so stay tuned to [5].

APAC meeting every other Wednesday is looking to move to an earlier time
slot. Reply on the thread if you have an opinion on the wee morning hours
of Universal Time. Thank you Lana for reviving those!

We merged 72 patches across all the docs repos this past week, nice work.
That fixed over 20 bugs with backports also improving previously-released
docs.

The HA Guide team met this week and they're going to start editing the
Install Guides to point to the HA Guide to add higher availability to
installations. Read their notes at [6].

Thanks all for the hard work this week, let's keep it up as the kilo-2
milestone just passed yesterday (2/5).
Anne


1. https://review.openstack.org/#/c/153237/
2. https://review.openstack.org/#/c/152582/
3. https://review.openstack.org/#/c/153577
4. http://tinyurl.com/ooccchh
5.
http://lists.openstack.org/pipermail/openstack-docs/2015-February/005791.html
6.
http://lists.openstack.org/pipermail/openstack-docs/2015-February/005817.html



-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] [OpenStack-Infra] Devstack error when running g-reg: 401 Unauthorized

2015-02-06 Thread Bob Ball
This is likely https://launchpad.net/bugs/1415795 which is fixed by 
https://review.openstack.org/#/c/151506/

Make sure you have the above change in your devstack and it should work again.

Bob

From: liuxinguo [mailto:liuxin...@huawei.com]
Sent: 06 February 2015 03:08
To: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Cc: Zhangli (ISSP); Fanyaohong; Chenzongliang
Subject: [openstack-dev] [OpenStack-Infra] Devstack error when running g-reg: 
401 Unauthorized

Our CI gets the following error when building devstack, beginning from the 
service ‘g-reg’ when uploading the image:

is_service_enabled g-reg
2015-02-05 03:14:54.966 | + return 0
2015-02-05 03:14:54.968 | ++ keystone token-get
2015-02-05 03:14:54.968 | ++ grep ' id '
2015-02-05 03:14:54.969 | ++ get_field 2
2015-02-05 03:14:54.970 | ++ local data field
2015-02-05 03:14:54.970 | ++ read data
2015-02-05 03:14:55.797 | ++ '[' 2 -lt 0 ']'
2015-02-05 03:14:55.798 | ++ field='$3'
2015-02-05 03:14:55.799 | ++ echo '| id| 
9660a765e04d4d0a8bc3f0f44b305161 |'
2015-02-05 03:14:55.800 | ++ awk '-F[ \t]*\\|[ \t]*' '{print $3}'
2015-02-05 03:14:55.802 | ++ read data
2015-02-05 03:14:55.804 | + TOKEN=9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.804 | + die_if_not_set 1137 TOKEN 'Keystone fail to get 
token'
2015-02-05 03:14:55.804 | + local exitcode=0
2015-02-05 03:14:55.810 | + echo_summary 'Uploading images'
2015-02-05 03:14:55.810 | + [[ -t 3 ]]
2015-02-05 03:14:55.810 | + [[ True != \T\r\u\e ]]
2015-02-05 03:14:55.810 | + echo -e Uploading images
2015-02-05 03:14:55.810 | + [[ -n '' ]]
2015-02-05 03:14:55.810 | + for image_url in '${IMAGE_URLS//,/ }'
2015-02-05 03:14:55.811 | + upload_image 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz 
9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.811 | + local 
image_url=http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.811 | + local token=9660a765e04d4d0a8bc3f0f44b305161
2015-02-05 03:14:55.811 | + local image image_fname image_name
2015-02-05 03:14:55.811 | + mkdir -p /opt/stack/new/devstack/files/images
2015-02-05 03:14:55.813 | ++ basename 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.815 | + image_fname=cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.815 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz != file* 
]]
2015-02-05 03:14:55.815 | + [[ ! -f 
/opt/stack/new/devstack/files/cirros-0.3.2-x86_64-uec.tar.gz ]]
2015-02-05 03:14:55.816 | ++ stat -c %s 
/opt/stack/new/devstack/files/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.818 | + [[ 8655821 = \0 ]]
2015-02-05 03:14:55.818 | + 
image=/opt/stack/new/devstack/files/cirros-0.3.2-x86_64-uec.tar.gz
2015-02-05 03:14:55.819 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz =~ openvz 
]]
2015-02-05 03:14:55.819 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz =~ \.vmdk 
]]
2015-02-05 03:14:55.819 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz =~ 
\.vhd\.tgz ]]
2015-02-05 03:14:55.819 | + [[ 
http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz =~ 
\.xen-raw\.tgz ]]
2015-02-05 03:14:55.819 | + local kernel=
2015-02-05 03:14:55.819 | + local ramdisk=
2015-02-05 03:14:55.819 | + local disk_format=
2015-02-05 03:14:55.819 | + local container_format=
2015-02-05 03:14:55.819 | + local unpack=
2015-02-05 03:14:55.819 | + local img_property=
2015-02-05 03:14:55.819 | + case $image_fname in
2015-02-05 03:14:55.819 | + '[' cirros-0.3.2-x86_64-uec '!=' 
cirros-0.3.2-x86_64-uec.tar.gz ']'
2015-02-05 03:14:55.819 | + image_name=cirros-0.3.2-x86_64-uec
2015-02-05 03:14:55.819 | + local 
xdir=/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec
2015-02-05 03:14:55.819 | + rm -Rf 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec
2015-02-05 03:14:55.912 | + mkdir 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec
2015-02-05 03:14:55.913 | + tar -zxf 
/opt/stack/new/devstack/files/cirros-0.3.2-x86_64-uec.tar.gz -C 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec
2015-02-05 03:14:56.619 | ++ for f in '$xdir/*-vmlinuz*' '$xdir/aki-*/image'
2015-02-05 03:14:56.619 | ++ '[' -f 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-vmlinuz
 ']'
2015-02-05 03:14:56.619 | ++ echo 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-vmlinuz
2015-02-05 03:14:56.619 | ++ break
2015-02-05 03:14:56.620 | ++ true
2015-02-05 03:14:56.620 | + 
kernel=/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-vmlinuz
2015-02-05 03:14:56.621 | ++ for f in '$xdir/*-initrd*' '$xdir/ari-*/image'
2015-02-05 03:14:56.622 | ++ '[' -f 
/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-initrd
 ']'
2015-02-05 03:14:56.622 | ++ echo 

[openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Solly Ross
Hi,

I would like to request a feature freeze exception for the
Websockify security proxy framework blueprint [1].

The blueprint introduces a framework for defining security drivers for the
connections between the websocket proxy and the hypervisor, and provides
a TLS driver for VNC connections using the VeNCrypt RFB auth method.

The two patches [2] have sat in place with one +2 (Dan Berrange) and multiple 
+1s
for a while now (the first does not currently show any votes because of a merge
conflict that I had to deal with recently).

Best Regards,
Solly Ross

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/websocket-proxy-to-host-security.html
[2] https://review.openstack.org/#/c/115483/ and 
https://review.openstack.org/#/c/115484/



Re: [openstack-dev] [nova] stuck patches at the nova IRC meeting

2015-02-06 Thread Matt Riedemann



On 2/6/2015 7:20 AM, Sean Dague wrote:

Ok, my bad. When I proposed this part of the Nova meeting I was also
thinking about lost patches where a couple of weeks had gone by
without any negative feedback and the patch author got a chance to
advocate for it. That's how we used it in Tempest meetings.

The theory being that engaging in more communication might help with
having patches be a little closer to what's needed for merge.

On 02/05/2015 07:46 PM, Michael Still wrote:

Certainly it was my intent when I created that agenda item to cover
reviews that wouldn't otherwise reach a decision -- either two cores
wedged, or something else that we can't resolve trivially in gerrit.

Now, I can see that people don't like reviews sitting for a long time,
but that's probably too long a list to cover in an IRC meeting. I'm
not opposed to trying, but we should set expectations that we're going
to talk about only a few important reviews, not the dozens that are
unloved.

Michael

On Fri, Feb 6, 2015 at 9:27 AM, Tony Breeds t...@bakeyournoodle.com wrote:

On Thu, Feb 05, 2015 at 11:13:50PM +0100, Sylvain Bauza wrote:


I was always considering stuck reviews as reviews where 2 or more cores were
disagreeing between themselves, so that it needed a debate/discussion
during the meeting.


I was under the same impression.

Stuck reviews were for reviews where there was strong disagreement (amongst
cores).
Other reviews can be discussed as part of Open discussion.

Yours Tony.


We also have this [1].  That shows unloved reviews which have been open 
for a long time: by latest revision (72 days currently), by oldest 
revision without a negative score (76 days currently), and by oldest 
review since first revision (247 days currently).


I don't know if just slapping that link into the nova meeting agenda 
would help at all, but maybe we could take the top 3 oldest changes out 
of there and post those for each meeting agenda to get people to focus 
on them?


[1] http://russellbryant.net/openstack-stats/nova-openreviews.html

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Trove] Core reviewer update

2015-02-06 Thread Nikhil Manchanda
Thanks all for the show of support!
Victoria, Peter, and Edmond -- welcome to core.
Thanks Michael, and Tim for all the hard work on Trove.

Cheers,
Nikhil

On Thu, Feb 5, 2015 at 2:38 PM, McReynolds, Auston amcreyno...@ebay.com
wrote:

  +1

  welcome aboard peter + victoria + edmond!

   From: Nikhil Manchanda slick...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, February 5, 2015 at 8:26 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Trove] Core reviewer update

   Hello Trove folks:

 Keeping in line with other OpenStack projects, and attempting to keep
 the momentum of reviews in Trove going, we need to keep our core-team up
 to date -- folks who are regularly doing good reviews on the code should
 be brought in to core and folks whose involvement is dropping off should
 be considered for removal since they lose context over time, not being
 as involved.

 For this update I'm proposing the following changes:
 - Adding Peter Stachowski (peterstac) to trove-core
 - Adding Victoria Martinez De La Cruz (vkmc) to trove-core
 - Adding Edmond Kotowski (edmondk) to trove-core
 - Removing Michael Basnight (hub_cap) from trove-core
 - Removing Tim Simpson (grapex) from trove-core

 For context on Trove reviews and who has been active, please see
 Russell's stats for Trove at:
 - http://russellbryant.net/openstack-stats/trove-reviewers-30.txt
 - http://russellbryant.net/openstack-stats/trove-reviewers-90.txt

 Trove-core members -- please reply with your vote on each of these
 proposed changes to the core team. Peter, Victoria and Eddie -- please
 let me know of your willingness to be in trove-core. Michael, and Tim --
 if you are planning on being substantially active on Trove in the near
 term, also please do let me know.

 Thanks,
 Nikhil



Re: [openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Matt Riedemann



On 2/6/2015 7:28 AM, Silvan Kaiser wrote:

Hello!

I am requesting a feature freeze exception for Kilo-2 milestone
regarding https://review.openstack.org/#/c/110722/ .

This change adds support for using the Quobyte Storage system for
provisioning images in Nova. It works in conjunction with the Quobyte
driver in Cinder (which was merged at Kilo-1).
Refraining from merging would mean a delay until the L release, all the while
having a largely useless driver in Cinder.

Jay Pipes, Matt Riedemann and Daniel Berrange kindly declared
sponsorship for this FFE.


Daniel didn't actually say he'd sponsor this, I said in the review that 
I *thought* he might be a possible third sponsor if it came to that. :)


I did say I'd sponsor this though. It's close but had enough comments 
from me that I thought it warranted a -1 in its current form.


I realize this isn't a priority blueprint though and it's up to the 
nova-drivers team to decide on it, but FWIW it's self-contained for the 
most part and has been sitting around for a long time, and I feel that 
lack of reviews shouldn't punish it in that regard (hopefully this 
doesn't open up a ton of other "my thing has been around forever without 
reviews too so give me an exception also" kind of precedent, not 
my intention).




Please feel free to contact me regarding further FFE procedure or if
there are any more questions (sil...@quobyte.com
mailto:sil...@quobyte.com, kaisers/casusbelli in irc).

Best regards
Silvan Kaiser



--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com http://www.quobyte.com/
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender





--

Thanks,

Matt Riedemann




Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-06 Thread David Kranz

On 02/06/2015 07:49 AM, Sean Dague wrote:

On 02/06/2015 07:39 AM, Alexandre Levine wrote:

Rushi,

We're adding new tempest tests into our stackforge-api/ec2-api. The
review will appear in a couple of days. These tests will be good for
running against both nova/ec2-api and stackforge/ec2-api. As soon as
they are there, you'll be more than welcome to add even more.

Best regards,
   Alex Levine


Honestly, I'm more pro having the ec2 tests in a tree that isn't
Tempest. Most Tempest reviewers aren't familiar with the ec2 API, their
focus has been OpenStack APIs.

Having a place where there is a review team that is dedicated only to
the EC2 API seems much better.

-Sean


+1

 And once similar coverage to the current tempest ec2 tests is 
achieved, either by copying from tempest or creating anew, we should 
remove the ec2 tests from tempest.


 -David




Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Doug Hellmann


On Fri, Feb 6, 2015, at 09:56 AM, Denis Makogon wrote:
 On Fri, Feb 6, 2015 at 4:16 PM, Donald Stufft don...@stufft.io wrote:
 
 
   On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:
  
   On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
   As part of oslo.messaging initiative to split up requirements into
   certain list of per messaging driver dependencies
   [...]
  
   I'm curious what the end goal is here... when someone does `pip
   install oslo.messaging` what do you/they expect to get installed?
   Your run-parts style requirements.d plan is sort of
   counter-intuitive to me in that I would expect it to contain
   number-prefixed sublists of requirements which should be processed
   collectively in an alphanumeric sort order, but I get the impression
   this is not the goal of the mechanism (I'll be somewhat relieved if
   you tell me I'm mistaken in that regard).
  
   Taking into account suggestion from Monty Taylor i’m bringing this
   discussion to much wider audience. And the question is: aren’t we
   doing something complex or are there any less complex ways to
   accomplish the initial idea of splitting requirements?
  
   As for taking this to a wider audience we (OpenStack) are already
   venturing into special snowflake territory with PBR, however
   requirements.txt is a convention used at least somewhat outside of
   OpenStack-related Python projects. It might make sense to get input
   from the broader Python packaging community on something like this
   before we end up alienating ourselves from them entirely.
 
  I’m not sure what exactly is trying to be achieved here, but I still assert
  that requirements.txt is the wrong place for pbr to be looking and it
  should
  instead look for dependencies specified inside of a setup.cfg.
 
  Sorry, I have to explain what I meant by saying 'inner dependency'. Let me
 be more clear at this step to avoid misunderstanding in terminology.
 An inner dependency is a redirection from requirements.txt to another file
 that contains additional dependencies (-r another_deps.txt).
 
  More on topic, I'm not sure what inner dependencies are, but if what
  you're
  looking for is optional dependencies that only are needed in specific
  situation
  then you probably want extras, defined like:
 
  setup(
      extras_require={
          "somename": [
              "dep1",
              "dep2",
          ],
      },
  )
 
 
 That might be the case, but since we want to split up requirements into
 per-driver dependencies, it would require checking whether setup.cfg can
 handle the use of inner dependencies. For example:
 
 setup(
     extras_require={
         "somename": [
             "-r another_file_with_deps.txt",
         ],
     },
 )

Let's see if we can make pbr add the extras_require values. We can then
either specify the requirements explicitly in setup.cfg, or use a naming
convention for separate requirements files. Either way, we shouldn't
need setuptools to understand that we are managing the list of
requirements in files.
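
As a sketch of the naming-convention variant (hypothetical file names,
plain setuptools shown for brevity where pbr would do the work):

import os
from setuptools import setup

def read_requirements(path):
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith(('#', '-r'))]

setup(
    name='oslo.messaging',
    install_requires=read_requirements('requirements.txt'),
    extras_require={
        # e.g. requirements-amqp1.txt becomes the extra named 'amqp1'
        fname[len('requirements-'):-len('.txt')]: read_requirements(fname)
        for fname in os.listdir('.')
        if fname.startswith('requirements-') and fname.endswith('.txt')
    },
)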

 
 
  Then if you do ``pip install myproject[somename]`` it'll include dep1 and
  dep2
  in the list of dependencies, you can also depend on this in other projects
  like:
 
  setup(
      install_requires=["myproject[somename]>=1.0"],
  )
 
 
 That's what I've been looking for. So, for future installations it'll be
 very useful if the cloud deployer knows which AMQP service will be used;
 then he'd be able to install only the flavor of oslo.messaging that he
 wants, i.e.
 
 project/requirements.txt:
 ...
 oslo.messaging[amqp1]>=${version}
 
 ...
 
 Really great input, thanks Donald. Appreciate it.
 
 ---
  Donald Stufft
  PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
 
 
 
 
 Kind regards,
 Denis M.



Re: [openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Daniel P. Berrange
On Fri, Feb 06, 2015 at 09:55:42AM -0600, Matt Riedemann wrote:
 
 
 On 2/6/2015 7:28 AM, Silvan Kaiser wrote:
 Hello!
 
 I am requesting a feature freeze exception for Kilo-2 milestone
 regarding https://review.openstack.org/#/c/110722/ .
 
 This change adds support for using the Quobyte Storage system for
 provisioning images in Nova. It works in conjunction with the Quobyte
 driver in Cinder (which was merged at Kilo-1).
 Refraining from merging would mean delay until L release, all the while
 having a largely useless Driver in Cinder.
 
 Jay Pipes, Matt Riedemann and Daniel Berrange kindly declared
 sponsorship for this FFE.
 
 Daniel didn't actually say he'd sponsor this, I said in the review that I
 *thought* he might be a possible third sponsor if it came to that. :)

Actually Silvan asked me in a private email just before this one and I
agreed.

 I realize this isn't a priority blueprint though and it's up to the
 nova-drivers team to decide on it, but FWIW it's self-contained for the most
 part and has been sitting around for a long time and I feel that lack of
 reviews shouldn't punish it in that regard (hopefully this doesn't open up a
 ton of other my thing has been around forever without reviews too so give
 me an exception also kind of precedent thing, not my intention).

Personally I'm fine with it as it is self-contained, and it would have merged
already if you hadn't added some small -1s at the last minute ;-P

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-06 Thread Sean Dague
On 02/06/2015 07:39 AM, Alexandre Levine wrote:
 Rushi,
 
 We're adding new tempest tests into our stackforge-api/ec2-api. The
 review will appear in a couple of days. These tests will be good for
 running against both nova/ec2-api and stackforge/ec2-api. As soon as
 they are there, you'll be more than welcome to add even more.
 
 Best regards,
   Alex Levine
 

Honestly, I'm more pro having the ec2 tests in a tree that isn't
Tempest. Most Tempest reviewers aren't familiar with the ec2 API, their
focus has been OpenStack APIs.

Having a place where there is a review team that is dedicated only to
the EC2 API seems much better.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [nova] kilo progress and feature freeze process

2015-02-06 Thread John Garbutt
Hi,

So we have now released kilo-2 and passed the non-priority Feature
Freeze for kilo.

Please note 5th March is the General FeatureProposalFreeze:
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

For kilo we agreed to focus on bug fixes, and the other agreed
priority 'slots'. The plan is we only merge priority items during
kilo-3. For specific change sets that need reviewing, please see:
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

For those waiting on Blueprint Freeze Exceptions, you should have now
got your answer via a review comment in the nova-specs tree. Mostly, the
answer was "please resubmit for the L release". The one exception is the
one remaining priority-related nova-spec:
https://review.openstack.org/#/c/138444/


Nova Feature Freeze Process for kilo

At the nova meeting we agreed the following Feature Freeze process:

1) Request an exception on the mailing list, with a subject containing
the following:

[nova] Feature Freeze Exception Request

In that email please describe why you think this should be merged during kilo-3.

2) The cut off date will be 23.59 UTC on Thursday 12th Feb 2015

After this time, the nova-drivers will meet (time is TBC) and decide
if any of the requests warrant a Feature Freeze Exception. There is a
general aim for zero exceptions, unless it's really... exceptional.

The deadline for the nova-drivers to make a decision will be the
nova meeting on Thursday 19th Feb 2015.

3) Once the nova-drivers have decided, there will be details provided
on how to move forward with getting any code merged, or resubmitted
for the L release. (This is the point where we would need to worry
about nova-core sponsors, etc.)


Hopefully I have translated the IRC discussion correctly, and in a way
that makes sense. Please do ask if there are any questions.

We do hope these process changes in kilo will: focus our review
efforts so we actually get more things merged, save developers the
rebase pain of an endless wait for a review, and respond to our users'
request for stability, scalability and upgradability over features.

Many thanks,
johnthetubaguy



Re: [openstack-dev] [neutron] looking for status of an old wiki page work item

2015-02-06 Thread Ryan Moats

Yes, that looks much more detailed, thank you!

Ryan (having finished a minor wiki page edit)

Salvatore Orlando sorla...@nicira.com wrote on 02/05/2015 06:40:44 PM:

 From: Salvatore Orlando sorla...@nicira.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 02/05/2015 06:42 PM
 Subject: Re: [openstack-dev] [neutron] looking for status of an old
 wiki page work item

 I reckon the blueprint [1] supersedes the one in the old wiki page you
found.
 It lies in abandoned status as it did not make the release plan for
 Kilo. I am sure the author and the other contributors working on it
 will resume it for the next release.

 Salvatore

 [1] https://review.openstack.org/#/c/132661/

 On 5 February 2015 at 22:52, Ryan Moats rmo...@us.ibm.com wrote:
 I've run into a set of use cases where it would really be useful to
 be able to restrict which external networks a particular tenant can
 access, along the lines of what the wiki page [1] talks about.

 When I checked for neutron blueprints, the only thing I found was
 [2] and that isn't really close.

 So, I'm wondering if there is a blueprint that I missed when I went
 searching, or if there are folks that would be interested in seeing
 something along the lines of [1] getting implemented...

 Thanks in advance,
 Ryan Moats

 [1] https://wiki.openstack.org/wiki/Neutron/sharing-model-for-
 external-networks
 [2] https://blueprints.launchpad.net/neutron/+spec/external-shared-
 net-ext-for-nuage-plugin





Re: [openstack-dev] [Openstack-operators] [Telco][NFV][infra] Review process of TelcoWG use cases

2015-02-06 Thread Veiga, Anthony

 On Feb 6, 2015, at 8:17 , Jeremy Stanley fu...@yuggoth.org wrote:
 
 On 2015-02-06 12:11:40 +0100 (+0100), Marc Koderer wrote:
 [...]
 Therefore I uploaded one of them (Session Border Controller) to
 the Gerrit system into the sandbox repo:
 
https://review.openstack.org/#/c/152940/1
 [...]
 
 This looks a lot like the beginnings of a specification which has
 implications for multiple OpenStack projects. Would proposing a
 cross-project spec in the openstack/openstack-specs repository be an
 appropriate alternative?

It does look like that.  However, the intent here is to allow non-developer 
members of a Telco to provide the use cases they need to accomplish. This way 
the Telco WG can identify gaps and file a proper spec into each of the 
OpenStack projects.

 
 - we create a project under Stackforge called telcowg-usecases
  - we link blueprints related to this use case
 - we build a core team and approve/prioritize them
 
 I suppose this somewhat parallels how the API Working Group has
 decided to operate, so perhaps you just need a dedicated repository
 for Telco Working Group documents in general... some of which would
 percolate to cross-project specs (or maybe just related per-project
 specs) once sufficiently refined for a broader audience?

We’re considering a proper repo.  As Marc said, this is our first attempt at 
seeing if this can actually work.

 -- 
 Jeremy Stanley
 

-Anthony


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Doug Hellmann


On Fri, Feb 6, 2015, at 07:37 AM, Denis Makogon wrote:
 Hello to All.
 
 
 As part of the oslo.messaging initiative to split its requirements into
 per-driver dependency lists (https://review.openstack.org/#/c/83150/),
 we found that we needed a way to use pip inner dependencies, and we were
 able to do that. Here is a short summary of our solution and how it works:
 
 
 
- This is how regular requirements.txt looks:
 
 dep1
 
 …
 
 dep n
 
 
 - This is how a requirements.txt with inner dependencies looks:
 
 dep1
 
 -r somefolder/another-requirements.txt
 
 -r completelyanotherfolder/another-requirements.txt
 
 …
 
 dep n
 
 That’s what we’ve did for oslo.messaging. But we’ve faced with problem
 that
 was defined as openstack-infra/project-config
 
 tool issue, this tool called project-requirements-change
 https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/project-requirements-change.py
 .As you can see it’s not able to handle inner dependencies in any
 
 of requirements.txt files, as you can see this tool expects to parse only
 explicit set of requirements (see regular requirements.txt definition
 above).
 
 So I decided to fix that tool so that it can follow inner dependencies;
 https://review.openstack.org/#/c/153227/ is what I have so far.

 Taking into account a suggestion from Monty Taylor, I'm bringing this
 discussion to a much wider audience.

 And the question is: are we overcomplicating this, or is there a less
 complex way to accomplish the initial idea of splitting the requirements?

After re-reading this message, and discussing it with a few folks in the
infra channel on IRC, I'm a little concerned that we don't have enough
background to fully understand the problem and proposed solution. bnemec
pointed out that the discussion happened before we had the spec process,
but now that we do have that process I think the best next step is to
have a spec written in oslo-specs describing the problem we're trying to
solve and the approaches that were discussed. This may really just be
summarizing the existing discussions, but let's get all of that
information into a single document before we go any further.

Doug

 
 
 Kind regards,
 
 Denis M.
 IRC: denis_makogon at Freenode


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Davanum Srinivas
Sounds good to me Doug. +1 for a spec since this will affect every project.

-- dims

On Fri, Feb 6, 2015 at 11:12 AM, Doug Hellmann d...@doughellmann.com wrote:


 On Fri, Feb 6, 2015, at 07:37 AM, Denis Makogon wrote:
 Hello to All.


  As part of the oslo.messaging initiative to split its requirements into
  per-driver dependency lists (https://review.openstack.org/#/c/83150/),
  we found that we needed a way to use pip inner dependencies, and we were
  able to do that. Here is a short summary of our solution and how it works:



- This is how regular requirements.txt looks:

 dep1

 …

 dep n


 - This is how a requirements.txt with inner dependencies looks:

 dep1

 -r somefolder/another-requirements.txt

 -r completelyanotherfolder/another-requirements.txt

 …

 dep n

 That’s what we’ve did for oslo.messaging. But we’ve faced with problem
 that
 was defined as openstack-infra/project-config

 tool issue, this tool called project-requirements-change
 https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/project-requirements-change.py
 .As you can see it’s not able to handle inner dependencies in any

 of requirements.txt files, as you can see this tool expects to parse only
 explicit set of requirements (see regular requirements.txt definition
 above).

  So I decided to fix that tool so that it can follow inner dependencies;
  https://review.openstack.org/#/c/153227/ is what I have so far.

  Taking into account a suggestion from Monty Taylor, I'm bringing this
  discussion to a much wider audience.

  And the question is: are we overcomplicating this, or is there a less
  complex way to accomplish the initial idea of splitting the requirements?

 After re-reading this message, and discussing it with a few folks in the
 infra channel on IRC, I'm a little concerned that we don't have enough
 background to fully understand the problem and proposed solution. bnemec
 pointed out that the discussion happened before we had the spec process,
 but now that we do have that process I think the best next step is to
 have a spec written in oslo-specs describing the problem we're trying to
 solve and the approaches that were discussed. This may really just be
 summarizing the existing discussions, but let's get all of that
 information into a single document before we go any further.

 Doug



 Kind regards,

 Denis M.
 IRC: denis_makogon at Freenode



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-06 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-02-06 06:25:57 -0800:
 On 03/02/15 14:12, Clint Byrum wrote:
  The visible change in making things parallel was minimal. In talking
  about convergence, it's become clear that users can and should expect
  something radically different when they issue stack updates. I'd love to
  say that it can be done to just bind convergence into the old ways, but
  doing so would also remove the benefit of having it.
 
  Also allowing resume wasn't a new behavior, it was fixing a bug really
  (that state was lost on failed operations). Convergence is a pretty
  different beast from the current model,
 
 That's not actually the case for Phase 1; really nothing much should 
 change from the user point of view, except that if you issue an update 
 before a previous one is finished then you won't get an error back any more.
 
 
 In any event, I think Angus's comment on the review is correct, we 
 actually have two different problems here. One is how to land the code, 
 and a config option is indisputably the right choice here: until many, 
 many blueprints have landed, the convergence code path will do 
 literally nothing at all. There is no conceivable advantage to users for 
 opting in to that.
 
 The second question, which we can continue to discuss, is whether to 
 allow individual users to opt in/out once operators have enabled the 
 convergence flow path. I'm not convinced that there is anything 
 particular special about this feature that warrants such a choice more 
 than any other feature that we have developed in the past. However, I 
 don't think we need to decide until around the time that we're preparing 
 to flip the default on. By that time we should have better information 
 about the level of stability we're dealing with, and we can get input 
 from operators on what kind of additional steps we should take to 
 maintain stability in the face of possible regressions.
 

All good points and it seems like a plan is forming that will help
operators deploy rapidly without forcing users to scramble too much.



[openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-06 Thread Nilesh P Bhosale
Hi All,

I see the following error while deleting a volume from a consistency 
group:
$ [admin]cinder delete vol1
Delete for volume vol1 failed: Bad Request (HTTP 400) (Request-ID: 
req-7c958443-edb2-434f-82a2-4254ab357e99)
ERROR: Unable to delete any of specified volumes.

When I tried to debug this, I found the following at: 
https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L310:
        if volume['consistencygroup_id'] is not None:
            msg = _("Volume cannot be deleted while in a consistency "
                    "group.")
            LOG.info(_LI('Unable to delete volume: %s, '
                         'volume is currently part of a '
                         'consistency group.'), volume['id'])
            raise exception.InvalidVolume(reason=msg)

I understand this is by design, but I am curious to understand the logic 
behind it. Why not allow deletion of volumes from the CG, at least when there 
are no dependent snapshots?
With the current implementation, the only way to delete the volume is to 
delete the complete CG, deleting all the volumes in it, which I feel is 
not right.

Am I missing anything? Please help understand.

Thanks,
Nilesh Bhosale



[openstack-dev] [nova] FFE request for instance tagging

2015-02-06 Thread Sergey Nikitin
Hello.

I'd like to ask for a feature freeze exception for the instance tags API
extension:

https://review.openstack.org/#/c/128940/

spec https://review.openstack.org/#/c/127281/

blueprint was approved, but its status was changed to Pending Approval
because of FF. https://blueprints.launchpad.net/nova/+spec/tag-instances

Three of the four patches are merged.
The last patch has got +2 from Jay Pipes.

This set of patches was pretty close to merging, but FF came.

In most popular REST API interfaces, objects in the domain model can be
tagged with zero or more simple strings. This feature will allow normal
users to add, remove, and list tags for an instance, and to filter instances
by tags.
Also, these changes will make it possible to tag other Nova objects in the
future, because the Tag object being created is generic and can be attached
to any Nova object with an id.
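
To make the shape of the feature concrete, here is a rough sketch of the
proposed calls (the paths are illustrative; see the patches above for the
actual API):

    PUT    /servers/{server_id}/tags/production    # add the tag "production"
    GET    /servers/{server_id}/tags               # list an instance's tags
    DELETE /servers/{server_id}/tags/production    # remove the tag
    GET    /servers?tags=production                # filter instances by tag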

Please consider this feature to be a part of the Kilo release.


Re: [openstack-dev] [nova] FFE request for instance tagging

2015-02-06 Thread Jay Pipes

On 02/06/2015 08:30 AM, Sergey Nikitin wrote:

Hello.

I'd like to ask for a feature freeze exception for the instance tags API
extension:

https://review.openstack.org/#/c/128940/

spec https://review.openstack.org/#/c/127281/

blueprint was approved, but its status was changed to Pending Approval
because of FF. https://blueprints.launchpad.net/nova/+spec/tag-instances

Three of the four patches are merged.
The last patch has got +2 from Jay Pipes.

This set of patches was pretty close to merging, but FF came.

In most popular REST API interfaces, objects in the domain model can be
tagged with zero or more simple strings. This feature will allow
normal users to add, remove, and list tags for an instance, and to filter
instances by tags.
Also, these changes will make it possible to tag other Nova objects in the
future, because the Tag object being created is generic and can be attached
to any Nova object with an id.

Please consider this feature to be a part Kilo release.


Yes, I will definitely sponsor this. 3 of 4 patches in the series have 
already been merged, the blueprint was approved in both Juno and Kilo, 
and Sergey has been diligent in pushing revisions based on reviews. The 
issue has been lack of eyeballs from core reviewers and followup.


-jay



Re: [openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Jay Pipes

On 02/06/2015 08:28 AM, Silvan Kaiser wrote:

Hello!

I am requesting a feature freeze exception for Kilo-2 milestone
regarding https://review.openstack.org/#/c/110722/ .

This change adds support for using the Quobyte Storage system for
provisioning images in Nova. It works in conjunction with the Quobyte
driver in Cinder (which was merged at Kilo-1).
Refraining from merging would mean a delay until the L release, all the while
leaving a largely useless driver in Cinder.

Jay Pipes, Matt Riedemann and Daniel Berrange kindly declared
sponsorship for this FFE.

Please feel free to contact me regarding further FFE procedure or if
there are any more questions (sil...@quobyte.com, kaisers/casusbelli in irc).


Ack'd. I will sponsor this. Patch is smallish, well-defined, and complete.

-jay



[openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
Hello to All.


As part of the oslo.messaging initiative to split its requirements into
per-driver dependency lists (https://review.openstack.org/#/c/83150/),
we found that we needed a way to use pip inner dependencies, and we were
able to do that. Here is a short summary of our solution and how it works:



   - This is how regular requirements.txt looks:

dep1

…

dep n


   - This is how a requirements.txt with inner dependencies looks:

dep1

-r somefolder/another-requirements.txt

-r completelyanotherfolder/another-requirements.txt

…

dep n

That’s what we’ve did for oslo.messaging. But we’ve faced with problem that
was defined as openstack-infra/project-config

tool issue, this tool called project-requirements-change
https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/project-requirements-change.py
.As you can see it’s not able to handle inner dependencies in any

of requirements.txt files, as you can see this tool expects to parse only
explicit set of requirements (see regular requirements.txt definition
above).

So I decided to fix that tool so that it can follow inner dependencies;
https://review.openstack.org/#/c/153227/ is what I have so far.

Taking into account a suggestion from Monty Taylor, I'm bringing this
discussion to a much wider audience.

And the question is: are we overcomplicating this, or is there a less
complex way to accomplish the initial idea of splitting the requirements?


Kind regards,

Denis M.
IRC: denis_makogon at Freenode


Re: [openstack-dev] [Openstack-operators] [Telco][NFV][infra] Review process of TelcoWG use cases

2015-02-06 Thread Jeremy Stanley
On 2015-02-06 12:11:40 +0100 (+0100), Marc Koderer wrote:
[...]
 Therefore I uploaded one of them (Session Border Controller) to
 the Gerrit system into the sandbox repo:
 
 https://review.openstack.org/#/c/152940/1
[...]

This looks a lot like the beginnings of a specification which has
implications for multiple OpenStack projects. Would proposing a
cross-project spec in the openstack/openstack-specs repository be an
appropriate alternative?

  - we create a project under Stackforge called telcowg-usecases
  - we link blueprints related to this use case
  - we build a core team and approve/prioritize them

I suppose this somewhat parallels how the API Working Group has
decided to operate, so perhaps you just need a dedicated repository
for Telco Working Group documents in general... some of which would
percolate to cross-project specs (or maybe just related per-project
specs) once sufficiently refined for a broader audience?
-- 
Jeremy Stanley



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-06 Thread Rushi Agrawal
There seems to be an agreement that people are fine with making the
in-tree Nova EC2 API more robust by adding proper Tempest tests to it,
regardless of the way forward (in-Nova-tree vs out-of-tree repo).

But there are also concerns that Tempest is not the right place for these
EC2 API tests. While I am not experienced enough with testing methodologies to
comment on what is good vs bad, I see a problem if we start blocking
new EC2 Tempest tests and ask for them to be moved out of Tempest first. This
will particularly hurt the EC2-code-in-Nova camp (which includes me), who have
seemingly been given a lifeline until the next summit to prove they care
about the in-tree EC2 code.

So I just wanted to know what the concerned people think about this
problem. One solution I can see is to allow tests to be added to Tempest for
now, and then make the switch post-summit. I am hoping that moving the tests
out of Tempest all at once wouldn't be a tough job (mostly tidying import
statements?).

Regards,
Rushi Agrawal
Cloud Engineer,
Reliance Jio Infocomm

On 5 February 2015 at 19:41, Sean Dague s...@dague.net wrote:

 On 02/05/2015 07:01 AM, Alexandre Levine wrote:
  Davanum,
 
  We've added the devstack support. It's in our stackforge repository.
  https://github.com/stackforge/ec2-api/tree/master/contrib/devstack
 
  Best regards,
Alex Levine

 I've converted it to a devstack external plugin structure in this review
 - https://review.openstack.org/#/c/153206/

 so that will make using this as simple as

 enable_plugin ec2-api https://github.com/stackforge/ec2-api

 Once that merges.

 -Sean

 --
 Sean Dague
 http://dague.net



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-06 Thread Alexandre Levine

Rushi,

We're adding new tempest tests into our stackforge-api/ec2-api. The 
review will appear in a couple of days. These tests will be good for 
running against both nova/ec2-api and stackforge/ec2-api. As soon as 
they are there, you'll be more than welcome to add even more.


Best regards,
  Alex Levine


On 2/6/15 3:20 PM, Rushi Agrawal wrote:
There seems to be an agreement that people are fine with making the 
in-tree Nova EC2 API more robust by adding proper Tempest tests to it, 
regardless of the way forward (in-Nova-tree vs out-of-tree repo).


But there are also concerns that Tempest is not the right place for 
these EC2 API tests. While I am not experienced enough with testing 
methodologies to comment on what is good vs bad, I see a problem 
if we start blocking new EC2 Tempest tests and ask for them to be moved 
out of Tempest first. This will particularly hurt the EC2-code-in-Nova 
camp (which includes me), who have seemingly been given a lifeline 
until the next summit to prove they care about the in-tree EC2 code.


So I just wanted to know what the concerned people think about this 
problem. One solution I can see is to allow tests to be added to Tempest 
for now, and then make the switch post-summit. I am hoping that moving the 
tests out of Tempest all at once wouldn't be a tough job (mostly tidying 
import statements?).


Regards,
Rushi Agrawal
Cloud Engineer,
Reliance Jio Infocomm

On 5 February 2015 at 19:41, Sean Dague s...@dague.net wrote:


On 02/05/2015 07:01 AM, Alexandre Levine wrote:
 Davanum,

 We've added the devstack support. It's in our stackforge repository.
 https://github.com/stackforge/ec2-api/tree/master/contrib/devstack

 Best regards,
   Alex Levine

I've converted it to a devstack external plugin structure in this
review
- https://review.openstack.org/#/c/153206/

so that will make using this as simple as

enable_plugin ec2-api https://github.com/stackforge/ec2-api

Once that merges.

-Sean

--
Sean Dague
http://dague.net








Re: [openstack-dev] [nova] stuck patches at the nova IRC meeting

2015-02-06 Thread Sean Dague
Ok, my bad. When I proposed this part of the Nova meeting I was also
thinking about lost patches, where a couple of weeks had gone by
without any negative feedback and the patch author got a chance to
advocate for it. That's how we used it in Tempest meetings.

The theory being that engaging in more communication might help with
having patches be a little closer to what's needed for merge.

On 02/05/2015 07:46 PM, Michael Still wrote:
 Certainly it was my intent when I created that agenda item to cover
 reviews that wouldn't otherwise reach a decision -- either two cores
 wedged, or something else that we can't resolve trivially in gerrit.
 
 Now, I can see that people don't like reviews sitting for a long time,
 but that's probably too long a list to cover in an IRC meeting. I'm
 not opposed to trying, but we should set expectations that we're going
 to talk about only a few important reviews, not the dozens that are
 unloved.
 
 Michael
 
 On Fri, Feb 6, 2015 at 9:27 AM, Tony Breeds t...@bakeyournoodle.com wrote:
 On Thu, Feb 05, 2015 at 11:13:50PM +0100, Sylvain Bauza wrote:

  I was always considering stuck reviews to be reviews where 2 or more cores
  were disagreeing between themselves, so that a debate was needed
  during the meeting.

 I was under the same impression.

  Stuck reviews were for reviews where there was strong disagreement (amongst
  cores).
  Other reviews can be discussed as part of Open discussion.

 Yours Tony.


 
 
 


-- 
Sean Dague
http://dague.net



[openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Silvan Kaiser
Hello!

I am requesting a feature freeze exception for Kilo-2 milestone regarding
https://review.openstack.org/#/c/110722/ .

This change adds support for using the Quobyte Storage system for
provisioning images in Nova. It works in conjunction with the Quobyte
driver in Cinder (which was merged at Kilo-1).
Refraining from merging would mean a delay until the L release, all the while
leaving a largely useless driver in Cinder.

Jay Pipes, Matt Riedemann and Daniel Berrange kindly declared sponsorship
for this FFE.

Please feel free to contact me regarding further FFE procedure or if there
are any more questions (sil...@quobyte.com, kaisers/casusbelli in irc).

Best regards
Silvan Kaiser

-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-06 Thread Peter Boros
Hi Angus and everyone,

I would like to reply to a couple of things:
- The behavior of overlapping transactions is dependent on the
transaction isolation level, even in the case of a single server,
for any database. This was pointed out by others earlier as well.

- The deadlock error from Galera can be confusing, but the point is
that the application can actually treat this as a deadlock (or apply
whatever retry logic it would apply to a failed transaction); I don't
know if it would be any less confusing from the developer's point of
view if it said brute force error instead.
Transactions can fail in a database; in the initial example the
transaction will fail with a duplicate key error. The result is pretty
much the same from the application's perspective: the transaction was
not successful (it failed as a block), and the application should handle
the failure. There can be many more reasons for a transaction to fail
regardless of the database engine. Some of these failures are
persistent (for example the disk is full underneath the database), and
some are intermittent in nature, like the case above. A good
retry mechanism can handle the intermittent failures,
depending on the application logic.
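
As a rough illustration (not from any particular project; the error-code
handling assumes a MySQL DBAPI driver where args[0] is the numeric server
error code), such a retry wrapper could look like this:

    import time

    from sqlalchemy.exc import OperationalError

    # Galera certification failures surface as MySQL error 1213
    # ("Deadlock found when trying to get lock"), so one retry loop
    # covers both real deadlocks and certification aborts.
    DEADLOCK = 1213

    def run_with_retries(session_factory, txn_body, attempts=5, backoff=0.1):
        """Run txn_body(session) in a transaction, retrying on deadlocks."""
        for attempt in range(attempts):
            session = session_factory()
            try:
                result = txn_body(session)
                session.commit()
                return result
            except OperationalError as exc:
                session.rollback()
                code = exc.orig.args[0] if getattr(exc.orig, 'args', None) else None
                if code != DEADLOCK or attempt == attempts - 1:
                    raise
                time.sleep(backoff * (attempt + 1))  # simple linear backoff
            finally:
                session.close()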

- Like many others said before me, consistent reads can be achieved
with wsrep_causal_reads set on in the session. I can shed some light
on how this works. Nodes in Galera participate in a group
communication, and a global order of the transactions is established as
part of this. Since the global order of a transaction is known, a
session with wsrep_causal_reads on will put a marker in the local
replication queue. Because transaction ordering is global, the session
will simply be blocked until all the other transactions before that
marker are processed in the replication queue. So, setting
wsrep_causal_reads imposes additional latency only for the given
select we are using it on (it literally just waits for the queue to be
processed up to the current transaction). Because of this, manual
checking of the global transaction ids is not necessary.
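
In practice that is just a session variable; a minimal example for
Galera/XtraDB Cluster 5.6 (table and column names are illustrative):

    -- opt this session in to causal reads; only its reads pay the latency
    SET SESSION wsrep_causal_reads = ON;
    -- this SELECT now blocks until the local node has applied everything
    -- committed anywhere in the cluster before the read was issued
    SELECT balance FROM accounts WHERE id = 42;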

- On synchronous replication: Galera only transmits the data
synchronously; it doesn't do synchronous apply. A transaction is sent
in parallel to the rest of the cluster nodes (to be accurate, it's
only sent to the nodes that are in the same group segment, but it
waits until all the group segments get the data). Once the other nodes
have received it, the transaction commits locally; the others will apply it
later. The cluster can do this because of certification, and because
certification is deterministic (the result of the certification will
be the same on all nodes; otherwise the nodes have a different state,
for example one of them was written locally). The replication uses
write sets, which are practically row-based MySQL binary log events plus
some metadata. The metadata is good for two things: you can take a
look at two write sets and tell whether they are conflicting, and you
can decide whether a write set is applicable to a database. Because this is
checked at certification time, the apply part can be parallel (because
of the certification, it's guaranteed that the transactions are not
conflicting). When it comes to consistency and replication speed,
there are no miracles, only tradeoffs to make. Two-phase commit is
relatively slow, and distributed locking is relatively slow; this is a lot
faster, but the application should handle transaction failures (which
it should probably handle anyway).

Here is the xtradb cluster documentation (Percona Server with galera):
http://www.percona.com/doc/percona-xtradb-cluster/5.6/#user-s-manual

Here is the multi-master replication part of the documentation:
http://www.percona.com/doc/percona-xtradb-cluster/5.6/features/multimaster-replication.html


On Fri, Feb 6, 2015 at 3:36 AM, Angus Lees g...@inodes.org wrote:
 On Fri Feb 06 2015 at 12:59:13 PM Gregory Haynes g...@greghaynes.net
 wrote:

  Excerpts from Joshua Harlow's message of 2015-02-06 01:26:25 +0000:
  Angus Lees wrote:
    On Fri Feb 06 2015 at 4:25:43 AM Clint Byrum cl...@fewbar.com wrote:
   I'd also like to see consideration given to systems that handle
   distributed consistency in a more active manner. etcd and
   Zookeeper are
   both such systems, and might serve as efficient guards for
   critical
   sections without raising latency.
  
  
   +1 for moving to such systems.  Then we can have a repeat of the above
   conversation without the added complications of SQL semantics ;)
  
 
  So just an fyi:
 
  http://docs.openstack.org/developer/tooz/ exists.
 
  Specifically:
 
 
  http://docs.openstack.org/developer/tooz/developers.html#tooz.coordination.CoordinationDriver.get_lock
 
  It has a locking api that it provides (that plugs into the various
  backends); there is also a WIP https://review.openstack.org/#/c/151463/
  driver being worked on for etcd.
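
  For the curious, a minimal sketch of that locking api (the backend URL
  and names are illustrative; any supported backend could be substituted):

      from tooz import coordination

      coordinator = coordination.get_coordinator('zake://', b'member-1')
      coordinator.start()
      lock = coordinator.get_lock(b'critical-section')
      with lock:
          pass  # work guarded across all members sharing this backend
      coordinator.stop()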
 

 An interesting note 

Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 4:00 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
  As part of oslo.messaging initiative to split up requirements into
  certain list of per messaging driver dependencies
 [...]

 I'm curious what the end goal is here... when someone does `pip
 install oslo.messaging` what do you/they expect to get installed?
 Your run-parts style requirements.d plan is sort of
 counter-intuitive to me in that I would expect it to contain
 number-prefixed sublists of requirements which should be processed
 collectively in an alphanumeric sort order, but I get the impression
 this is not the goal of the mechanism (I'll be somewhat relieved if
 you tell me I'm mistaken in that regard).


Yes, that's the main goal as I see it: to have the ability to install
oslo.messaging with the dependencies for a specific driver.
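
In other words, something along these lines (the exact syntax is still to
be determined; the extras form shown here comes up later in this thread):

    pip install oslo.messaging            # base library, common deps only
    pip install "oslo.messaging[amqp1]"   # plus the AMQP 1.0 driver deps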


  Taking into account suggestion from Monty Taylor i’m bringing this
  discussion to much wider audience. And the question is: aren’t we
  doing something complex or are there any less complex ways to
  accomplish the initial idea of splitting requirements?

 As for taking this to a wider audience we (OpenStack) are already
 venturing into special snowflake territory with PBR, however
 requirements.txt is a convention used at least somewhat outside of
 OpenStack-related Python projects. It might make sense to get input
 from the broader Python packaging community on something like this
 before we end up alienating ourselves from them entirely.


Sure, that's what I'm looking for.


 --
 Jeremy Stanley



Kind regards,
Denis M.


Re: [openstack-dev] Which repo should the API WG use?

2015-02-06 Thread Everett Toews
Top posting to wrap this up.

During the last API WG meeting [1] we discussed this topic. Of the 8 people who 
voted, it was unanimous and we agreed [2] to use the api-wg repo to write our 
guidelines.

This email thread wasn’t conclusive on the subject so we’ll be moving forward 
with the result of the vote at the meeting.

Unless there’s a strong objection or disagreement with my analysis of the 
above, the API WG will move forward and use the api-wg repo.

Thanks,
Everett

[1] 
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-05-00.00.html
[2] 
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-05-00.00.log.html#l-205


On Jan 31, 2015, at 10:36 AM, James E. Blair cor...@inaugust.com wrote:

Kevin L. Mitchell kevin.mitch...@rackspace.com writes:

On Fri, 2015-01-30 at 22:33 +, Everett Toews wrote:
It was suggested that the API WG use the openstack-specs [1] and/or
the api-wg [2] repo to publish its guidelines. We’ve already arrived
at the consensus that we should only use 1 repo [3]. So the purpose of
this thread is to decide...

Should the API WG use the openstack-specs repo or the api-wg repo?

Let’s discuss.

Well, the guidelines are just that: guidelines.  They don't implicitly
propose changes to any OpenStack projects, just provide guidance for
future API changes.  Thus, I think they should go in a repo separate
from any of our *-specs repos; to me, a spec provides documentation of a
change, and is thus independent of the guidelines.

Hi,

As a user of OpenStack I find the APIs inconsistent with each other.  My
understanding is that the API wg hopes to change this (thanks!).  As the
current reality is almost certainly not going to be completely in
alignment with the result of the wg, I think that necessarily there will
be a change in some software.

Consider the logging spec -- it says logs should look like this and use
these levels under these circumstances.  Many projects do not match
that at the moment, and will need changes.  I can imagine something
similar with the API wg.

Perhaps with APIs, things are a bit more complex and in addition to a
cross-project spec, we would need individual project specs to say in
order to get foo's API consistent with the guidelines, we will need to
make these changes and support these behaviors during a deprecation
period.  If that's the case, we can certainly put that level of detail
in an individual project spec repo while keeping the cross-project spec
focused on what things _should_ look like.

At any rate, I think it is important that eventually the result of the
API wg causes technical change to happen, and as such, I think the
openstack-specs repo seems like a good place.  I believe that
openstack-specs also provides a good place for reference documentation
like this (and logging guidelines, etc) to be published indefinitely for
current and new projects.

-Jim



Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 4:16 PM, Donald Stufft don...@stufft.io wrote:


  On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
  On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
  As part of oslo.messaging initiative to split up requirements into
  certain list of per messaging driver dependencies
  [...]
 
  I'm curious what the end goal is here... when someone does `pip
  install oslo.messaging` what do you/they expect to get installed?
  Your run-parts style requirements.d plan is sort of
  counter-intuitive to me in that I would expect it to contain
  number-prefixed sublists of requirements which should be processed
  collectively in an alphanumeric sort order, but I get the impression
  this is not the goal of the mechanism (I'll be somewhat relieved if
  you tell me I'm mistaken in that regard).
 
  Taking into account suggestion from Monty Taylor i’m bringing this
  discussion to much wider audience. And the question is: aren’t we
  doing something complex or are there any less complex ways to
  accomplish the initial idea of splitting requirements?
 
  As for taking this to a wider audience we (OpenStack) are already
  venturing into special snowflake territory with PBR, however
  requirements.txt is a convention used at least somewhat outside of
  OpenStack-related Python projects. It might make sense to get input
  from the broader Python packaging community on something like this
  before we end up alienating ourselves from them entirely.

 I’m not sure what exactly is trying to be achieved here, but I still assert
 that requirements.txt is the wrong place for pbr to be looking and it
 should
 instead look for dependencies specified inside of a setup.cfg.

 Sorry, I should have explained what I meant by 'inner dependency'. Let me
be clearer at this step to avoid a misunderstanding in terminology: an inner
dependency is a redirection from requirements.txt to another file
that contains additional dependencies (-r another_deps.txt).

 More on topic, I'm not sure what inner dependencies are, but if what you're
 looking for is optional dependencies that are only needed in specific
 situations, then you probably want extras, defined like:

 setup(
     extras_require={
         "somename": [
             "dep1",
             "dep2",
         ],
     },
 )


That might be the case, but since we want to split up the requirements into
per-driver dependencies, it would require checking whether setup.cfg can
handle the use of inner dependencies, for example:

setup(
    extras_require={
        "somename": [
            "-r another_file_with_deps.txt",
        ],
    },
)


 Then if you do ``pip install myproject[somename]`` it'll include dep1 and
 dep2 in the list of dependencies. You can also depend on this in other
 projects like:

 setup(
     install_requires=["myproject[somename]>=1.0"],
 )


That's what I've been looking for. For future installations it'll be very
useful: if the cloud deployer knows which AMQP service will be used, then
they'd be able to install only the variant of oslo.messaging that they
want, i.e. in project/requirements.txt:
...
oslo.messaging[amqp1]>=${version}
...

Really great input, thanks Donald. Appreciate it.

---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA




Kind regards,
Denis M.


[openstack-dev] [neutron][nova] GRE Performance Problem, Multi-queue and Bridge MTUs

2015-02-06 Thread Eren Türkay
Hello,

I was having serious network issues using GRE and I have been tracking them for
a few weeks. Finally, I solved the issue, but it needs a proper fix. To
summarize, I need a way to set the MTU of the br-int and br-tun interfaces,
enable multi-queue (MQ) support in libvirt, and run the ethtool -L eth0
combined N command in VMs.
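
For reference, the workaround boils down to something like the following
(the MTU value and queue count are illustrative; they depend on the tunnel
overhead and the VM's vCPU count):

    # on the compute host: let the bridges accommodate the tunnel overhead
    ip link set dev br-int mtu 1546
    ip link set dev br-tun mtu 1546
    # inside each VM: spread packet processing across N virtio-net queues
    ethtool -L eth0 combined 4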

The detailed bug report and the explanation for the issue is here:
https://bugs.launchpad.net/neutron/+bug/1419069

There is a blueprint [0] on MQ support but it wasn't accepted.

What can we do about this performance problem and a possible fix? Most people
complain about GRE/VXLAN performance, and this solution appears to work. I
created the bug report and am letting the list know so we can work on a better,
more flexible solution. Please ignore the ugly patches; they are just a proof
of concept.

Regards,
Eren

[0] https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue

-- 
System Administrator
https://skyatlas.com/



Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-02-06 Thread Adam Young
Drop.  It is wasting cycles, and it is not something we should use in 
production.  Migrations specific to SQLite are the most time-consuming 
workarounds we have.  SQLite does not suit our development approach.




On 02/03/2015 01:32 PM, Georgy Okrokvertskhov wrote:
I think we should switch to a clean migration path. We do have 
production installations, but we can handle the initial db upgrade case by 
case for customers. It is better to fix this issue now, when we have 
few customers, rather than later at a larger scale.


Thanks
Georgy

On Tue, Feb 3, 2015 at 9:05 AM, Mike Bayer mba...@redhat.com wrote:




Andrew Pashkin apash...@mirantis.com wrote:

 Mike Bayer wrote:
 there’s always a naming convention in place; all databases other than
 SQLite produce them on the fly if you don’t specify one.  The
purpose
 of the Alembic/SQLAlchemy naming_convention feature is so that you
 have *one* naming convention, rather than N unpredictable
conventions.
 I’m not sure if you’re arguing the feature should not be used. 
IMHO

 it should definitely be used for an application that is deploying
 cross-database.  Otherwise you have no choice but to hardcode the
 naming conventions of each target database individually in all
cases
 that you need to refer to them.
 You can't just bring SA/Alembic naming conventions into the
project,
 because they will collide with auto-generated constraint names.

I was proposing a way to fix this for the murano project which
only appears to have four migrations so far, but with the
assumption that there are existing production environments which
cannot do a full rebuild.


  So you need to hardcode reverse-engineered constraint names into the
  old migrations and then add a new migration that renames constraints
  according to the naming conventions.
  OR you need to drop the old migrations and create new ones with naming
  conventions - that will be backward incompatible, but cleaner.


My proposal was to essentially do both strategies.  Build out
fully clean migrations from the start, but also add an additional
“conditional” migration that will repair a Postgresql / MySQL
database that is already at the head, and is detected as having
the older naming convention.  Because openstack does not appear to
use offline migrations, this would be doable, though not
necessarily worth it.

If Murano can afford to just restart with clean migrations and has
no production deployments yet which would be disrupted by a full
rebuild, then sure, just do this.






 On 03.02.2015 18:32, Mike Bayer wrote:
  Andrew Pashkin apash...@mirantis.com wrote:

 Mike Bayer wrote:
 The patch seems to hardcode the conventions for MySQL and
Postgresql.
 The first thought I had was that in order to remove the
dependence
 on them here, you’d need to instead simply turn off the
 “naming_convention” in the MetaData if you detect that you’re
on one
 of those two databases. That would be a safer idea than trying to
 hardcode these conventions (and would also work for other kinds
 of backends).
  With your solution it will still be necessary for developers
  to guess constraint names when writing new migrations. And it will
  be even harder, because they will also need to handle the case of
  naming conventions.

 there’s always a naming convention in place; all databases
other than SQLite produce them on the fly if you don’t specify
one.  The purpose of the Alembic/SQLAlchemy naming_convention
feature is so that you have *one* naming convention, rather than N
unpredictable conventions.   I’m not sure if you’re arguing the
feature should not be used.  IMHO it should definitely be used for
an application that is deploying cross-database.  Otherwise you
have no choice but to hardcode the naming conventions of each
target database individually in all cases that you need to refer
to them.
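
  For reference, a minimal sketch of the feature (the token choices are
  illustrative):

      from sqlalchemy import MetaData

      convention = {
          "ix": "ix_%(column_0_label)s",
          "uq": "uq_%(table_name)s_%(column_0_name)s",
          "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
          "pk": "pk_%(table_name)s",
      }

      # constraints created against this MetaData get the same deterministic
      # name on every backend, instead of N per-database defaults
      metadata = MetaData(naming_convention=convention)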




 Mike Bayer wrote:
 However, it’s probably worthwhile to introduce a migration
that does
 in fact rename existing constraints on MySQL and Postgresql.
 Yes, that's what I want to do in case of the first solution.

 Mike Bayer wrote:
 Another possible solution is to drop all current migrations and
 introduce new one with correct names.
 you definitely shouldn’t need to do that.
 Why?

 On 30.01.2015 22:00, Mike Bayer wrote:
  Andrew Pashkin apash...@mirantis.com wrote:

 Working on this issue I encountered another problem.

  Most indices in the project have no names, and because of that
  developers must reverse-engineer them in every migration.
 Read about that 

Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 5:54 PM, Doug Hellmann d...@doughellmann.com wrote:



 On Fri, Feb 6, 2015, at 09:56 AM, Denis Makogon wrote:
  On Fri, Feb 6, 2015 at 4:16 PM, Donald Stufft don...@stufft.io wrote:
 
  
On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org
 wrote:
   
On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
As part of oslo.messaging initiative to split up requirements into
certain list of per messaging driver dependencies
[...]
   
I'm curious what the end goal is here... when someone does `pip
install oslo.messaging` what do you/they expect to get installed?
Your run-parts style requirements.d plan is sort of
counter-intuitive to me in that I would expect it to contain
number-prefixed sublists of requirements which should be processed
collectively in an alphanumeric sort order, but I get the impression
this is not the goal of the mechanism (I'll be somewhat relieved if
you tell me I'm mistaken in that regard).
   
Taking into account suggestion from Monty Taylor i’m bringing this
discussion to much wider audience. And the question is: aren’t we
doing something complex or are there any less complex ways to
accomplish the initial idea of splitting requirements?
   
As for taking this to a wider audience we (OpenStack) are already
venturing into special snowflake territory with PBR, however
requirements.txt is a convention used at least somewhat outside of
OpenStack-related Python projects. It might make sense to get input
from the broader Python packaging community on something like this
before we end up alienating ourselves from them entirely.
  
   I’m not sure what exactly is trying to be achieved here, but I still
 assert
   that requirements.txt is the wrong place for pbr to be looking and it
   should
   instead look for dependencies specified inside of a setup.cfg.
  
   Sorry, i had to explain what i meant by saying 'inner dependency'. Let
 me
  be more clear at this step to avoid misunderstanding in terminology.
  Inner  dependency - is a redirection from requirements.txt to another
  file
  that contains additional dependencies (-r another_deps.txt)
 
   More on topic, I'm not sure what inner dependencies are, but if what
   you're
   looking for is optional dependencies that only are needed in specific
   situation
   then you probably want extras, defined like:
  
    setup(
        extras_require={
            "somename": [
                "dep1",
                "dep2",
            ],
        },
    )
  
  
  That might be the case, but since we want to split up requirements into
  per-driver dependecies, it would require to check if setup.cfg can handle
  use of inner dependencies. for example:
 
   setup(
       extras_require={
           "somename": [
               "-r another_file_with_deps.txt",
           ],
       },
   )

 Let's see if we can make pbr add the extras_require values. We can then
 either specify the requirements explicitly in setup.cfg, or use a naming
 convention for separate requirements files. Either way, we shouldn't
 need setuptools to understand that we are managing the list of
 requirements in files.


That might be the case. And PBR is probably the only place where we can
put that logic, since distutils can already do that.

Doug, I will take a look at PBR and will try to figure out the easiest way
to get extras_require into it. Thanks for the input.
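
As a starting point, here is a hypothetical sketch of what per-driver extras
could look like in setup.cfg, assuming pbr grows support for an
[extras]-style section (the section name, driver names, and dependencies
are all illustrative):

    [extras]
    amqp1 =
        pyngus
    rabbit =
        kombu
    qpid =
        qpid-python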


 
   Then if you do ``pip install myproject[somename]`` it'll include dep1
 and
   dep2
   in the list of dependencies, you can also depend on this in other
 projects
   like:
  
    setup(
        install_requires=["myproject[somename]>=1.0"],
    )
  
  
  That's i've been looking for, so, for future installations it'll be very
  useful if cloud deployer knows which AMQP service will be used,
  then he'd be able to install only that type of oslo.messaging that he
  wants
  i.e.
 
  project/requirements.txt:
  ...
  oslo.messaging[amqp1]>=${version}
 
  ...
 
  Really great input, thanks Donald. Appreciate it.
 
  ---
   Donald Stufft
   PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
  
  
  
  
 
  Kind regards,
  Denis M.
 

Re: [openstack-dev] Status of NovaOrchestration

2015-02-06 Thread Cameron Seader
Never mind, answered in IRC.

On 02/06/2015 09:33 AM, Cameron Seader wrote:
 Is this something that was implemented? If so which version of OpenStack?
 https://wiki.openstack.org/wiki/NovaOrchestration#Long_Running_Transactions_in_Nova
 
 Thanks,
 

-- 
Cameron Seader
Sr. Systems Engineer
SUSE
c...@suse.com
(W)208-577-6857
(M)208-420-2167

Register for SUSECon 2015
www.susecon.com



Re: [openstack-dev] [nova] Feature Freeze Exception Request

2015-02-06 Thread Silvan Kaiser
Of course I asked Daniel directly prior to publicly declaring him a
sponsor!!! :-)

2015-02-06 16:55 GMT+01:00 Matt Riedemann mrie...@linux.vnet.ibm.com:



 On 2/6/2015 7:28 AM, Silvan Kaiser wrote:

 Hello!

 I am requesting a feature freeze exception for Kilo-2 milestone
 regarding https://review.openstack.org/#/c/110722/ .

 This change adds support for using the Quobyte Storage system for
 provisioning images in Nova. It works in conjunction with the Quobyte
 driver in Cinder (which was merged at Kilo-1).
  Refraining from merging would mean a delay until the L release, all the while
  leaving a largely useless driver in Cinder.

 Jay Pipes, Matt Riedemann and Daniel Berrange kindly declared
 sponsorship for this FFE.


  Daniel didn't actually say he'd sponsor this; I said in the review that I
  *thought* he might be a possible third sponsor if it came to that. :)

  I did say I'd sponsor this though. It's close, but it had enough comments from
  me that I thought it warranted a -1 in its current form.

 I realize this isn't a priority blueprint though and it's up to the
 nova-drivers team to decide on it, but FWIW it's self-contained for the
 most part and has been sitting around for a long time and I feel that lack
 of reviews shouldn't punish it in that regard (hopefully this doesn't open
 up a ton of other my thing has been around forever without reviews too so
 give me an exception also kind of precedent thing, not my intention).


 Please feel free to contact me regarding further FFE procedure or if
  there are any more questions (sil...@quobyte.com, kaisers/casusbelli in irc).

 Best regards
 Silvan Kaiser



 --
 *Quobyte* GmbH
 Boyenstr. 41 - 10115 Berlin-Mitte - Germany
  +49-30-814 591 800 - www.quobyte.com
 Amtsgericht Berlin-Charlottenburg, HRB 149012B
 management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender


 


 --

 Thanks,

 Matt Riedemann





-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender


[openstack-dev] [nova] release request for python-novaclient

2015-02-06 Thread Matt Riedemann
We haven't done a release of python-novaclient in a while (2.20.0 was 
released on 2014-09-20, before the Juno release).


It looks like there are some important feature adds and bug fixes on 
master so we should do a release, specifically to pick up the change for 
keystone v3 support [1].


So can this be done now, or should this wait until closer to the Kilo 
release? (Library releases are cheap, so I don't see why we'd wait.)


[1] https://review.openstack.org/#/c/105900/

--

Thanks,

Matt Riedemann




[openstack-dev] Status of NovaOrchestration

2015-02-06 Thread Cameron Seader
Is this something that was implemented? If so which version of OpenStack?
https://wiki.openstack.org/wiki/NovaOrchestration#Long_Running_Transactions_in_Nova

Thanks,

-- 
Cameron Seader
Sr. Systems Engineer
SUSE
c...@suse.com
(W)208-577-6857
(M)208-420-2167

Register for SUSECon 2015
www.susecon.com



Re: [openstack-dev] [horizon][keystone]

2015-02-06 Thread Fox, Kevin M
But selecting from a list is harder than from a grid. A grid would give you 
ample room for icons, which also makes finding what you're looking for easier. 
Having a bit more space makes selecting the thing you want with a mouse (or 
finger on a tablet) easier.

To make it not visually overloaded, you hide the rest of the form bits; then, 
once you select a method (user/password, for example), the grid slides out of 
the way and the username/pw box shows up. Leave the selected box up in the 
corner with an X on it, so they can cancel what they selected, and slide the 
grid back when clicked so they can select something different.

Thanks,
Kevin

From: Thai Q Tran [tqt...@us.ibm.com]
Sent: Thursday, February 05, 2015 11:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon][keystone]

Hi Ioram,
Thanks for the feedback. I agree that the names are hard to follow; they can 
change to something more intuitive, or we can even provide a tooltip for more 
information.
As for the look and feel, I don't agree that it's easier if all the options are 
listed. Imagine if you had 5 different ways for users to log in and they were 
all shown at once. That's a lot to take in.
This approach keeps things simple; it's really not that hard to pick from a list.

Hi Anton,
I'm just building on top of the visuals we already have without changing things 
too drastically. If you have a better idea, I would love to see it.

-Ioram Schechtman Sette i...@cin.ufpe.br wrote: -
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
From: Ioram Schechtman Sette i...@cin.ufpe.br
Date: 02/05/2015 03:15AM
Subject: Re: [openstack-dev] [horizon][keystone]

Hi Thai,

I agree with Anton that the names are not intuitive for users.

I would use something like:
- Local authentication (for local credentials)
- ?? (I also have no idea what a Default protocol is)
- Authenticate using name of IdP or federation (something the user can easily 
understand and decide whether or not to use; this one is for the discovery 
service or a remote IdP)

Here at the University of Kent we used another approach: instead of selecting 
the method from a list/combo box, we present all the options on a single screen.
It's not beautiful, but it is functional. I think it would be easier for users 
to have all the options in a single interface, as long as it doesn't become too 
loaded (visually polluted).

[Inline image 1]
Regards,
Ioram


2015-02-05 9:20 GMT+00:00 Anton Zemlyanov azemlya...@mirantis.com:
Hi,

I guess Credentials means login and password. I have no idea what Default 
Protocol or Discovery Service are.
The proposed UI is rather embarrassing.

Anton

On Thu, Feb 5, 2015 at 12:54 AM, Thai Q Tran tqt...@us.ibm.com wrote:
Hi all,

I have been helping with the websso effort and wanted to get some feedback.
Basically, users are presented with a login screen where they can select: 
credentials, default protocol, or discovery service.
If user selects credentials, it works exactly the same way it works today.
If user selects default protocol or discovery service, they can choose to be 
redirected to those pages.

Keep in mind that this is a prototype, early feedback will be good.
Here are the relevant patches:
https://review.openstack.org/#/c/136177/
https://review.openstack.org/#/c/136178/
https://review.openstack.org/#/c/151842/

I have attached the files and present them below:











Re: [openstack-dev] Status of NovaOrchestration

2015-02-06 Thread Matt Riedemann



On 2/6/2015 10:44 AM, Cameron Seader wrote:

Nevermind, answered in IRC

On 02/06/2015 09:33 AM, Cameron Seader wrote:

Is this something that was implemented? If so, in which version of OpenStack?
https://wiki.openstack.org/wiki/NovaOrchestration#Long_Running_Transactions_in_Nova

Thanks,





Maybe share your findings? :)

--

Thanks,

Matt Riedemann




[openstack-dev] [all][oslo.db] Repeatable Read considered harmful

2015-02-06 Thread Matthew Booth
I was surprised recently to discover that MySQL uses repeatable read for
transactions by default. Postgres uses read committed by default, and
SQLite uses serializable. We don't set the isolation level explicitly
anywhere, so our applications are running under different isolation
levels depending on backend. This doesn't sound like a good idea to me.
It's one thing to support multiple sql syntaxes, but different isolation
levels have different semantics. Supporting that is much harder, and
currently we're not even trying.

I'm aware that the same isolation level on different databases will
still have subtly different semantics, but at least they should agree on
the big things. I think we should pick one, and it should be read committed.
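
For illustration, here is a minimal sketch (plain SQLAlchemy, which
oslo.db wraps; the connection URL is hypothetical) of pinning the level
explicitly so behaviour no longer depends on the backend default:

    from sqlalchemy import create_engine

    # Without isolation_level, MySQL/InnoDB defaults to REPEATABLE READ
    # and PostgreSQL to READ COMMITTED; pinning it makes them agree.
    engine = create_engine('mysql://user:secret@localhost/nova',
                           isolation_level='READ COMMITTED')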

Also note that 'repeatable read' on both MySQL and Postgres is actually
snapshot isolation, which isn't quite the same thing. For example, it
doesn't get phantom reads.

The most important reason I think we need read committed is recovery
from concurrent changes within the scope of a single transaction. To
date, in Nova at least, this hasn't been an issue as transactions have
had an extremely small scope. However, we're trying to expand that scope
with the new enginefacade in oslo.db:
https://review.openstack.org/#/c/138215/ . With this expanded scope,
transaction failure in a library function can't simply be replayed
because the transaction scope is larger than the function.

So, 3 concrete examples of how repeatable read will make Nova worse:

* https://review.openstack.org/#/c/140622/

This was committed to Nova recently. Note how it involves a retry in the
case of concurrent change. This works fine, because the retry creates
a new transaction. However, if the transaction was larger than the scope
of this function this would not work, because each iteration would
continue to read the old data. The solution to this is to create a new
transaction. However, because the transaction is outside of the scope of
this function, the only thing we can do locally is fail. The caller then
has to re-execute the whole transaction, or fail itself.

This is a local concurrency problem which can be very easily handled
locally, but not if we're using repeatable read.
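
To make this concrete, here is a hedged sketch of that compare-and-swap
pattern inside one large transaction (Instance is a stand-in model, not
the code from the review above). Under read committed the re-read inside
the loop sees the concurrent writer's committed value and the retry
converges; under repeatable read the plain re-read keeps returning the
transaction's original snapshot, so the loop cannot make progress:

    def cas_update_vm_state(session, uuid, expected, new):
        for _ in range(5):
            # Compare-and-swap: update only if nobody changed it under us.
            rows = session.query(Instance).\
                filter_by(uuid=uuid, vm_state=expected).\
                update({'vm_state': new}, synchronize_session=False)
            if rows:
                return True
            # Lost the race: re-read the current state and try again.
            expected = session.query(Instance.vm_state).\
                filter_by(uuid=uuid).scalar()
        return False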

*
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4749

Nova has multiple functions of this type which attempt to update a
key/value metadata table. I'd expect to find multiple concurrency issues
with this if I stopped to give it enough thought, but concentrating just
on what's there, notice how the retry loop starts a new transaction. If
we want to get to a place where we don't do that, with repeatable read
we're left failing the whole transaction.
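
A hedged sketch of that existing pattern (InstanceMetadata is a stand-in
model): each attempt opens a brand-new session and transaction, which is
exactly what lets the re-read escape the old snapshot, and exactly what
we can no longer do once the transaction belongs to the caller:

    from oslo_db import exception as db_exc

    def set_metadata_key(get_session, uuid, key, value):
        for _ in range(5):
            session = get_session()  # new session == new transaction
            try:
                with session.begin():
                    row = session.query(InstanceMetadata).\
                        filter_by(instance_uuid=uuid, key=key).first()
                    if row is None:
                        session.add(InstanceMetadata(
                            instance_uuid=uuid, key=key, value=value))
                    else:
                        row.value = value
                return
            except db_exc.DBDuplicateEntry:
                # Another writer created the key first; the next attempt's
                # fresh transaction will see it and take the update path.
                continue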

* https://review.openstack.org/#/c/136409/

This one isn't upstream, yet. It's broken, and I can't currently think
of a solution if we're using repeatable read.

The issue is atomic creation of a shared resource. We want to handle a
creation race safely. This patch:

* Attempts to read the default (it will normally exist)
* Creates a new one if it doesn't exist
* Goes back to the start if creation failed due to a duplicate

Seems fine, but it will fail, because under repeatable read (no phantom
reads) the re-read will continue to not return the new value. The only
way to see the new row is a new transaction. As that will no longer be
within the scope of this function, the only solution will be to fail.
Read committed could continue without failing.
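
As a hedged sketch of the pattern that patch attempts (SecurityGroup is a
stand-in model; a savepoint keeps the surrounding transaction alive after
the duplicate error):

    from sqlalchemy.exc import IntegrityError

    def get_or_create_default_group(session, project_id):
        while True:
            group = session.query(SecurityGroup).\
                filter_by(project_id=project_id, name='default').first()
            if group is not None:
                return group
            try:
                group = SecurityGroup(project_id=project_id, name='default')
                # SAVEPOINT: a duplicate-key failure rolls back only this
                # block, not the whole transaction.
                with session.begin_nested():
                    session.add(group)
                return group
            except IntegrityError:
                # Lost the creation race; loop and re-read. Under read
                # committed the re-read sees the winner's row and we
                # terminate; under repeatable read it never will.
                continue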

Incidentally, this currently works by using multiple transactions, which
we are trying to avoid. It has also been suggested that in this specific
instance the default security group could be created with the project.
However, that would both be more complicated, because it would require
putting a hook into another piece of code, and less robust, because it
wouldn't recover if somebody deleted the default security group.


To summarise, with repeatable read we're forced to abort the current
transaction to deal with certain relatively common classes of
concurrency issue, whereas with read committed we can safely recover. If
we want to reduce the number of transactions we're using, which we do,
the impact of this is going to dramatically increase. We should
standardise on read committed.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] Need nova-specs core reviews for Scheduler spec

2015-02-06 Thread Nikola Đipanov
On 02/06/2015 02:15 PM, Ed Leafe wrote:
 At the mid-cycle we discussed the last spec for the scheduler cleanup: 
 Isolate Scheduler DB for Instances
 
 https://review.openstack.org/#/c/138444/
 
 There was a lot of great feedback from those discussions, and that has been 
 incorporated into the spec. It has been re-reviewed by most of the scheduler 
 team with several +1s, but we really need the cores to approve it so we can 
 move ahead with the patches.
 

Hey Ed,

I've left a comment on the spec - basically I don't think this is an
approach we should take.

Since I was not at the midcycle, I am sorry the discussions happened so
close to feature freeze, leaving not enough time to get broader feedback
from the community.

Best,
N.




Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Ben Nemec
On 02/06/2015 10:12 AM, Doug Hellmann wrote:
 
 
 On Fri, Feb 6, 2015, at 07:37 AM, Denis Makogon wrote:
 Hello to All.


 As part of oslo.messaging initiative to split up requirements into
 certain
 list of per messaging driver dependencies
 https://review.openstack.org/#/c/83150/

 it was figured that we need to find a way to use pip inner dependencies
 and
 we were able to do that, short info our solution and how it works:



 - This is how a regular requirements.txt looks:

   dep1
   …
   dep n

 - This is how a requirements.txt with inner dependencies looks:

   dep1
   -r somefolder/another-requirements.txt
   -r completelyanotherfolder/another-requirements.txt
   …
   dep n

 That’s what we’ve did for oslo.messaging. But we’ve faced with problem
 that
 was defined as openstack-infra/project-config

 tool issue, this tool called project-requirements-change
 https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/project-requirements-change.py
 .As you can see it’s not able to handle inner dependencies in any

 of requirements.txt files, as you can see this tool expects to parse only
 explicit set of requirements (see regular requirements.txt definition
 above).
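
 For illustration, a minimal sketch (not the actual tool code) of the kind
 of recursive expansion needed to flatten -r includes:

     import os

     def read_requirements(path, seen=None):
         # Flatten a requirements file, following nested "-r" includes.
         seen = seen if seen is not None else set()
         if path in seen:  # guard against include cycles
             return []
         seen.add(path)
         base = os.path.dirname(path)
         reqs = []
         with open(path) as f:
             for line in f:
                 line = line.strip()
                 if not line or line.startswith('#'):
                     continue
                 if line.startswith('-r'):
                     nested = os.path.join(base, line[2:].strip())
                     reqs.extend(read_requirements(nested, seen))
                 else:
                     reqs.append(line)
         return reqs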

 So, I decided to fix that tool to make it able to follow inner
 dependencies; here https://review.openstack.org/#/c/153227/ is what I
 have as of yesterday.

 Taking into account a suggestion from Monty Taylor, I'm bringing this
 discussion to a much wider audience.

 And the question is: are we doing something overly complex, or are there
 less complex ways to accomplish the initial idea of splitting
 requirements?
 
 After re-reading this message, and discussing it with a few folks in the
 infra channel on IRC, I'm a little concerned that we don't have enough
 background to fully understand the problem and proposed solution. bnemec
 pointed out that the discussion happened before we had the spec process,
 but now that we do have that process I think the best next step is to
 have a spec written in oslo-specs describing the problem we're trying to
 solve and the approaches that were discussed. This may really just be
 summarizing the existing discussions, but let's get all of that
 information into a single document before we go any further.

For reference, here are the major discussions I'm aware of around this
issue:

http://lists.openstack.org/pipermail/openstack-dev/2014-February/026976.html

http://lists.openstack.org/pipermail/openstack-dev/2015-January/055229.html

https://bugs.launchpad.net/heat/+bug/1225191

https://bugs.launchpad.net/neutron/+bug/1225232

-Ben


