[openstack-dev] [nova] upgrade_levels in nova upgrade

2015-05-07 Thread Guo, Ruijing
Hi, All,

In the existing design, we need to reconfigure nova.conf and restart the nova services
during post-upgrade cleanup, as described in https://www.rdoproject.org/Upgrading_RDO_To_Icehouse.

I propose to send an RPC message to remove the RPC API version pin instead:



1.   Stop services (same as existing)

2.   Upgrade packages (same as existing)

3.   Upgrade the DB schema (same as existing)

4.   Start services with an upgrade parameter (so that nova will use the old
version of the RPC API; we may add more parameters for other purposes,
including querying upgrade progress)

5.   Send an RPC message to remove the RPC API version pin (so we don't need to
reconfigure nova.conf and restart the nova services).
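
For reference, the pin that step 5 would remove is the one operators set today in
nova.conf during the upgrade; a minimal sketch of that setting (the release name is
only an example):

    [upgrade_levels]
    compute = icehouse

With this proposal, the same pin would be applied and later removed over RPC instead
of by editing nova.conf and restarting the services.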

What do you think?

Thanks,
-Ruijing


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread marios
On 07/05/15 05:32, Dan Prince wrote:
 Looking over some of the Puppet pacemaker stuff today. I appreciate all
 the hard work going into this effort but I'm not quite happy about all
 of the conditionals we are adding to our puppet overcloud_controller.pp
 manifest. Specifically it seems that every service will basically have
 its resources duplicated for pacemaker and non-pacemaker version of the
 controller by checking the $enable_pacemaker variable.
 
 After seeing it play out for a couple services I think I might prefer it
 better if we had an entirely separate template for the pacemaker
 version of the controller. One easy way to kick off this effort would be
 to use the Heat resource registry to enable pacemaker rather than a
 parameter.
 
 Something like this:
 
 https://review.openstack.org/#/c/180833/

+1, I like this as an idea. Given we've already got quite a few reviews
in flight making changes to overcloud_controller.pp (we're still working
out how to enable services, and enabling them) I'd be happier to let those land and
do the tidy-up once it settles (early next week at the latest),
especially since there's some working out and refactoring to do still.

thanks, marios

 
 If we were to split out the controller into two separate templates I
 think it might be appropriate to move a few things into puppet-tripleo
 to de-duplicate a bit. Things like the database creation for example.
 But probably not all of the services... because we are trying as much as
 possible to use the stackforge puppet modules directly (and not our own
 composition layer).
 
 I think this split is a good compromise and would probably even speed up
 the implementation of the remaining pacemaker features too. And removing
 all the pacemaker conditionals we have from the non-pacemaker version
 puts us back in a reasonably clean state I think.
 
 Dan
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Exception in rpc_dispatcher

2015-05-07 Thread Mehdi Abaakouk



Hi,

This is a well-known issue when eventlet monkey patching is not done correctly.
The application must do the monkey patching before anything else, even before
loading any module other than eventlet.
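
In other words, the very first lines of the process entry point should look roughly
like this (a minimal sketch; the agent import below is illustrative, not real code):

    import eventlet
    eventlet.monkey_patch()

    # Anything else is imported only after the patching above, so that
    # oslo.messaging and friends pick up the patched socket/thread modules.
    from myagent import main   # illustrative import, stands for the real entry point
    main()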


You can find more information here: 
https://bugs.launchpad.net/oslo.messaging/+bug/1288878


Or some examples of how nova and ceilometer ensure that:

 https://github.com/openstack/nova/blob/master/nova/cmd/__init__.py
 
https://github.com/openstack/ceilometer/blob/master/ceilometer/cmd/__init__.py



More recent versions of oslo.messaging already output a better error
message in this case.


Cheers,

- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2015-05-07 08:11, Vikash Kumar a écrit :

Hi,

   I am getting this error on the agent side. I am getting the same message
twice, one after the other.

2015-05-07 11:39:28.189 11363 ERROR oslo.messaging.rpc.dispatcher
[req-43875dc3-99a9-4803-aba2-5cff22943c2c ] Exception during message
handling: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 179, in _dispatch
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher     localcontext.clear_local_context()
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/localcontext.py", line 55, in clear_local_context
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher     delattr(_STORE, _KEY)
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher AttributeError: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QoS] Neutron QoS (Quality Of Service) Update

2015-05-07 Thread Gal Sagie
Hello All,

I think that the Neutron QoS effort is progressing to a critical point, and
I asked Miguel if I could post an update on our progress.

First, I would like to thank Sean and Miguel for running this effort, and
everyone else that is involved; I personally think it's on the right track.
However, I would like to see more people involved, especially more experienced
Neutron members, because I believe we want to make the right decisions
and learn from past mistakes when making the API design.

Feel free to update in the meeting wiki [1], and the spec review [2]

*Topics*

*API microversioning spec implications [3]*
QoS can benefit from this design; however, some concerns were raised that it
will only be available in the middle of the L cycle.
I think a better view is needed of how this aligns with the new QoS design, and
any feedback/recommendation is useful.

*Changes to the QoS API spec: scoping into bandwidth limiting*
At this point the concentration is on the API and implementation
of bandwidth limiting.

However it is important to keep the design easily extensible for some next
steps, like traffic classification and marking.
*Changes to the QoS API spec: modeling of rules (class hierarchy)
(Guarantee split out)*
There is a QoSPolicy which is composed of different QoSRules, and there is
a discussion of splitting the rules into different types, like QoSTypeRule.
(This is in order to support easy extension of this model, by adding new types
of rules which extend the possible parameters.)

Plugins can then separate optional aspects into separate rules.
Any feedback on this approach is appreciated.
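
To make the hierarchy under discussion a bit more concrete, here is a rough sketch in
Python (class and field names are purely illustrative, not taken from the spec):

    # A policy is a named collection of rules; each rule type only carries
    # the parameters that are relevant to it.
    class QoSRule(object):
        def __init__(self, rule_id):
            self.id = rule_id

    class QoSBandwidthLimitRule(QoSRule):
        def __init__(self, rule_id, max_kbps, max_burst_kbps):
            super(QoSBandwidthLimitRule, self).__init__(rule_id)
            self.max_kbps = max_kbps
            self.max_burst_kbps = max_burst_kbps

    class QoSPolicy(object):
        def __init__(self, name, rules=None):
            self.name = name
            self.rules = rules or []

New rule types (e.g. for marking) would then be added as further QoSRule subclasses
without touching the existing ones.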

*Discuss multiple API end points (per rule type) vs single*
In summary this means: in the above model, do we want to support
/v1/qosrule/..  or  /v1/qostyperule/... APIs?
I think the consensus right now is that the latter is more flexible.

Miguel is also checking the possibility of using something like:
/v1/qosrule/type/... kind of parsing
Feedback is desired here too :)

*Traffic Classification considerations*
The idea right now is to extract the TC classification into another data model
and attach it to a rule; that way there is no need to repeat the same filters
for the same kind of traffic.

Of course we need to consider here what it means to update a classifier,
and not to introduce too many dependencies.

*The ingress vs egress differences and issues*
Egress bandwidth limiting is much more useful and better supported.
There is still doubt about support for ingress bandwidth limiting in OVS; if
anyone knows whether ingress QoS is supported in OVS, we want your feedback :)
(For example implementing the OF1.3 metering spec)

Thanks all (Miguel, Sean or anyone else, please update this if I forgot
anything)

[1] https://wiki.openstack.org/wiki/Meetings/QoS
[2] https://review.openstack.org/#/c/88599/
[3] https://review.openstack.org/#/c/136760/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] what is the direction of trove?

2015-05-07 Thread Li Tianqing
Hello,
1. Why did the trove-mgmt-cli disappear?
2. Why do we put the instances into the user's tenant? Is there no Trove service tenant?
3. The VM has two network interfaces; how do we make the VM connect to RabbitMQ
and to the billing server?


 




--

Best
Li Tianqing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 11:15 AM, marios wrote:

On 07/05/15 05:32, Dan Prince wrote:


[..]


Something like this:

https://review.openstack.org/#/c/180833/


+1, I like this as an idea. Given we've already got quite a few reviews
in flight making changes to overcloud_controller.pp (we're still working
out how to enable services, and enabling them) I'd be happier to let those land and
do the tidy-up once it settles (early next week at the latest),
especially since there's some working out and refactoring to do still.


+1 on not blocking ongoing work.

As of today a split would cause the two .pp files to have a lot of duplicated
data, making them no better than a single one with the ifs, IMHO.

We should probably move the duplicated parts out of the existing .pp first
(see my other email on the matter).

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-07 Thread Filip Blaha
Thanks for the confirmation that trying to SSH directly from Mistral to a VM via
its fixed IP is not a good idea.


Btw, it would probably not work even if Mistral ran on the same network node
hosting the router for the tenant, because Neutron creates a separate
network namespace (ip netns, qrouter-x) for each router, and VMs are
accessible only from that namespace, not from the default one.


Filip


On 05/06/2015 06:31 PM, Georgy Okrokvertskhov wrote:



On Wed, May 6, 2015 at 9:26 AM, Fox, Kevin M kevin@pnnl.gov wrote:


If your Mistral engine is on the same host as the network node
hosting the router for the tenant, then it would probably work.
There are a lot of conditions in that statement though... Too many
for my tastes. :/

While I dislike agents running in the vm's, this still might be a
good use case for one...

This would also probably be a good use case for Zaqar I think.
Have a generic "run shell commands from a Zaqar queue" agent that
pulls commands from a Zaqar queue and executes them.

The vm's don't have to be directly reachable from the network
then. You just have to push messages into Zaqar.

From Murano's perspective though, maybe it shouldn't care. Should
Mistral abstract away how to execute the action, leaving it up to
Mistral how to get the action to the vm? If that's the case, then
ssh vs queue/agent is just a Mistral implementation detail? Maybe
the OpenStack Deployer chooses what's the best route for their cloud?

Thanks,
Kevins


+1 for MQ.

That is the path which has proved itself to work in most cases.

-1 for ssh as this is a big headache.

Thanks,
Gosha


From: Filip Blaha [filip.bl...@hp.com]
Sent: Wednesday, May 06, 2015 8:42 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [Murano] [Mistral] SSH workflow action

Hello

We are considering implementing actions on services of a Murano
environment via Mistral workflows. We are considering whether the Mistral
std.ssh action could be used to run some command on an instance. An example
of such an action in Murano could be a restart action on a MySQL DB service.
The Mistral workflow would ssh to the instance running MySQL and run
"service mysql restart". From my point of view, trying to use SSH to
access instances from a Mistral workflow is not a good
idea, but I would like to confirm it.

The biggest problem I see there is OpenStack networking. The Mistral service
running on some OpenStack node would not be able to access an instance via
its fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh
from the namespace of its gateway router, e.g. "ip netns exec qrouter-... ssh
cirros@10.0.0.5", but I think it is not good to rely on an implementation
detail of Neutron and use it. In a multinode OpenStack deployment it
could be even more complicated.

In other words, I am asking whether we can use the std.ssh Mistral action to
access instances via ssh on their fixed IPs? I think not, but I would
like to confirm it.

Thanks
Filip

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com/
Tel. +1 650 963 9828
Mob. +1 650 996 3284


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc. - Role Assignment

2015-05-07 Thread David Chadwick
Hi Tim

On 06/05/2015 21:53, Tim Hinrichs wrote:
 I wondered if we could properly protect the API call for adding a new
 Role using the current mechanism.  So I came up with a simple example.
 
 Suppose we want to write policy about the API call: addRole(user,
 role-name).  If we’re hosting both Pepsi and Coke, we want to write a
 policy that says that only someone in the Pepsi admin role can change
 roles for Pepsi users (likewise for Coke).  We’d want to write something
 like…
 
 addRole(user, role) is permitted for caller if 
 caller belongs to the Pepsi-admin role and
 user belongs to the Pepsi role
 
 The policy engine knows if “caller belongs to the Pepsi-admin role”
 because that’s part of the token.  But the policy engine doesn’t know if
 “user belongs to the Pepsi role” because user is just an argument to
 the API call, so we don’t have role info about user.  This helps me
 understand *why* we can’t handle the multi-customer use case right now:
 the policy engine doesn’t have all the info it needs.
 
 But otherwise, it seems, we could handle the multi-customer use-case
 using mechanism that already exists.  Are there other examples where
 they can’t write policy because the engine doesn’t have enough info?  
 

Your simple example does not work in the federated case. This is because
role and attribute assignments are not done by Keystone, or by any part
of OpenStack, but by a remote IDP. It is assumed that the administrator
of this remote IDP knows who his users are, and will assign the correct
attributes to them. However, these are not necessarily OpenStack roles
(they most certainly won't be).

Therefore, we have built a perfectly good mechanism into Keystone to
ensure that the users from any IDP (Coke, Pepsi or Virgin Cola, etc.) get
the right Keystone/OpenStack role(s), and this is via attribute mapping.
When the mapping takes place, the user is in the process of logging in;
therefore Keystone knows the attributes of the user (assigned by the
IDP) and can therefore know which OpenStack role to assign to him/her.
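
For illustration, a minimal sketch of such an attribute mapping (the remote attribute
name, its value and the group ID below are made up):

    {
        "rules": [
            {
                "remote": [
                    {"type": "REMOTE_USER"},
                    {"type": "orgRole", "any_one_of": ["pepsi-admin"]}
                ],
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"id": "PEPSI_ADMIN_GROUP_ID"}}
                ]
            }
        ]
    }

Here the IDP-asserted "orgRole" attribute is what drives which Keystone group (and
hence which roles) the federated user ends up with.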

I hope this helps.

regards

David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

hi Dan!

On 05/07/2015 04:32 AM, Dan Prince wrote:

Looking over some of the Puppet pacemaker stuff today. I appreciate all
the hard work going into this effort but I'm not quite happy about all
of the conditionals we are adding to our puppet overcloud_controller.pp
manifest. Specifically it seems that every service will basically have
its resources duplicated for pacemaker and non-pacemaker version of the
controller by checking the $enable_pacemaker variable.


not sure about the meaning of 'resources duplicated', but I think it is
safe to say that the pacemaker ifs are there to cope mainly with the
following two cases:


1. when pacemaker is enabled, we don't want puppet to enable/start the service;
pacemaker will manage it, so we need to tell the module not to


2. when pacemaker is enabled, there are some pacemaker-related steps to be
performed, like adding a resource into the cluster so that it is
effectively monitoring the service status


in the future, we might need to pass some specific config params to a
module only when pacemaker is enabled, but that looks covered by 1) already
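
As a rough sketch, the kind of conditional that case 1 leads to in the manifest looks
something like this (the module and parameter names are only illustrative):

    if $enable_pacemaker {
      class { '::glance::api':
        manage_service => false,
        enabled        => false,
      }
      # plus the pacemaker resource definitions for the service
    } else {
      class { '::glance::api':
        manage_service => true,
      }
    }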



After seeing it play out for a couple services I think I might prefer it
better if we had an entirely separate template for the pacemaker
version of the controller. One easy way to kick off this effort would be
to use the Heat resource registry to enable pacemaker rather than a
parameter.

Something like this:

https://review.openstack.org/#/c/180833/

If we were to split out the controller into two separate templates I
think it might be appropriate to move a few things into puppet-tripleo
to de-duplicate a bit. Things like the database creation for example.
But probably not all of the services... because we are trying as much as
possible to use the stackforge puppet modules directly (and not our own
composition layer).


I think the change is good; I am assuming we don't want the shared parts
to get duplicated into the two .pp files though.


What is your idea about those shared parts? To move them into 
puppet-tripleo? To provision a shared .pp in addition to a 
differentiated top-level template maybe? Something else?

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Exception in rpc_dispatcher

2015-05-07 Thread Vikash Kumar
I did the following in my agent code:

import eventlet

eventlet.monkey_patch()

but I still see the same issue.

On Thu, May 7, 2015 at 1:22 PM, Mehdi Abaakouk sil...@sileht.net wrote:



 Hi,

 This is a well known issue when eventlet monkey patching is not done
 correctly.
 The application must do the monkey patching before anything else, even before
 loading any module other than eventlet.

 You can find more information here:
 https://bugs.launchpad.net/oslo.messaging/+bug/1288878

 Or some examples of how nova and ceilometer ensure that:

  https://github.com/openstack/nova/blob/master/nova/cmd/__init__.py

 https://github.com/openstack/ceilometer/blob/master/ceilometer/cmd/__init__.py


 More recent versions of oslo.messaging already output a better error
 message in this case.

 Cheers,

 - ---
 Mehdi Abaakouk
 mail: sil...@sileht.net
 irc: sileht



 Le 2015-05-07 08:11, Vikash Kumar a écrit :

 Hi,

I am getting this error in my agent side. I am getting same message
 twice, one after other.

 2015-05-07 11:39:28.189 11363 ERROR oslo.messaging.rpc.dispatcher
 [req-43875dc3-99a9-4803-aba2-5cff22943c2c ] Exception during message
 handling: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 Traceback
 (most recent call last):
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 134, in _dispatch_and_reply
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 incoming.message))
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 179, in _dispatch
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 localcontext.clear_local_context()
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/localcontext.py, line
 55,
 in clear_local_context
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 delattr(_STORE, _KEY)
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 AttributeError:
 _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Can't upload jar file to Job Binaries from Horizon

2015-05-07 Thread Li, Chen
Hi sahara,

I have a freshly installed devstack environment.

I tried to upload sahara/etc/edp-examples/edp-pig/trim-spaces/udf.jar to Job
Binaries (stored in the internal database) but it failed.
I get an error in horizon_error.log, which complains: UnicodeDecodeError: 'ascii'
codec can't decode byte 0xe6 in position 14: ordinal not in range(128).
(https://bugs.launchpad.net/sahara/+bug/1452116)

I checked everywhere I know, but can't find any clue why this happens, because
this used to work.

There is a message in locale/sahara.pot:
msgid "Job binary internal data must be a string of length greater than zero"
Does this mean I can't upload a jar file to a job binary because the job binary
internal data must be a string?

Is there anything I have missed?

Looking forward to your reply!

Thanks.
-chen




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] HEAD Object API status code

2015-05-07 Thread Ouchi, Atsuo
Hi Clay,

Thanks for the information. I am sorry that I made a mistake. You're right that
HEAD object returns 200, not 204; that's perfectly OK.

Meanwhile, HEAD account/container requests return 204 with Content-Length: 0. This is
against RFC7230, but from the discussion on Change-Id
I5ab1eb85e777f259f4bd73a3a4a1802901925ed7
I understand that it can't be fixed in an easy way, and may not be worth the
effort anyway.
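
For anyone who wants to reproduce the check, a HEAD request can be issued with curl,
e.g. (the endpoint, account and token below are placeholders):

    curl -I -H "X-Auth-Token: $TOKEN" \
        http://swift.example.com/v1/AUTH_test/some-container

and the status line and Content-Length header can then be inspected in the response.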

Atsuo
--
   Ouchi Atsuo / ouchi.at...@jp.fujitsu.com
Fujitsu Limited


-Original Message-
From: Clay Gerrard [mailto:clay.gerr...@gmail.com] 
Sent: Thursday, May 07, 2015 1:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Swift] HEAD Object API status code

Can you give an example of an Object HEAD request returning 204?  I tried a 
HEAD of an object with a body and also a HEAD of an object of length 0 and I 
seem to get 200's...

Containers and accounts are a little more interesting story... [2]

-Clay

2. https://review.openstack.org/#/c/32647/

On Wed, May 6, 2015 at 5:40 PM, Ouchi, Atsuo ouchi.at...@jp.fujitsu.com wrote:


Hello Swift developers,

I would like to ask you on a Swift API specification.

Swift returns 204 status code to a valid HEAD Object request with a 
Content-Length header,
whereas the latest HTTP/1.1 specification (RFC7230) states that you 
must not send
the header with a 204 status code.

 3.3.2.  Content-Length
(snip)
A server MUST NOT send a Content-Length header field in any 
response
with a status code of 1xx (Informational) or 204 (No Content).  A
server MUST NOT send a Content-Length header field in any 2xx
(Successful) response to a CONNECT request (Section 4.3.6 of
[RFC7231]).

What I would like to know is: when you designed the Swift APIs, what was the
reasoning
behind choosing the 204 status code for HEAD Object over other status codes
such as 200?

Thanks,
Atsuo
--
   Ouchi Atsuo / ouchi.at...@jp.fujitsu.com
   tel. 03-6424-6612 / ext. 72-60728968
   Service Development Department, Foundation Service Division
   Fujitsu Limited



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Heat-engine fails to start

2015-05-07 Thread Murali B
Hi

I installed Heat on the Juno version.

When I start heat-engine, it fails and I am seeing the error below:

cal/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-05-07 13:06:36.076 10670 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('routing =
oslo.messaging.notify._impl_routing:RoutingDriver') _load_plugins
/usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('test = oslo.messaging.notify._impl_test:TestDriver')
_load_plugins
/usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('messaging =
oslo.messaging.notify._impl_messaging:MessagingDriver') _load_plugins
/usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-05-07 13:06:36.077 10670 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('AWSTemplateFormatVersion.2010-09-09 =
heat.engine.cfn.template:CfnTemplate') _load_plugins
/usr/local/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-05-07 13:06:36.084 10670 CRITICAL heat.engine [-] Could not load
AWSTemplateFormatVersion.2010-09-09: (sqlalchemy-migrate 0.9.6
(/usr/local/lib/python2.7/dist-packages),
Requirement.parse('sqlalchemy-migrate==0.9.1'))


your help is appreciated



Thanks
-Murali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding Joshua Harlow to oslo-core

2015-05-07 Thread Mehdi Abaakouk



I always felt that was the case, so +1 of course

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2015-05-05 16:47, Julien Danjou a écrit :

Hi fellows,

I'd like to propose that we add Joshua Harlow to oslo-core. He is
already maintaining some of the Oslo libraries (taskflow, tooz…) and
he's helping on a lot of other ones for a while now. Let's bring him in
for real!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Neutron QoS (Quality Of Service) update

2015-05-07 Thread Miguel Ángel Ajo
Gal, thank you very much for the update to the list, I believe it’s very 
helpful,
I’ll add some inline notes.

On Thursday, 7 de May de 2015 at 8:51, Gal Sagie wrote:

 Hello All,
  
 I think that the Neutron QoS effort is progressing to a critical point, and I
 asked Miguel if I could post an update on our progress.

 First, I would like to thank Sean and Miguel for running this effort, and
 everyone else that is involved; I personally think it's on the right track.
 However, I would like to see more people involved, especially more experienced
 Neutron members, because I believe we want to make the right decisions and
 learn from past mistakes when making the API design.
  
 Feel free to update in the meeting wiki [1], and the spec review [2]
  
 Topics
  
 API microversioning spec implications [3]
 QoS can benefit from this design; however, some concerns were raised that it
 will only be available in the middle of the L cycle.
 I think a better view is needed of how this aligns with the new QoS design and
 any feedback/recommendation is useful
I guess a strategy here could be: go on with an extension, and translate that into
an experimental API once microversioning is ready; then after one cycle we could
"graduate it" to get versioned.
  
  
 Changes to the QoS API spec: scoping into bandwidth limiting
 At this point the concentration is on the API and implementation
 of bandwidth limiting.
  
 However it is important to keep the design easily extensible for some next 
 steps
 like traffic classification and marking.
  
 Changes to the QoS API spec: modeling of rules (class hierarchy) (Guarantee 
 split out)
 There is a QoSPolicy which is composed of different QoSRules, there is
 a discussion of splitting the rules into different types like QoSTypeRule.
 (This in order to support easy extension of this model by adding new type  
 of rules which extend the possible parameters)
  
 Plugins can then separate optional aspects into separate rules.
 Any feedback on this approach is appreciated.
  
 Discuss multiple API end points (per rule type) vs single

here, the topic name was incorrect: where I said API end points, we meant
URLs or REST resources (thanks Irena for the correction)

  
 In summary this means: in the above model, do we want to support
 /v1/qosrule/..  or   /v1/qostyperule/ APIs?
 I think the consensus right now is that the latter is more flexible.
  
 Miguel is also checking the possibility of using something like:
 /v1/qosrule/type/... kind of parsing
 Feedback is desired here too :)
  
 Traffic Classification considerations
 The idea right now is to extract the TC classification to another data model
 and attach it to rule
 that way no need to repeat same filters for the same kind of traffic.
  
 Of course we need to consider here what it means to update a classifier
 and not to introduce too much dependencies
About this, the intention is not to fully model this, or to include it in the
data model now,
but to try to see how we could do it in future iterations and see if it fits the
current data model
and APIs we're proposing.
  
  
  
 The ingress vs egress differences and issues
 Egress bandwidth limiting is much more useful and better supported.
 There is still doubt about support for ingress bandwidth limiting in OVS; if
 anyone
 knows whether ingress QoS is supported in OVS, we want your feedback :)
 (For example implementing the OF1.3 metering spec)
  
 Thanks all (Miguel, Sean or anyone else, please update this if I forgot
 anything)
  
 [1] https://wiki.openstack.org/wiki/Meetings/QoS
 [2] https://review.openstack.org/#/c/88599/
 [3] https://review.openstack.org/#/c/136760/
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Exception in rpc_dispatcher

2015-05-07 Thread Vikash Kumar
Hi,

   I am getting this error on the agent side. I am getting the same message
twice, one after the other.

2015-05-07 11:39:28.189 11363 ERROR oslo.messaging.rpc.dispatcher
[req-43875dc3-99a9-4803-aba2-5cff22943c2c ] Exception during message
handling: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
134, in _dispatch_and_reply
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
incoming.message))
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
179, in _dispatch
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
localcontext.clear_local_context()
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/localcontext.py, line 55,
in clear_local_context
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
delattr(_STORE, _KEY)
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
AttributeError:
_oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher

-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-07 Thread Pospisil, Radek
Hello,

Regarding "... The networking in OpenStack in general works in such a way so that
connections from VM are allowed to almost anywhere.":
IMO it is defined by the user what networks are accessible from a VM, i.e., there
can be several 'public networks'.

Regarding "There is a difference in direction of who initiates the connection. In case
of murano agent -> rabbit MQ, the connection is initiated from the VM to an OpenStack
service (rabbit). In case of the std.ssh mistral action the direction is the opposite,
from an OpenStack service (mistral) to the ssh server on the VM." and "In Murano
production deployment we use a separate MQ instance so that VMs have no access to
OpenStack MQ":

Yes and no ☺ In case of SSH the direction is obvious – from Mistral to the VM.
But in case of MQ it is nearly the same: both the VM and Mistral are accessing
the MQ – so the direction is Mistral to MQ, and VM to MQ. In this case it is
important what network the MQ is running on – is the MQ running on a VM (managed by
nova), or on an O~S node? In both cases we have to solve how the Neutron network will
be available to the O~S node:

- MQ is on a VM (managed by nova)

  - The VM with the Murano agent has to be on the same network as the MQ, or reach
    it via a router

  - Mistral (and of course the Murano engine) has to be configured to have access
    to the VM with the MQ, e.g., via a floating IP, or manually configured namespaces?

- MQ is on an O~S node

  - The VM with the Murano agent has to be configured to access the 'public network'
    with the MQ

  - Mistral (and the Murano engine) will have access to the MQ (as they are running
    with all O~S nodes)

Gosha: in a production environment, do you have a 'management network' on which the
MQ, the VMs with the Murano agent, the Murano engine and Mistral are all running?

Anyway, I like the idea of using an MQ for the execution of actions more than
direct ssh.

 Regards,
Radek

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Wednesday, May 06, 2015 6:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

Connection direction here is important only in the frame of solving the networking
connectivity problem. The networking in OpenStack in general works in such a way
that connections from a VM are allowed to almost anywhere. In a Murano production
deployment we use a separate MQ instance so that VMs have no access to the OpenStack MQ.
In the sense of who initiates task execution, it is always a Murano service which
publishes tasks (shell script + necessary files) in the MQ so that the agent can
pull them and execute them.
Thanks
Gosha


On Wed, May 6, 2015 at 9:31 AM, Filip Blaha filip.bl...@hp.com wrote:
Hello

one more note on that. There is a difference in the direction of who initiates the
connection. In case of murano agent -> rabbit MQ, the connection is initiated from
the VM to an OpenStack service (rabbit). In case of the std.ssh mistral action the
direction is the opposite, from an OpenStack service (mistral) to the ssh server on
the VM.

Filip


On 05/06/2015 06:00 PM, Pospisil, Radek wrote:
Hello,

I think that the generic question is: can O~S services also be accessible on
Neutron networks, so a VM (created by Nova) can access them? We (Filip and I) were
discussing this today and we did not reach a final decision.
Another example is the Murano agent running on VMs - it connects to RabbitMQ, which
is also accessed by the Murano engine.

   Regards,

Radek

-Original Message-
From: Blaha, Filip
Sent: Wednesday, May 06, 2015 5:43 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action

Hello

We are considering implementing actions on services of a Murano environment
via Mistral workflows. We are considering whether the Mistral std.ssh action could
be used to run some command on an instance. An example of such an action in Murano
could be a restart action on a MySQL DB service.
The Mistral workflow would ssh to the instance running MySQL and run "service
mysql restart". From my point of view, trying to use SSH to access instances
from a Mistral workflow is not a good idea, but I would like to confirm it.

The biggest problem I see there is OpenStack networking. The Mistral service
running on some OpenStack node would not be able to access an instance via its
fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh from the
namespace of its gateway router, e.g. "ip netns exec qrouter-... ssh
cirros@10.0.0.5", but I think it is not good to rely on an
implementation detail of Neutron and use it. In a multinode OpenStack deployment
it could be even more complicated.

In other words, I am asking whether we can use the std.ssh Mistral action to access
instances via ssh on their fixed IPs? I think not, but I would like to confirm
it.

Thanks
Filip

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-07 Thread Filip Blaha
Yes, I agree that direction is important only from the networking point of
view. Usually it is more probable that a VM on a Neutron network will be able
to access an O~S service (VM -> rabbit) than the opposite direction, from an O~S
service to a VM running on a Neutron network (mistral -> VM).


Filip


On 05/06/2015 06:39 PM, Georgy Okrokvertskhov wrote:
Connection direction here is important only in the frame of solving the networking
connectivity problem. The networking in OpenStack in general
works in such a way that connections from a VM are allowed to almost
anywhere. In a Murano production deployment we use a separate MQ instance
so that VMs have no access to the OpenStack MQ.


In the sense of who initiates task execution, it is always a Murano service
which publishes tasks (shell script + necessary files) in the MQ so
that the agent can pull them and execute them.


Thanks
Gosha



On Wed, May 6, 2015 at 9:31 AM, Filip Blaha filip.bl...@hp.com wrote:


Hello

one more note on that. There is a difference in the direction of who
initiates the connection. In case of murano agent -> rabbit MQ, the
connection is initiated from the VM to an OpenStack service (rabbit). In case
of the std.ssh mistral action the direction is the opposite, from an OpenStack
service (mistral) to the ssh server on the VM.

Filip


On 05/06/2015 06:00 PM, Pospisil, Radek wrote:

Hello,

I think that the generic question is: can O~S services
also be accessible on Neutron networks, so a VM (created by Nova)
can access them? We (Filip and I) were discussing this today and
we did not reach a final decision.
Another example is the Murano agent running on VMs - it connects
to RabbitMQ, which is also accessed by the Murano engine.

   Regards,

Radek

-Original Message-
From: Blaha, Filip
Sent: Wednesday, May 06, 2015 5:43 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action

Hello

We are considering implementing actions on services of a
Murano environment via Mistral workflows. We are considering
whether the Mistral std.ssh action could be used to run some
command on an instance. An example of such an action in Murano could
be a restart action on a MySQL DB service.
The Mistral workflow would ssh to the instance running MySQL and
run "service mysql restart". From my point of view, trying to
use SSH to access instances from a Mistral workflow is not a good
idea, but I would like to confirm it.

The biggest problem I see there is OpenStack networking.
The Mistral service running on some OpenStack node would not be
able to access an instance via its fixed IP (e.g. 10.0.0.5) via
SSH. The instance could be accessed via ssh from the namespace of its
gateway router, e.g. "ip netns exec qrouter-... ssh
cirros@10.0.0.5", but I think it is
not good to rely on an implementation detail of Neutron and use
it. In a multinode OpenStack deployment it could be even more
complicated.

In other words, I am asking whether we can use the std.ssh Mistral
action to access instances via ssh on their fixed IPs? I
think not, but I would like to confirm it.

Thanks
Filip


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com/
Tel. +1 650 963 9828
Mob. +1 650 996 3284


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread Sahid Orentino Ferdjaoui
Hi,

The primary point of this expected discussion around asynchronous
communication is to optimize performance by reducing latency.

For instance, the design used in Nova (and probably other projects) allows
asynchronous operations in two ways:

1. When communicating between services
2. When communicating with the database

1 and 2 are close since they use the same API, but I prefer to keep a
distinction here since the high-level layer is not the same.

From the Oslo Messaging point of view we currently have two methods to
invoke an RPC:

  Cast and Call: the first one is non-blocking and will invoke an RPC
without waiting for any response, while the second will block the
process and wait for the response.

The aim is to add a new method which will return, without blocking the
process, an object (let's call it Future) which will provide some basic
methods to wait for and get a response at any time.

The benefit for Nova comes at a higher level:

1. When communicating between services it will not be necessary to block
   the process, and this free time can be used to execute some other
   computations.

  future = rpcapi.invoke_long_process()
 ... do something else here ...
  result = future.get_response()

2. We can build on all of the work previously done with the
   Conductor, and so by updating the framework Objects and Indirection
   API we should take advantage of async operations to the database.

   MyObject = MyClassObject.get_async()
 ... do something else here ...
   MyObject.wait()

   MyObject.foo = bar
   MyObject.save_async()
 ... do something else here ...
   MyObject.wait()
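
For what it is worth, the general shape of the Future pattern can be illustrated
with the standard library, independently of what the final oslo.messaging API
would look like (a rough sketch, not the proposed API):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def long_rpc_call():
        time.sleep(2)              # stands in for a blocking RPC round trip
        return {"result": 42}

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(long_rpc_call)   # returns immediately
        # ... do something else here while the call is in flight ...
        result = future.result()                  # block only when needed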

All of this is just to illustrate, and has to be discussed.

I guess the first job needs to come from Oslo Messaging, so the
question is to know the feeling here, and then from Nova since it will
be the primary consumer of this feature.

https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

Thanks,
s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Jiří Stránský

Hi Dan,

On 7.5.2015 04:32, Dan Prince wrote:

Looking over some of the Puppet pacemaker stuff today. I appreciate all
the hard work going into this effort but I'm not quite happy about all
of the conditionals we are adding to our puppet overcloud_controller.pp
manifest. Specifically it seems that every service will basically have
its resources duplicated for pacemaker and non-pacemaker version of the
controller by checking the $enable_pacemaker variable.


+1



After seeing it play out for a couple services I think I might prefer it
better if we had an entirely separate template for the pacemaker
version of the controller. One easy way to kick off this effort would be
to use the Heat resource registry to enable pacemaker rather than a
parameter.

Something like this:

https://review.openstack.org/#/c/180833/


I have two mild concerns about this approach:

1) We'd duplicate the logic (or at least the inclusion logic) for the
common parts in two places, making it prone for the two .pp variants to
get out of sync. The default switches from "if I want to make a
difference between the two variants, I need to put in a conditional" to
"if I want to *not* make a difference between the two variants, I need
to put this / include this in two places".


2) If we see some other bit emerging in the future, which would be 
optional but at the same time omnipresent in a similar way as 
Pacemaker is, we'll see the same if/else pattern popping up. Using the 
same solution would mean we'd have 4 .pp files (a 2x2 matrix) doing the 
same thing to cover all scenarios. This is a somewhat hypothetical 
concern at this point, but it might become real in the future (?).




If we were to split out the controller into two separate templates I
think it might be appropriate to move a few things into puppet-tripleo
to de-duplicate a bit. Things like the database creation for example.
But probably not all of the services... because we are trying as much as
possible to use the stackforge puppet modules directly (and not our own
composition layer).


I think our restraint from having a composition layer (extracting things 
into puppet-tripleo) is what's behind my concern no. 1 above. I know one 
of the arguments against having a composition layer is that it makes 
things less hackable, but if we could amend puppet modules without 
rebuilding or altering the image, it should mitigate the problem a bit 
[1]. (It's almost a matter that would deserve a separate thread though :) )




I think this split is a good compromise and would probably even speed up
the implementation of the remaining pacemaker features too. And removing
all the pacemaker conditionals we have from the non-pacemaker version
puts us back in a reasonably clean state I think.

Dan



An alternative approach could be something like:

if hiera('step') >= 2 {
  include ::tripleo::mongodb
}

and move all the mongodb related logic to that class and let it deal 
with both pacemaker and non-pacemaker use cases. This would reduce the 
stress on the top-level .pp significantly, and we'd keep things 
contained in logical units. The extracted bits will still have 
conditionals but it's going to be more manageable because the bits will 
be a lot smaller. So this would mean splitting up the manifest per 
service rather than based on pacemaker on/off status. This would require 
more extraction into puppet-tripleo though, so it kinda goes against the 
idea of not having a composition layer. It would also probably consume a 
bit more time to implement initially and be more disruptive to the 
current state of things.


At this point I don't lean strongly towards one or the other solution, I
just want us to have an option to discuss and consider the benefits and
drawbacks of both, so that we can take an informed decision. I think I
need to let this sink in a bit more myself.



Cheers

Jirka

[1] https://review.openstack.org/#/c/179177/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backward compatible changes in how gates work make sure that all PTLs are informed about that

2015-05-07 Thread Joshua Harlow

Boris Pavlovic wrote:

Sean,

Nobody is able to track and know *everything*.

 A friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations.


Doesn't keystone have a service listing? Use that in rally (and
elsewhere?), if keystone had a service listing and each service had an API
discovery ability, there u go, profit! ;)






Best regards,
Boris Pavlovic

On Thu, May 7, 2015 at 8:06 PM, Sean Dague s...@dague.net wrote:

On 05/07/2015 12:51 PM, Boris Pavlovic wrote:
  Hi stackers,
 
  Recently a patch was merged that removes Heat from the list of services
  that are installed by default by DevStack.

  Please next time make sure that all PTLs of all projects in OpenStack
  know about such big non-backward-compatible changes.

  P.S. This change paralyzed work on Rally for 2 days. =(

This should in no way impact gate jobs; they should all be explicitly
setting service lists themselves for their environments. The entire
devstack-gate model is built around that. If they are not, then that is
a bug in how they were built.

This has also been a long time coming, we merged the direction patch for
this in Feb in the FUTURE.rst document.

 -Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about multi-host mode while using nova-network

2015-05-07 Thread Christopher Aedo
The openstack-dev list is primarily intended to be used for OpenStack
development discussions and planning, rather than dealing with
operational/usage questions.  Your best place to start is
http://ask.openstack.org if you can't find the answer in our excellent
docs (http://docs.openstack.org/).

-Christopher

On Thu, May 7, 2015 at 12:08 AM, BYEONG-GI KIM kimbyeon...@gmail.com wrote:
 Hello.

 It seems that this might be quite an outdated question, because this
 is a question about nova-network instead of Neutron.

 I wonder whether VMs located on a compute node, e.g., Compute A, are
 accessible while its nova-network service is down, if other nova-network
 services are running on the other compute nodes, such as Compute B, Compute C, etc.

 Or, does the multi-host mode just provide continuity of the networking service
 by avoiding a single point of failure?

 Thanks in advance!

 Regards,
 Byeong-gi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Chuck Thier
I think most are missing the point a bit.  The question that should really
be asked is, what is right for Swift to continue to scale.  Since the
inception of Openstack, Swift has had to solve for problems of scale that
generally are not shared with the rest of Openstack.

When we first set out to write Swift, we had set, what we thought at the
time were pretty lofty goals for ourselves:

* 100 Billion objects
* 100 Petabytes of data
* 100 K requests/second
* 100 Gb/s throughput

We started with Python figuring that when we hit major bottlenecks, we
would look at other options.  We have been surprised at how far we have
been able to push Python and have met most if not all of the goals above.

As we look toward the future, we realize that we are now looking for how we
will support trillions of objects, 100's of petabytes to exabytes of data,
etc.  We feel that we have finally hit that point that we need more than
incremental improvements, and that we are running out of incremental
improvements that can be made with Python.

What started as a simple experiment by Mike Barton has turned into quite a
significant improvement in performance and builds a base that can be built
on for future improvements.  This wasn't built because it is shiny but out
of direct need, and it is currently being tested with great results on
production workloads.

I applaud the team that has worked on this at Rackspace, and hope the
community can look at the current needs of Swift, and the merits of the
work that has been accomplished, rather than the politics of shiny.

Thanks,

--
Chuck


On Thu, Apr 30, 2015 at 11:45 AM John Dickinson m...@not.mn wrote:

 Swift is a scalable and durable storage engine for storing unstructured
 data. It's been proven time and time again in production in clusters all
 over the world.

 We in the Swift developer community are constantly looking for ways to
 improve the codebase and deliver a better quality codebase to users
 everywhere. During the past year, the Rackspace Cloud Files team has been
 exploring the idea of reimplementing parts of Swift in Go. Yesterday, they
 released some of this code, called hummingbird, for the first time. It's
 been proposed to a feature/hummingbird branch in Swift's source repo.

 https://review.openstack.org/#/c/178851

 I am very excited about this work being in the greater OpenStack Swift
 developer community. If you look at the patch above, you'll see that there
 are various parts of Swift reimplemented in Go. During the next six months
 (i.e. before Tokyo), I would like us to answer this question:

 What advantages does a compiled-language object server bring, and do they
 outweigh the costs of using a different language?

 Of course, there are a ton of things we need to explore on this topic, but
 I'm happy that we'll be doing it in the context of the open community
 instead of behind closed doors. We will have a fishbowl session in
 Vancouver on this topic. I'm looking forward to the discussion.


 --John




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Sean Dague
On 05/07/2015 02:29 PM, Joshua Harlow wrote:
 Boris Pavlovic wrote:
 Sean,

 Nobody is able to track and know *everything*.

 Friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations.
 
 Doesn't keystone have a service listing? Use that in rally (and
 elsewhere?), if keystone had a service and each service had a API
 discovery ability, there u go, profit! ;)

Service listing for test jobs is actually quite dangerous, because then
something can change something about which services are registered, and
you automatically start skipping 30% of your tests because you react
correctly to this change. However, that means the job stopped doing what
you think it should do.

*This has happened multiple times in the past*. And typically days,
weeks, or months go by before someone notices in investigating an
unrelated failure. And then it's days, weeks, or months to dig out of
the regressions introduced.

So... test jobs should be extremely explicit about what they setup and
what they expect.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] interaction between fuel-plugin and fuel-UI

2015-05-07 Thread Samuel Bartel
Vitaly, Simon, thanks for your answers.
In fact, for the cinder multi-backend use case it is more complicated and
closer to Simon's case. For each filer we have several parameters
(hostname/IP, username, password, volume, storage protocol and so on). So I
think we are going to use Simon's approach, slightly modified: a pick list
corresponding to the number of filers to declare, which then displays the
corresponding number of groups of fields and hides the others.

But for sure, JS support for plugins would be a great improvement for
developing complex plugins.


2015-05-07 18:41 GMT+02:00 Vitaly Kramskikh vkramsk...@mirantis.com:

 Samuel,

 There are plans to solve this:

 1) Add a flag/field to control declaration so it can have multiple values:

   ntp_list:
     multiple_values: true
     value:
       - 1.1.1.1
       - 2.2.2.2
     label: NTP server list
     description: List of upstream NTP servers
     type: text

 Now we use one input with comma-separated values to enter DNS and NTP
 servers, which is hacky. This proposal will also solve your issue, but won't
 help for Simon's case (as there are groups of 2 fields).

 2) For complex controls we plan to implement support for JS parts of
 plugins https://blueprints.launchpad.net/fuel/+spec/ui-plugins, so you
 can implement configuration UI of any complexity by providing custom JS.
 repo_setup control in 6.1 is a great example of a complex control.

 For 6.1 I suggest you use the current DNS/NTP approach (comma-separated
 values) or Simon's approach (though I'd use action: hide instead of action:
 disable).


 2015-05-07 17:36 GMT+03:00 Simon Pasquier spasqu...@mirantis.com:

 Hello Samuel,
  As far as I know, this isn't possible unfortunately. For our own needs,
  we ended up adding a fixed-size list with all items but the first one
  disabled. When you enter something in the first input box, it enables the
  second box and so on (see [1]). In any case, this would be a good
  addition...
 BR,
 Simon
 [1]
 https://github.com/stackforge/fuel-plugin-elasticsearch-kibana/blob/master/environment_config.yaml#L21

 On Thu, May 7, 2015 at 3:37 PM, Samuel Bartel 
 samuel.bartel@gmail.com wrote:

 Hi all,



 I am working on two plugins for fuel : logrotate and cinder-netapp (to
 add multibackend feature)

  In these two plugins I face the same problem. Is it possible to have some
  dynamic elements in the environment YAML config that describes the fields
  displayed for the plugin in the UI?

  Let me explain my need. I would like to be able to add additional elements
  by clicking on a “+” button, as with the IP ranges in the network tab, in
  order to be able to:

  - add new log files to manage for logrotate instead of having a static
  list

  - add extra NetApp filers/volumes instead of being able to set up only one
  for the cinder NetApp in a multi-backend scope.

  For the cinder NetApp, for example, I would then be able to access the
  NetApp server hostname with:

 $::fuel_settings[‘cinder_netapp’][0][‘netapp_server_hostname’]  #for the
 first one

 $::fuel_settings[‘cinder_netapp’][1][‘netapp_server_hostname’]  #for the
 second  one

 And so on.



  Can we do that with the current plugin framework?  If not, is it planned to
  add such a feature?



 Regards,


 Samuel


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Juno is completely broken in Trusty + Linux 3.19!

2015-05-07 Thread Martinx - ジェームズ
Guys,

 I just upgraded my Trusty servers, on which I'm running OpenStack Juno, to
Linux 3.19, which is already available in the Proposed repository.

 OpenStack is dead here, no connectivity for the tenants.

 https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1452868

 I appreciate any help!

 It works okay with Linux 3.19.

Best,
Thiago
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Juno is completely broken in Trusty + Linux 3.19!

2015-05-07 Thread Martinx - ジェームズ
I meant, it works okay with Linux 3.16, not 3.19. Sorry...

On Thu, May 7, 2015 at 4:26 PM Martinx - ジェームズ thiagocmarti...@gmail.com
wrote:

 Guys,

  I just upgraded my Trusty servers, that I'm running OpenStack Juno, to
 Linux 3.19, which is already available at Proposed repository.

  OpenStack is dead here, no connectivity for the tenants.

  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1452868

  I appreciate any help!

  It works okay with Linux 3.19.

 Best,
 Thiago

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Sean M. Collins
On Thu, May 07, 2015 at 01:40:53PM EDT, Sean Dague wrote:
 On 05/07/2015 01:37 PM, Boris Pavlovic wrote:
  Sean,
  
  Nobody is able to track and know *everything*.
  
  Friendly reminder that Heat is going to be removed and not installed by
  default would help to avoid such situations. 
 
 Sure, but that misses the first point, that gate jobs should really be
 explicit about the services they run.
 
 It's a vast surprise that anything running in our gate just runs with
 defaults.
 
 So I would suggest now is an excellent time to make the rally jobs work
 like the grenade and tempest jobs and be explicit about such things.
 
   -Sean


I'd like to +1 this - that gate jobs should also be explicit about their
configuration. For example, on the network side I've been listening to
my own advice and making the Ironic job explicitly set the OVS agent for
Neutron.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?

2015-05-07 Thread Paul Michali
Sridar R is planning on having a proposal for DM VPN ready (today?) that he
wants to propose for Liberty release. We're going to have a VPN meeting
next Tuesday (per his request), to discuss this more.

Regards,

PCM

On Thu, May 7, 2015 at 10:58 AM Mathieu Rohon mathieu.ro...@gmail.com
wrote:

 Hi,

 On Wed, May 6, 2015 at 8:42 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 I think Paul is correctly scoping this discussion in terms of APIs and
 management layer.
 For instance, it is true that dynamic routing support, and BGP support
 might be a prerequisite for BGP VPNs, but it should be possible to have at
 least an idea of how user and admin APIs for this VPN use case should look
 like.


 the spec [4] is mainly focusing on the API and data model. Of course there
 might be some overlap with BGP support and/or dynamic routing support, but
 this is more about implementation details from my POV.
 We hope we'll see some good progress on the API during reviews and the
 design summit, since it seems to suit several players.


 In particular the discussion on service chaining is a bit out of scope
 here. I'd just note that [1] seems to have a lot of overlap with
 group-based-policies [2], and that it appears to be a service that consumes
 Neutron rather than an extension to it.

 The current VPN service was conceived to be fairly generic. IPSEC VPN is
 the only implemented one, but SSL VPN and BGP VPN were on the map as far as
 I recall.
  Personally, having a lot of different VPN APIs is not ideal for users. As
  a user, I probably don't even care about configuring a VPN. What is
  important for me is to get L2 or L3 access to a network in the cloud;
  therefore I would seek common abstractions that might allow a user to
  configure a VPN service using the same APIs. Obviously there will then be
  parameters which are specific to the particular class of VPN being
  created.


 I listened to several contributors in the area in the past, and there are
 plenty of opinions across a spectrum which goes from total abstraction
 (just expose edges at the API layer) to what could be tantamount to a
 RESTful configuration of a VPN appliance. I am not in a position such to
 prescribe what direction the community should take; so, for instance, if
 the people working on XXX VPN believe the best way forward for them is to
 start a new project, so be it.


  that's what BGP VPN and Edge VPN did by creating their own stackforge
  projects. But I think the idea was more about sharing the framework upstream
  after failing to find a consensus during design summits, rather than
  promoting the fact that this has nothing to do with other VPN stuff in
  Neutron.



  The other approach would obviously be to build onto the current APIs. The
  only way the Neutron API layer provides to do that is to extend an
  extension. This sounds terrible, and it is indeed terrible. There is a
  proposal for moving toward versioned APIs [3], but until that proposal is
  approved and implemented, extensions are the only thing we have.


 Advanced services, such as VPNaaS, are out of the scope of the current
 proposal [3]. It might take a while before the VPNaaS team moves to the
 micro-versionning framework.


 From an API perspective the mechanism would be simpler:
 1 - declare the extension, and implement get_required_extension to put
 'vpnaas' as a requirement
 2 - implement a DB mixin for it providing basic CRUD operations
  3 - add it to the VPN service plugin and add its alias to
  'supported_extension_aliases' (steps 2 and 3 can be merged if you wish not
  to have a mixin)

 What might be a bit more challenging is defining how this reflects onto
 VPN. Ideally you would have a driver for every VPN type you support, and
 then have a little dispatcher to route the API call to the appropriate
 driver according to the VPN type.
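 As a rough illustration of that dispatcher idea (plain Python, not actual
 Neutron code; the driver classes and the "vpn_type" attribute are made-up
 names for the sketch):

     class IPsecDriver(object):
         def create_vpn_connection(self, context, conn):
             return {"id": conn["id"], "type": "ipsec", "status": "ACTIVE"}


     class BgpVpnDriver(object):
         def create_vpn_connection(self, context, conn):
             return {"id": conn["id"], "type": "bgpvpn", "status": "ACTIVE"}


     class VPNServicePlugin(object):
         # In a real plugin this attribute is what the Neutron extension
         # manager reads to know which API extensions are supported.
         supported_extension_aliases = ["vpnaas", "bgpvpn"]

         def __init__(self):
             # One driver per VPN type; the dispatcher below picks the
             # right one based on an attribute of the API resource.
             self._drivers = {"ipsec": IPsecDriver(),
                              "bgpvpn": BgpVpnDriver()}

         def create_vpn_connection(self, context, conn):
             driver = self._drivers[conn.get("vpn_type", "ipsec")]
             return driver.create_vpn_connection(context, conn)


     if __name__ == "__main__":
         plugin = VPNServicePlugin()
         print(plugin.create_vpn_connection(
             None, {"id": "c1", "vpn_type": "bgpvpn"}))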

 Salvatore

 [1]
 https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
 [2] https://wiki.openstack.org/wiki/GroupBasedPolicy
 [3] https://review.openstack.org/#/c/136760


 [4]  https://review.openstack.org/#/c/177740/


 On 6 May 2015 at 07:14, Vikram Choudhary vikram.choudh...@huawei.com
 wrote:

  Hi Paul,



 Thanks for starting this mail thread.  We are also eyeing support for
 MPBGP in neutron and would like to actively participate in this discussion.

 Please let me know about the IRC channels which we will be following for
 this discussion.



 Currently, I am following below BP’s for this work.

 https://blueprints.launchpad.net/neutron/+spec/edge-vpn

 https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing

 https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework


 https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol



  Moreover, a similar kind of work is being headed by Cathy for defining
  an intent framework which can be extended for various use cases. Currently it
  will be leveraged for SFC but I feel the same can be 

Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Boris Pavlovic

 So... test jobs should be extremely explicit about what they setup and
 what they expect.


+2

Best regards,
Boris Pavlovic

On Thu, May 7, 2015 at 9:44 PM, Sean Dague s...@dague.net wrote:

 On 05/07/2015 02:29 PM, Joshua Harlow wrote:
  Boris Pavlovic wrote:
  Sean,
 
  Nobody is able to track and know *everything*.
 
  Friendly reminder that Heat is going to be removed and not installed by
  default would help to avoid such situations.
 
  Doesn't keystone have a service listing? Use that in rally (and
  elsewhere?), if keystone had a service and each service had a API
  discovery ability, there u go, profit! ;)

 Service listing for test jobs is actually quite dangerous, because then
 something can change something about which services are registered, and
 you automatically start skipping 30% of your tests because you react
 correctly to this change. However, that means the job stopped doing what
 you think it should do.

 *This has happened multiple times in the past*. And typically days,
 weeks, or months go by before someone notices in investigating an
 unrelated failure. And then it's days, weeks, or months to dig out of
 the regressions introduced.

 So... test jobs should be extremely explicit about what they setup and
 what they expect.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] removing Angus Salkeld and Nick Barcet from ceilometer-core‏

2015-05-07 Thread gordon chung
hi folks,
as both have moved on to other endeavours, today we will be removing two 
founding contributors of Ceilometer from the core team. thanks to both of you 
for guiding the project in its early days!
cheers,
gord
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Morgan Fainberg


 On May 7, 2015, at 10:40, Sean Dague s...@dague.net wrote:
 
 On 05/07/2015 01:37 PM, Boris Pavlovic wrote:
 Sean,
 
 Nobody is able to track and know *everything*.
 
 Friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations.
 
 Sure, but that misses the first point, that gate jobs should really be
 explicit about the services they run.
 
 It's a vast surprise that anything running in our gate just runs with
 defaults.
 
 So I would suggest now is an excellent time to make the rally jobs work
 like the grenade and tempest jobs and be explicit about such things.

Huge +1. Gate jobs absolutely should be explicit. Guessing what is being tested 
is no fun. 

-Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deleting 'default' security group for deleted tenant with nova-net

2015-05-07 Thread Morgan Fainberg
Hi Chris,

So there is no rule saying you can't ask keystone. However, we do emit events 
(audit, needs to be configured) to the message bus when tenants (or in v3 
parlance, projects) are deleted. This allows nova to mark things in a way to 
cleanup / do direct cleanup. 

There have been a few conversations about this, but we haven't made significant 
progress (as far as I know) on this topic. 

The best solution proposal (iirc) was that we need to create a listener or 
similar that the other services could hook a callback into, which will do the 
cleanup directly rather than require blocking the main API for the cleanup. 

Keystone is open to these improvements and ideas. It just doesn't scale if 
every action from every service has to ask keystone whether a thing still exists. 
Let's make sure we don't start using a pattern that will cause significant 
issues down the road.  

--Morgan

Sent via mobile

 On May 7, 2015, at 09:37, Chris St. Pierre chris.a.st.pie...@gmail.com 
 wrote:
 
 This bug recently came to my attention: 
 https://bugs.launchpad.net/nova/+bug/1241587
 
 I've reopened it, because it is an actual problem, especially for people 
 using nova-network and Rally, which creates and deletes tons of tenants.
 
 The obvious simple solution is to allow deletion of the 'default' security 
 group if it is assigned to a tenant that doesn't exist, but I wasn't sure 
 what the most acceptable way to do that within Nova would be. Is it 
 acceptable to perform a call to the Keystone API to check for the tenant? Or 
 is there another, better way?
 
 Alternatively, is there a better way to tackle the problem altogether?
 
 Thanks!
 
 -- 
 Chris St. Pierre
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deleting 'default' security group for deleted tenant with nova-net

2015-05-07 Thread Chris St. Pierre
Jinkies, that sounds like *work*. Got any links to docs I can start diving
into? In particular, keystone audit events and anything that might be handy
about the solution proposal you mention. Keystone is mostly foreign
territory to me so some learning will be in order.

Thanks!

On Thu, May 7, 2015 at 12:49 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hi Chris,

 So there is no rule saying you can't ask keystone. However, we do emit
 events (audit, needs to be configured) to the message bus when tenants (or
 in v3 parlance, projects) are deleted. This allows nova to mark things in a
 way to cleanup / do direct cleanup.

 There have been a few conversations about this, but we haven't made
 significant progress (as far as I know) on this topic.

 The best solution proposal (iirc) was that we need to creat a listener or
 similar that the other services could hook a callback to that will do the
 cleanup directly rather than require blocking the main API for the cleanup.

 Keystone is open to these improvements and ideas. It just doesn't scale of
 every action from every service has to ask keystone if thing still
 exists. Let's make sure we don't start using a pattern that will cause
 significant issues down the road.

 --Morgan

 Sent via mobile

 On May 7, 2015, at 09:37, Chris St. Pierre chris.a.st.pie...@gmail.com
 wrote:

 This bug recently came to my attention:
 https://bugs.launchpad.net/nova/+bug/1241587

 I've reopened it, because it is an actual problem, especially for people
 using nova-network and Rally, which creates and deletes tons of tenants.

 The obvious simple solution is to allow deletion of the 'default' security
 group if it is assigned to a tenant that doesn't exist, but I wasn't sure
 what the most acceptable way to do that within Nova would be. Is it
 acceptable to perform a call to the Keystone API to check for the tenant?
 Or is there another, better way?

 Alternatively, is there a better way to tackle the problem altogether?

 Thanks!

 --
 Chris St. Pierre

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Boris Pavlovic
Joshua,


Doesn't keystone have a service listing? Use that in rally (and
 elsewhere?), if keystone had a service and each service had a API discovery
 ability, there u go, profit! ;)


Exactly that happened. We were running benchmarks against Heat, and Rally
task validation started failing saying that there is no heat service =)
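
A pre-flight check along these lines is enough to make the expectation
explicit (just a sketch assuming python-keystoneclient; the auth URL,
credentials and expected service list are placeholders):

    from keystoneclient.auth.identity import v2
    from keystoneclient import exceptions
    from keystoneclient import session

    auth = v2.Password(auth_url="http://keystone:5000/v2.0",
                       username="admin", password="secret",
                       tenant_name="admin")
    sess = session.Session(auth=auth)

    # Services the job is configured to exercise; fail loudly if any is
    # missing from the catalog instead of silently skipping scenarios.
    EXPECTED = {"orchestration": "heat", "compute": "nova"}

    for service_type, name in EXPECTED.items():
        try:
            sess.get_endpoint(service_type=service_type)
        except exceptions.EndpointNotFound:
            raise SystemExit("%s (%s) is not deployed; the job definition "
                             "and the devstack config disagree" %
                             (name, service_type))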

Best regards,
Boris Pavlovic

On Thu, May 7, 2015 at 9:29 PM, Joshua Harlow harlo...@outlook.com wrote:

 Boris Pavlovic wrote:

 Sean,

 Nobody is able to track and know *everything*.

 Friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations.


 Doesn't keystone have a service listing? Use that in rally (and
 elsewhere?), if keystone had a service and each service had a API discovery
 ability, there u go, profit! ;)




 Best regards,
 Boris Pavlovic

 On Thu, May 7, 2015 at 8:06 PM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:

 On 05/07/2015 12:51 PM, Boris Pavlovic wrote:
   Hi stackers,
  
   Recently was merged patch that removes Heat from list of service
 that
   are installed by default DevStack
  
   Please next time make sure that all PTL of all projects in
 OpenStack
   know about such big not backward compatible changes.
  
   P.S This change paralyzed work on Rally for 2 days. =(

 This should in no way impact gate jobs, they should all be explicitly
 setting service lists themselves for their environments. The ensure
 devstack-gate model is built around that. If they are not, then that
 is
 a bug in how they were built.

 This has also been a long time coming, we merged the direction patch
 for
 this in Feb in the FUTURE.rst document.

  -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Boris Pavlovic
Joshua,


Makes sense, perhaps all the test (and/or test-like) frameworks could share
 some code + common config that does this, seems to be something simple (and
 something that all could use for pre-testing validation of all the expected
 services being alive/active/up/responding...)?


In Rally this is part of the life cycle of running a task.
I'm not sure that it is possible to share it, because it is tightly related to
checking what you wrote in the task. =)


Best regards,
Boris Pavlovic

On Thu, May 7, 2015 at 9:54 PM, Joshua Harlow harlo...@outlook.com wrote:

 Sean Dague wrote:

 On 05/07/2015 02:29 PM, Joshua Harlow wrote:

 Boris Pavlovic wrote:

 Sean,

 Nobody is able to track and know *everything*.

 Friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations.

 Doesn't keystone have a service listing? Use that in rally (and
 elsewhere?), if keystone had a service and each service had a API
 discovery ability, there u go, profit! ;)


 Service listing for test jobs is actually quite dangerous, because then
 something can change something about which services are registered, and
 you automatically start skipping 30% of your tests because you react
 correctly to this change. However, that means the job stopped doing what
 you think it should do.

 *This has happened multiple times in the past*. And typically days,
 weeks, or months go by before someone notices in investigating an
 unrelated failure. And then it's days, weeks, or months to dig out of
 the regressions introduced.

 So... test jobs should be extremely explicit about what they setup and
 what they expect.


 Makes sense, perhaps all the test (and/or test-like) frameworks could
 share some code + common config that does this, seems to be something
 simple (and something that all could use for pre-testing validation of all
 the expected services being alive/active/up/responding...)?

 ^^ Just an idear,

 -Josh


 -Sean


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-07 Thread Jay Reslock
Hi,
This is my first mail to the group.  I hope I set the subject correctly and
that this hasn't been asked already.  I searched archives and did not see
this question asked or answered previously.

I am working on a client thing that uses the python-keystoneclient and
python-heatclient api bindings to set up an authenticated session and then
use that session to talk to the heat service.  This doesn't work for heat
but does work for other services such as nova and sahara.  Is this because
sessions aren't supported in the heatclient api yet?

sample code:

https://gist.github.com/jreslock/a525abdcce53ca0492a7

I'm using fabric to define tasks so I can call them via another tool.  When
I run the task I get:

TypeError: Client() takes at least 1 argument (0 given)

The documentation does not say anything about being able to pass a session to
the heatclient, but the other clients seem to work. I just want to know if this
is intended/expected behavior or not.
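
For reference, the pattern being attempted is roughly the following (a
simplified sketch, not the exact gist; it assumes a python-heatclient
release that accepts session=, and uses placeholder credentials/URLs):

    from keystoneclient.auth.identity import v3
    from keystoneclient import session
    from heatclient import client as heat_client

    auth = v3.Password(auth_url="http://keystone:5000/v3",
                       username="demo", password="secret",
                       project_name="demo",
                       user_domain_id="default",
                       project_domain_id="default")
    sess = session.Session(auth=auth)

    # Note the explicit '1' (orchestration API version) as the first
    # positional argument; omitting it is what the
    # "TypeError: Client() takes at least 1 argument (0 given)" points at.
    heat = heat_client.Client('1', session=sess,
                              service_type='orchestration')

    for stack in heat.stacks.list():
        print(stack.stack_name, stack.stack_status)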

-Jason
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Sean Dague
On 05/07/2015 01:37 PM, Boris Pavlovic wrote:
 Sean,
 
 Nobody is able to track and know *everything*.
 
 Friendly reminder that Heat is going to be removed and not installed by
 default would help to avoid such situations. 

Sure, but that misses the first point, that gate jobs should really be
explicit about the services they run.

It's a vast surprise that anything running in our gate just runs with
defaults.

So I would suggest now is an excellent time to make the rally jobs work
like the grenade and tempest jobs and be explicit about such things.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deleting 'default' security group for deleted tenant with nova-net

2015-05-07 Thread Morgan Fainberg
On May 7, 2015 at 11:21:00 AM, Chris St. Pierre (chris.a.st.pie...@gmail.com) 
wrote:
Jinkies, that sounds like *work*. Got any links to docs I can start diving 
into? In particular, keystone audit events and anything that might be handy 
about the solution proposal you mention. Keystone is mostly foreign territory 
to me so some learning will be in order.


The event notifications are documented here: 
http://docs.openstack.org/developer/keystone/event_notifications.html
As to the rest of the topic, there are a few threads in the past on the ML such 
as http://lists.openstack.org/pipermail/openstack-dev/2015-February/055801.html

I don’t think we’ve really gotten much further at this point than that.
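
A bare-bones sketch of such a listener (assumptions: oslo.messaging, the
default 'notifications' topic, and the 'identity.project.deleted' event type
keystone emits when notifications are enabled; the cleanup function is a
made-up placeholder, not nova code):

    from oslo_config import cfg
    import oslo_messaging


    class ProjectDeletedEndpoint(object):
        # oslo.messaging invokes info() for INFO-priority notifications.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'identity.project.deleted':
                project_id = payload.get('resource_info')
                cleanup_project(project_id)


    def cleanup_project(project_id):
        # Placeholder for the real work, e.g. deleting the project's
        # default security group in nova-network.
        print("cleaning up resources for deleted project %s" % project_id)


    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [ProjectDeletedEndpoint()])
    listener.start()
    listener.wait()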

Thanks!

On Thu, May 7, 2015 at 12:49 PM, Morgan Fainberg morgan.fainb...@gmail.com 
wrote:
Hi Chris,

So there is no rule saying you can't ask keystone. However, we do emit events 
(audit, needs to be configured) to the message bus when tenants (or in v3 
parlance, projects) are deleted. This allows nova to mark things in a way to 
cleanup / do direct cleanup. 

There have been a few conversations about this, but we haven't made significant 
progress (as far as I know) on this topic. 

The best solution proposal (iirc) was that we need to creat a listener or 
similar that the other services could hook a callback to that will do the 
cleanup directly rather than require blocking the main API for the cleanup. 

Keystone is open to these improvements and ideas. It just doesn't scale of 
every action from every service has to ask keystone if thing still exists. 
Let's make sure we don't start using a pattern that will cause significant 
issues down the road.  

--Morgan

Sent via mobile

On May 7, 2015, at 09:37, Chris St. Pierre chris.a.st.pie...@gmail.com wrote:

This bug recently came to my attention: 
https://bugs.launchpad.net/nova/+bug/1241587

I've reopened it, because it is an actual problem, especially for people using 
nova-network and Rally, which creates and deletes tons of tenants.

The obvious simple solution is to allow deletion of the 'default' security 
group if it is assigned to a tenant that doesn't exist, but I wasn't sure what 
the most acceptable way to do that within Nova would be. Is it acceptable to 
perform a call to the Keystone API to check for the tenant? Or is there 
another, better way?

Alternatively, is there a better way to tackle the problem altogether?

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Boris Pavlovic
Sean,

Thank you for the advice. We are going to fix the jobs ASAP.

Here is the patch: https://review.openstack.org/#/c/181088/
But it seems like it's not ready yet.

Best regards,
Boris Pavlovic



On Thu, May 7, 2015 at 8:40 PM, Sean Dague s...@dague.net wrote:

 On 05/07/2015 01:37 PM, Boris Pavlovic wrote:
  Sean,
 
  Nobody is able to track and know *everything*.
 
  Friendly reminder that Heat is going to be removed and not installed by
  default would help to avoid such situations.

 Sure, but that misses the first point, that gate jobs should really be
 explicit about the services they run.

 It's a vast surprise that anything running in our gate just runs with
 defaults.

 So I would suggest now is an excellent time to make the rally jobs work
 like the grenade and tempest jobs and be explicit about such things.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Joshua Harlow

Sean Dague wrote:

On 05/07/2015 02:29 PM, Joshua Harlow wrote:

Boris Pavlovic wrote:

Sean,

Nobody is able to track and know *everything*.

Friendly reminder that Heat is going to be removed and not installed by
default would help to avoid such situations.

Doesn't keystone have a service listing? Use that in rally (and
elsewhere?), if keystone had a service and each service had a API
discovery ability, there u go, profit! ;)


Service listing for test jobs is actually quite dangerous, because then
something can change something about which services are registered, and
you automatically start skipping 30% of your tests because you react
correctly to this change. However, that means the job stopped doing what
you think it should do.

*This has happened multiple times in the past*. And typically days,
weeks, or months go by before someone notices in investigating an
unrelated failure. And then it's days, weeks, or months to dig out of
the regressions introduced.

So... test jobs should be extremely explicit about what they setup and
what they expect.


Makes sense, perhaps all the test (and/or test-like) frameworks could 
share some code + common config that does this, seems to be something 
simple (and something that all could use for pre-testing validation of 
all the expected services being alive/active/up/responding...)?


^^ Just an idear,

-Josh



-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-07 Thread Georgy Okrokvertskhov
Yes, RabbitMQ is kind of shared. Each VM gets its own queue, which is
dynamically created in the MQ when the application is being deployed.
Technically we can create separate MQ users and virtual hosts for each VM,
but this is overkill for now. So by default it is just a separate queue with
a randomly generated name.

If you want more protection you will have to change Murano's default
behaviour by adding a new vhost for each tenant.
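
A rough sketch of what that could look like via RabbitMQ's HTTP management
API (not Murano code; host, credentials and the permission scheme are
assumptions for illustration only):

    import requests

    RABBIT_API = "http://rabbit-host:15672/api"
    ADMIN = ("admin", "admin-password")


    def ensure_tenant_vhost(tenant_id, agent_user, agent_password):
        vhost = "murano-%s" % tenant_id
        # Create the vhost (PUT is idempotent).
        requests.put("%s/vhosts/%s" % (RABBIT_API, vhost),
                     auth=ADMIN).raise_for_status()
        # Create the user the agents of this tenant will connect as.
        requests.put("%s/users/%s" % (RABBIT_API, agent_user),
                     json={"password": agent_password, "tags": ""},
                     auth=ADMIN).raise_for_status()
        # Restrict that user to its own vhost only.
        requests.put("%s/permissions/%s/%s" % (RABBIT_API, vhost, agent_user),
                     json={"configure": ".*", "write": ".*", "read": ".*"},
                     auth=ADMIN).raise_for_status()
        return vhost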

Thanks
Gosha

On Thu, May 7, 2015 at 10:26 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  So... rabbit is not multitenant, I think. You share a rabbit across
 multiple tenants' VMs? How do you protect one tenant's VMs from getting
 commands sent to them by another tenant?

 Thanks,
 Kevin
  --
 *From:* Georgy Okrokvertskhov [gokrokvertsk...@mirantis.com]
 *Sent:* Thursday, May 07, 2015 9:18 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

 Hi,

  When we use Murano in production there is an MQ service which is running
  on the OpenStack controllers but listens on the public interface. It means
  that both Murano, which is running on the OpenStack controllers, and the
  agent on the VMs have access to this MQ via the external (public) network.
   When Murano creates a new deployment it actually deploys a private
  network and attaches it to a router which acts as a gateway to the external
  network. So it is a specific application deployment topology which allows
  VMs to communicate with the MQ via the external network.

  Thanks
  Gosha

 On Thu, May 7, 2015 at 1:28 AM, Filip Blaha filip.bl...@hp.com wrote:

  yes. I agree that direction is important only from the networking point of
 view. Usually it is more probable that a VM on a neutron network will be able
 to access an O~S service (VM -- rabbit) than the opposite direction, from an
 O~S service to a VM running on a neutron network (mistral -- VM).

 Filip



 On 05/06/2015 06:39 PM, Georgy Okrokvertskhov wrote:

    Connection direction here is important only in the frame of solving the
  networking connectivity problem. Networking in OpenStack generally works
  in such a way that connections from a VM are allowed to almost anywhere.
  In a Murano production deployment we use a separate MQ instance so that VMs
  have no access to the OpenStack MQ.

   In the sense of who initiates task execution, it is always the Murano
  service which publishes tasks (shell script + necessary files) to the MQ so
  that the agent can pull them and execute them.

  Thanks
  Gosha



 On Wed, May 6, 2015 at 9:31 AM, Filip Blaha filip.bl...@hp.com wrote:

 Hello

  one more note on that. There is a difference in the direction of who
  initiates the connection. In the case of murano agent -- rabbit MQ, the
  connection is initiated from the VM to the openstack service (rabbit). In
  the case of the std.ssh mistral action, the direction is the opposite: from
  the openstack service (mistral) to the ssh server on the VM.

 Filip


 On 05/06/2015 06:00 PM, Pospisil, Radek wrote:

 Hello,

  I think that the generic question is - can O~S services also be
  accessible on Neutron networks, so a VM (created by Nova) can access them?
  We (Filip and I) were discussing this today and did not make a final
  decision.
 Another example is Murano agent running on VMs - it connects to
 RabbitMQ which is also accessed by Murano engine

Regards,

 Radek

 -Original Message-
 From: Blaha, Filip
 Sent: Wednesday, May 06, 2015 5:43 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action

 Hello

  We are considering implementing actions on services of a murano
  environment via mistral workflows. We are considering whether the mistral
  std.ssh action could be used to run some command on an instance. An example
  of such an action in murano could be a restart action on the MySQL DB
  service. A Mistral workflow would ssh to the instance running MySQL and run
  service mysql restart. From my point of view, trying to use SSH to access
  instances from a mistral workflow is not a good idea, but I would like to
  confirm it.

  The biggest problem I see there is openstack networking. A Mistral
  service running on some openstack node would not be able to access an
  instance via its fixed IP (e.g. 10.0.0.5) over SSH. An instance could be
  accessed via ssh from the namespace of its gateway router, e.g. ip netns
  exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on
  an implementation detail of neutron and use it. In a multinode openstack
  deployment it could be even more complicated.

 In other words I am asking whether we can use std.ssh mistral action to
 access instances via ssh on theirs fixed IPs? I think no but I would like
 to confirm it.

 Thanks
 Filip


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 

Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Matt Riedemann



On 5/6/2015 7:02 AM, Chen CH Ji wrote:

Hi
In order to work on [1], nova needs to know what kinds of
exceptions are raised when using cinderclient so that it can handle them like
[2] did.
In that case we wouldn't need to distinguish the error
case based on string comparison; it's more accurate and less error prone.
Is anyone doing this, or are there any other methods I can use to
catch cinder-specific exceptions in nova? Thanks


[1] https://bugs.launchpad.net/nova/+bug/1450658
[2]
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is there anything preventing us from adding a more specific exception to 
cinderclient and then once that's in and released, we can pin the 
minimum version of cinderclient in global-requirements so nova can 
safely use it?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Matt Riedemann



On 5/7/2015 3:21 PM, Chen CH Ji wrote:

no, I only want to confirm whether the cinder folks are doing this or there
are already tricks that can be used, before submitting the change ... thanks

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


From: Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date: 05/07/2015 10:12 PM
Subject: Re: [openstack-dev] [cinder][nova] Question on Cinder client
exception handling







On 5/6/2015 7:02 AM, Chen CH Ji wrote:
  Hi
  In order to work on [1] , nova need to know what kind of
  exception are raised when using cinderclient so that it can handle like
  [2] did?
  In this case, we don't need to distinguish the error
  case based on string compare , it's more accurate and less error leading
  Anyone is doing it or any other methods I can use to
  catch cinder specified  exception in nova? Thanks
 
 
  [1] https://bugs.launchpad.net/nova/+bug/1450658
  [2]
 
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Is there anything preventing us from adding a more specific exception to
cinderclient and then once that's in and released, we can pin the
minimum version of cinderclient in global-requirements so nova can
safely use it?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I added some notes to the bug after looking into the cinder code.  I 
think this would actually be a series of changes if you want something 
more specific than the 500 you're getting back from the cinder API 
today: cinder to raise a more specific error, cinderclient to translate 
that to a specific exception, and then nova to handle that.


I'd probably just go with a change to nova to handle the 500 from cinder 
and not completely puke and orphan the instance.
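
Something along these lines (a rough sketch only; the nova-side function and
exception names are made up, and the only real API assumed is cinderclient's
ClientException with its 'code' attribute):

    from cinderclient import exceptions as cinder_exception


    class VolumeAttachFailed(Exception):
        def __init__(self, volume_id, reason):
            super(VolumeAttachFailed, self).__init__(
                "Attach of volume %s failed: %s" % (volume_id, reason))


    def attach_volume(cinder, instance, volume_id, mountpoint):
        try:
            cinder.volumes.attach(volume_id, instance['uuid'], mountpoint)
        except cinder_exception.ClientException as exc:
            # Today a bad/busy volume can surface as a plain 500 from the
            # cinder API; turn it into a clean attach failure instead of
            # letting the exception escape and orphan the instance.
            if getattr(exc, 'code', None) == 500:
                raise VolumeAttachFailed(volume_id=volume_id,
                                         reason=str(exc))
            raise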


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Cross-project coordination

2015-05-07 Thread Brent Eagles
Hi,

On Thu, May 07, 2015 at 10:03:30PM +, Sean M. Collins wrote:
 On Wed, Apr 22, 2015 at 06:41:58PM EDT, Jay Pipes wrote:
  Agreed. I'm hoping that someone in the Nova community -- note, this does 
  not need to be a Nova core contributor -- can step up to the plate and 
  serve in this critical role.
 
 Hi,
 
 I've put together a section on the wiki,
 
 https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons
 
 We still need a Nova liaison to sign up, to help me with the Nova/Neutron
 cross project effort. If you are interested, please reply and replace
 the Volunteer Needed sections in the table!

I volunteer! I've modified the wiki accordingly.

Cheers,

Brent


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-07 Thread Andrew Beekhof

 On 5 May 2015, at 1:19 pm, Zhou Zheng Sheng / 周征晟 zhengsh...@awcloud.com 
 wrote:
 
 Thank you Andrew.
 
 on 2015/05/05 08:03, Andrew Beekhof wrote:
 On 28 Apr 2015, at 11:15 pm, Bogdan Dobrelya bdobre...@mirantis.com wrote:
 
 Hello,
 Hello, Zhou
 
 I using Fuel 6.0.1 and find that RabbitMQ recover time is long after
 power failure. I have a running HA environment, then I reset power of
 all the machines at the same time. I observe that after reboot it
 usually takes 10 minutes for RabittMQ cluster to appear running
 master-slave mode in pacemaker. If I power off all the 3 controllers and
 only start 2 of them, the downtime sometimes can be as long as 20 minutes.
 Yes, this is a known issue [0]. Note, there were many bugfixes, like
 [1],[2],[3], merged for MQ OCF script, so you may want to try to
 backport them as well by the following guide [4]
 
 [0] https://bugs.launchpad.net/fuel/+bug/1432603
 [1] https://review.openstack.org/#/c/175460/
 [2] https://review.openstack.org/#/c/175457/
 [3] https://review.openstack.org/#/c/175371/
 [4] https://review.openstack.org/#/c/170476/
 Is there a reason you’re using a custom OCF script instead of the 
 upstream[a] one?
 Please have a chat with David (the maintainer, in CC) if there is something 
 you believe is wrong with it.
 
 [a] 
 https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
 
 I'm using the OCF script from the Fuel project, specifically from the
 6.0 stable branch [alpha].

Ah, I'm still learning who is who... I thought you were part of that project
:-) 

 
 Comparing with upstream OCF code, the main difference is that Fuel
 RabbitMQ OCF is a master-slave resource. Fuel RabbitMQ OCF does more
 bookkeeping, for example, blocking client access when RabbitMQ cluster
 is not ready. I beleive the upstream OCF should be OK to use as well
 after I read the code, but it might not fit into the Fuel project. As
 far as I test, the Fuel OCF script is good except sometimes the full
 reassemble time is long, and as I find out, it is mostly because the
 Fuel MySQL Galera OCF script keeps pacemaker from promoting RabbitMQ
 resource, as I mentioned in the previous emails.
 
 Maybe Vladimir and Sergey can give us more insight on why Fuel needs a
 master-slave RabbitMQ.

That would be good to know.
Browsing the agent, promote seems to be a no-op if rabbit is already running.

 I see Vladimir and Sergey works on the original
 Fuel blueprint RabbitMQ cluster [beta].
 
 [alpha]
 https://github.com/stackforge/fuel-library/blob/stable/6.0/deployment/puppet/nova/files/ocf/rabbitmq
 [beta]
 https://blueprints.launchpad.net/fuel/+spec/rabbitmq-cluster-controlled-by-pacemaker
 
 I have a little investigation and find out there are some possible causes.
 
 1. MySQL Recovery Takes Too Long [1] and Blocking RabbitMQ Clustering in
 Pacemaker
 
 The pacemaker resource p_mysql start timeout is set to 475s. Sometimes
 MySQL-wss fails to start after power failure, and pacemaker would wait
 475s before retry starting it. The problem is that pacemaker divides
 resource state transitions into batches. Since RabbitMQ is master-slave
 resource, I assume that starting all the slaves and promoting master are
 put into two different batches. If unfortunately starting all RabbitMQ
 slaves are put in the same batch as MySQL starting, even if RabbitMQ
 slaves and all other resources are ready, pacemaker will not continue
 but just wait for MySQL timeout.
  Could you please elaborate on what the same/different batches are for MQ
  and DB? Note, there is an MQ clustering logic flow chart available here
  [5] and we're planning to release a dedicated technical bulletin for this.
 
 [5] http://goo.gl/PPNrw7
 
 I can re-produce this by hard powering off all the controllers and start
 them again. It's more likely to trigger MySQL failure in this way. Then
 I observe that if there is one cloned mysql instance not starting, the
 whole pacemaker cluster gets stuck and does not emit any log. On the
 host of the failed instance, I can see a mysql resource agent process
 calling the sleep command. If I kill that process, the pacemaker comes
 back alive and RabbitMQ master gets promoted. In fact this long timeout
 is blocking every resource from state transition in pacemaker.
 
 This maybe a known problem of pacemaker and there are some discussions
 in Linux-HA mailing list [2]. It might not be fixed in the near future.
 It seems in generally it's bad to have long timeout in state transition
 actions (start/stop/promote/demote). There maybe another way to
 implement MySQL-wss resource agent to use a short start timeout and
 monitor the wss cluster state using monitor action.
  This is very interesting, thank you! I believe all commands in the MySQL RA
  OCF script should also be wrapped with timeout -SIGTERM or -SIGKILL,
  as we did for the MQ RA OCF. And there should not be any sleep calls. I
  created a bug for this [6].
 
 [6] https://bugs.launchpad.net/fuel/+bug/1449542
 
 I also 

Re: [openstack-dev] [TC][Keystone] Rehashing the Pecan/Falcon/other WSGI debate

2015-05-07 Thread Dolph Mathews
On Monday, May 4, 2015, Flavio Percoco fla...@redhat.com wrote:

 On 02/05/15 12:02 -0700, Morgan Fainberg wrote:



  On May 2, 2015, at 10:28, Monty Taylor mord...@inaugust.com wrote:

  On 05/01/2015 09:16 PM, Jamie Lennox wrote:
 Hi all,

 At around the time Barbican was applying for incubation there was a
 discussion about supported WSGI frameworks. From memory the decision
 at the time was that Pecan was to be the only supported framework and
 that for incubation Barbican had to convert to Pecan (from Falcon).

 Keystone is looking to ditch our crusty old, home-grown wsgi layer for
 an external framework and both Pecan and Falcon are in global
 requirements.

 In the experimenting I've done Pecan provides a lot of stuff we don't
 need and some that just gets in the way. To call out a few:
 * the rendering engine really doesn't make sense for us, for APIs, and
 where we are often returning different data (not just different views or
 data) based on Content-Type.
 * The security enforcement within Pecan does not really mesh with how
 we enforce policy, nor does the way we build controller objects per
 resource. It seems we will have to build this for ourselves on top of
 pecan

 and there are just various other niggles.

 THIS IS NOT SUPPOSED TO START A DEBATE ON THE VIRTUES OF EACH FRAMEWORK.

 Everything I've found can be dealt with and pecan will be a vast
 improvement over what we use now. I have also not written a POC with
 Falcon to know that it will suit any better.

 My question is: Does the ruling that Pecan is the only WSGI framework
 for OpenStack stand? I don't want to have 100s of frameworks in the
 global requirements, but given falcon is already there iff a POC
 determines that Falcon is a better fit for keystone can we use it?


 a) Just to be clear - I don't actually care


 Just to be super clear, I don't care either. :)


 That said:

 falcon is a wsgi framework written by kgriffs, who was PTL of marconi and who
 has since stopped being involved with OpenStack. My main perception of it has
 always been as a set of people annoyed by openstack doing their own
 thing. That's fine - but I don't have much of a use for that myself.


 ok, I'll bite.

 We didn't pick Falcon because Kurt was Marconi's PTL and Falcon's
 maintainer. The main reason it was picked was performance first [0] and
 time (we didn't/don't have enough resources to even think of porting the
 API), and at this point I believe a port is not even going to be
 considered in the near future.


I'm just going to pipe up and say that's a terribly shallow reason for
choosing a web framework, and I think it's silly and embarrassing that
there's not a stronger community preference for more mature frameworks. I
take that as a sign that most of our developer community is coming from
non-Python backgrounds, which is fine, but this whole conversation has
always felt like a plague of Not-Invented-Here, which baffles me.



 There were lots of discussions around this, there were POCs and team
 work. I think it's fair to say that the team didn't blindly *ignore*
 what was recommended as the community framework; it picked what
 worked best for the service.

 [0] https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation


 pecan is a wsgi framework written by Dreamhost that eventually moved
 itself into stackforge to better enable collaboration with our community
 after we settled on it as the API for things moving forward.

 Since the decision that new REST apis should be written in Pecan, the
 following projects have adopted it:

 openstack:
 barbican
 ceilometer
 designate
 gnocchi
 ironic
 ironic-python-agent
 kite
 magnum
 storyboard
 tuskar

 stackforge:
 anchor
 blazar
 cerberus
 cloudkitty
 cue
 fuel-ostf
 fuel-provision
 graffiti
 libra
 magnetodb
 monasca-api
 mistral
 octavia
 poppy
 radar
 refstack
 solum
 storyboard
 surveil
 terracotta

 On the other hand, the following use falcon:

 stachtach-quincy
 zaqar


 To me this is a strong indicator that pecan will see more eyes and
 possibly be more open to improvement to meet the general need.


 +1

  That means that for all of the moaning and complaining, there is
 essentially one thing that uses it - the project that was started by the
 person who wrote it and has since quit.

 I'm sure it's not perfect - but the code is in stackforge - I'm sure we
 can improve it if there is something missing. OTOH - if we're going to
 go back down this road, I'd think it would be more useful to maybe look
 at flask or something else that has a large following in the python
 community at large to try to reduce the amount of special we are.


 +1


 Please, lets not go back down this road, not yet at least. :)


  But honestly - I think it matters almost not at all, which is why I keep
 telling people to just use pecan ... basically, the argument is not
 worth it.


 +1, go with Pecan if your requirements are not like Zaqar's.
 Contribute to Pecan and make it better.

 Flavio

 --
 

Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-07 Thread Andrew Beekhof

 On 5 May 2015, at 9:30 pm, Zhou Zheng Sheng / 周征晟 zhengsh...@awcloud.com 
 wrote:
 
 Thank you Andrew. Sorry for misspell your name in the previous email.
 
 on 2015/05/05 14:25, Andrew Beekhof wrote:
 On 5 May 2015, at 2:31 pm, Zhou Zheng Sheng / 周征晟 zhengsh...@awcloud.com 
 wrote:
 
 Thank you Bogdan for clearing the pacemaker promotion process for me.
 
 on 2015/05/05 10:32, Andrew Beekhof wrote:
 On 29 Apr 2015, at 5:38 pm, Zhou Zheng Sheng / 周征晟 
 zhengsh...@awcloud.com wrote:
 [snip]
 
 Batch is a pacemaker concept I found when I was reading its
 documentation and code. There is a "batch-limit: 30" in the output of
 "pcs property list --all". The official pacemaker documentation
 explains it as "The number of jobs that the TE is allowed to
 execute in parallel." From my understanding, pacemaker maintains cluster
 states, and when we start/stop/promote/demote a resource, it triggers a
 state transition. Pacemaker puts as many transition jobs as possible
 into a batch, and processes them in parallel.
 Technically it calculates an ordered graph of actions that need to be 
 performed for a set of related resources.
 You can see an example of the kinds of graphs it produces at:
 
  
 http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/s-config-testing-changes.html
 
 There is a more complex one which includes promotion and demotion on the 
 next page.
 
 The number of actions that can run at any one time is therefore limited by
 - the value of batch-limit (the total number of in-flight actions)
 - the number of resources that do not have ordering constraints between 
 them (eg. rsc{1,2,3} in the above example)  
 
 So in the above example, if batch-limit = 3, the monitor_0 actions will 
 still all execute in parallel.
 If batch-limit == 2, one of them will be deferred until the others 
 complete.
 
 Processing of the graph stops the moment any action returns a value that 
 was not expected.
 If that happens, we wait for currently in-flight actions to complete, 
 re-calculate a new graph based on the new information and start again.
 So can I infer the following statement? In a big cluster with many
 resources, chances are some resource agent actions return unexpected
 values,
 The size of the cluster shouldn’t increase the chance of this happening 
 unless you’ve set the timeouts too aggressively.
 
 If there are many types of resource agents, and any one of them is not
 well written, it might cause trouble, right?

Yes, but really only for the things that depend on it.

For example if resources B, C, D, E all depend (in some way) on A, then their 
startup is going to be delayed.
But F, G, H and J will be able to start while we wait around for B to time out.

 
 and if any of the in-flight action timeout is long, it would
 block pacemaker from re-calculating a new transition graph?
 Yes, but its actually an argument for making the timeouts longer, not 
 shorter.
 Setting the timeouts too aggressively actually increases downtime because of 
 all the extra delays and recovery it induces.
 So set them to be long enough that there is unquestionably a problem if you 
 hit them.
 
 But we absolutely recognise that starting/stopping a database can take a 
 very long time comparatively and that it shouldn’t block recovery of other 
 unrelated services.
 I would expect to see this land in Pacemaker 1.1.14
 
 It will be great to see this in Pacemaker 1.1.14. From my experience
 using Pacemaker, I think customized resource agents are possibly the
 weakest part.

This is why we encourage people wanting new agents to get involved with the 
upstream resource-agents project :-)

 This feature should improve the handling for resource
 action timeouts.
 
 I see the
 current batch-limit is 30 and I tried to increase it to 100, but it did not
 help.
 Correct.  It only puts an upper limit on the number of in-flight actions, 
 actions still need to wait for all their dependants to complete before 
 executing.
 
 I'm sure that the cloned MySQL Galera resource is not related to
 master-slave RabbitMQ resource. I don't find any dependency, order or
 rule connecting them in the cluster deployed by Fuel [1].
 In general it should not have needed to wait, but if you send me a 
 crm_report covering the period you’re talking about I’ll be able to comment 
 specifically about the behaviour you saw.
 
 You are very nice, thank you. I uploaded the file generated by
 crm_report to google drive.
 
 https://drive.google.com/file/d/0B_vDkYRYHPSIZ29NdzV3NXotYU0/view?usp=sharing

Hmmm... there are no logs included here for some reason.
I suspect it's a bug on my part; can you apply this patch to report.collector on 
the machine you're running crm_report from and retry?

   https://github.com/ClusterLabs/pacemaker/commit/96427ec


 
 Is there anything I can do to make sure all the resource actions return
 expected values in a full reassembly?
 In general, if we say ‘start’, do your best to start or return ‘0’ if you 
 

Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Michael Barton
On Thu, May 7, 2015 at 7:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 Chuck (and/or others who understand or have experienced the limits of
 Python)

 I found this comment of yours incredibly intriguing: we are running out
 of incremental improvements that can be made with Python.

 Given your work with Swift thus far, what sort of limitations have you
 discovered that had to do specifically with the fact we're using Python? I
 haven't run into real-life limitations specific to a particular language
 before (I usually run into issues with my approach rather than limitations
 with the language itself) so i find this to be a fascinating (and perhaps
 accidental) consideration.



Well, Swift is sort of different from provisioning services like most
Openstack projects.  We handle hundreds of times as many requests as big
Nova installations, and the backend servers like this one handle some
multiplier on top of that.  Our users care a lot about performance because
it affects things like their page load times.

Then Python turns out to be kind of a bad choice for writing a
high-performance file server.  It's slow.  Its concurrency model really
punishes you for having workloads that mix disk and network i/o and CPU.
Swift's mix of worker processes, eventlet, and thread pools mostly works
but it's complicated and inefficient.  Blocking disk operations and
CPU-heavy tasks are still prone to either locking up event loops or
thrashing the GIL.

Python 3 and pypy would both make some aspects of that better, but not fix
it (yet).
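
(For anyone unfamiliar with the pattern Mike describes, here is a rough
sketch of pushing a blocking disk read to a real OS thread under eventlet so
it does not stall the event loop; read_chunk and the path are made up for
illustration:)

    import eventlet
    eventlet.monkey_patch()
    from eventlet import tpool

    def read_chunk(path):
        # An ordinary blocking read; under a plain eventlet hub this would
        # freeze every other greenthread until the disk returns.
        with open(path, 'rb') as f:
            return f.read(65536)

    # tpool.execute() runs the blocking call in a worker thread and only
    # the calling greenthread waits for the result.
    data = tpool.execute(read_chunk, '/tmp/example.data')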

- Mike
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-05-07 Thread Tony Breeds
On Thu, May 07, 2015 at 01:31:07PM +0300, Mikhail Dubov wrote:
 We have decided to stay in *#openstack-meeting* but have our meetings *on
 Mondays at 1400 UTC*. Hope that this time there will be no conflicts.
 
 We will also have the internal release meeting in *#openstack-rally* one
 hour before that, on Mondays at 1300 UTC.

Okay thanks.  iCal updated.

Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Adam Lawson
Chuck (and/or others who understand or have experienced the limits of
Python)

I found this comment of yours incredibly intriguing: we are running out of
incremental improvements that can be made with Python.

Given your work with Swift thus far, what sort of limitations have you
discovered that had to do specifically with the fact we're using Python? I
haven't run into real-life limitations specific to a particular language
before (I usually run into issues with my approach rather than limitations
with the language itself) so I find this to be a fascinating (and perhaps
accidental) consideration.



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Thu, May 7, 2015 at 3:48 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Chuck Thier's message of 2015-05-07 13:10:13 -0700:
  I think most are missing the point a bit.  The question that should
 really
  be asked is, what is right for Swift to continue to scale.  Since the
  inception of Openstack, Swift has had to solve for problems of scale that
  generally are not shared with the rest of Openstack.
 
  When we first set out to write Swift, we had set, what we thought at the
  time were pretty lofty goals for ourselves:
 
  * 100 Billion objects
  * 100 Petabytes of data
  * 100 K requests/second
  * 100 Gb/s throughput
 
  We started with Python figuring that when we hit major bottlenecks, we
  would look at other options.  We have been surprised at how far we have
  been able to push Python and have met most if not all of the goals above.
 
  As we look toward the future, we realize that we are now looking for how
 we
  will support trillions of objects, 100's of petabytes to exabytes of
 data,
  etc.  We feel that we have finally hit that point that we need more than
  incremental improvements, and that we are running out of incremental
  improvements that can be made with Python.
 
  What started as a simple experiment by Mike Barton, has turned into
 quite a
  significant improvement in performance and builds a base that can be
 built
  off of for future improvements.  This wasn't built because of it being
  shiny but out of direct need, and is currently being tested with great
  results on production workloads.
 
  I applaud the team that has worked on this at Rackspace, and hope the
  community can look at the current needs of Swift, and the merits of the
  work that has been accomplished, rather than the politics of shiny.
 

 Chuck, much respect to you and the team for everything accomplished.

 I'm still very curious to hear if anybody has been willing to try to
 make Swift work on pypy. This is pretty much why pypy exists, and making
 pypy work for Swift could mean some really nice residual benefits to the
 other projects that haven't gone as far as to experiment with a compiled
 language like Go yet. There's also the other benefit that pypy would
 gain some eyeballs and improvements that we could feed back into it.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Feedback to move IRC Monday meeting and time.

2015-05-07 Thread zhiwei
Sorry for the late response; 1600 GMT is a little late in China, so I will
check the meeting minutes if I cannot attend.

On Fri, May 8, 2015 at 12:19 AM, JJ Asghar jasg...@chef.io wrote:

 On May 6, 2015, at 11:33 PM, Samuel Cassiba s...@cassiba.com wrote:


 This has actually caused a situation that I’d like to make public. In the
 documentation the times for the meetings are suggested at the top of the
 hour, we have ours that start at :30 past. This allows for our friends and
 community members on the west coast of the United States able to join at a
 pseudo-reasonable time.  The challenge is, if we move it forward to the top
 of the hour, we may lose the west coast, but if we move it back to the top
 of the next hour we may lose our friends in Germany and earlier time zones.

 Moving it to the top of the hour would still fall in the realm of
 pseudo-reasonable, as reasonable as meetings in that timeframe can be. 0800
 is more reasonable than 0730. ;-)

 By doing that, it would allow the group to remain as inclusive as possible,
 while still allowing time for commutes for us west coasters.


 Awesome! Yeah so it looks like we’ve gotten some positive feedback about
 moving the meeting to 1600GMT. We’ll discuss it and ideally make it official
 on Monday, and it’s already added to the agenda[1].


 -JJ

 [1]: https://etherpad.openstack.org/p/openstack-chef-meeting-20150511


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Clay Gerrard
On Thu, May 7, 2015 at 3:48 PM, Clint Byrum cl...@fewbar.com wrote:

 I'm still very curious to hear if anybody has been willing to try to
 make Swift work on pypy.


yeah, Alex Gaynor was helping out with it for awhile.  It worked.  And it
helped.  A little bit.

Probably still worth looking at if you're curious, but I'm not aware of
anyone who's currently working aggressively to productionize swift running
on pypy.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [be nice] Before doing big non backardcompabitle changes in how gates work make sure that all PTL are informed about that

2015-05-07 Thread Joe Gordon
As a heads up, here is a patch to remove Sahara from the default
configuration as well. This is part of the effort to further decouple the
'integrated gate' so we don't have to gate every project on the tests for
every project.

https://review.openstack.org/#/c/181230/

On Thu, May 7, 2015 at 11:58 AM, Boris Pavlovic bo...@pavlovic.me wrote:

  So... test jobs should be extremely explicit about what they set up and
  what they expect.


 +2

 Best regards,
 Boris Pavlovic

 On Thu, May 7, 2015 at 9:44 PM, Sean Dague s...@dague.net wrote:

 On 05/07/2015 02:29 PM, Joshua Harlow wrote:
  Boris Pavlovic wrote:
  Sean,
 
  Nobody is able to track and know *everything*.
 
  A friendly reminder that Heat is going to be removed and not installed by
  default would have helped to avoid such situations.
 
  Doesn't keystone have a service listing? Use that in rally (and
  elsewhere?). If keystone had a service listing and each service had an API
  discovery ability, there you go, profit! ;)

 Service listing for test jobs is actually quite dangerous, because then
 something can change something about which services are registered, and
 you automatically start skipping 30% of your tests because you react
 correctly to this change. However, that means the job stopped doing what
 you think it should do.

 *This has happened multiple times in the past*. And typically days,
 weeks, or months go by before someone notices in investigating an
 unrelated failure. And then it's days, weeks, or months to dig out of
 the regressions introduced.

  So... test jobs should be extremely explicit about what they set up and
  what they expect.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] libvirt.remove_unused_kernels config option - default to true now?

2015-05-07 Thread Matt Riedemann

I came across this today:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagecache.py#L50

That was added back in grizzly:

https://review.openstack.org/#/c/22777/

With a note in the code that we should default it to true at some point. 
 Is 2+ years long enough for this to change to true?


This change predates my involvement in the project so ML it is.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread Joe Gordon
On May 7, 2015 2:37 AM, Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com wrote:

 Hi,

 The primary point of this expected discussion around asynchronous
 communication is to optimize performance by reducing latency.

 For instance, the design used in Nova and probably other projects allows
 asynchronous operations to be performed in two ways.

 1. When communicating between services
 2. When communicating with the database

 1 and 2 are close since they use the same API but I prefer to keep a
 difference here since the high level layer is not the same.

 From the Oslo Messaging point of view, we currently have two methods to
 invoke an RPC:

   Cast and Call: The first one is not blocking and will invoke an RPC
 without waiting for any response, while the second will block the
 process and wait for the response.

 The aim is to add a new method which will return, without blocking the
 process, an object (let's call it a Future) which will provide some basic
 methods to wait for and get a response at any time.

 The benefit for Nova comes at a higher level:

 1. When communicating between services it will not be necessary to block
    the process, and this free time can be used to execute some other
    computations.

Isn't this what the use of green threads (and eventlet) is supposed to
solve? Assuming my understanding is correct, and we can fix any issues
without adding async to oslo.messaging, then adding yet another async pattern
seems like a bad thing.


   future = rpcapi.invoke_long_process()
  ... do something else here ...
   result = future.get_response()

 2. We can use the benefit of all of the work previously done with the
    Conductor, and so by updating the Objects and Indirection API
    framework we should take advantage of async operations to the database.

MyObject = MyClassObject.get_async()
  ... do something else here ...
MyObject.wait()

MyObject.foo = bar
MyObject.save_async()
  ... do something else here ...
MyObject.wait()

 All of this is just to illustrate and has to be discussed.

 I guess the first work needs to come from Oslo Messaging, so the
 question is to get the feeling here, and then from Nova since it will
 be the primary consumer of this feature.

 https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

 Thanks,
 s.
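
For concreteness, here is a rough sketch of what such a Future-returning
call could look like if it were layered on top of the existing blocking
call() with a thread pool. This is not oslo.messaging API; names such as
invoke_long_process are illustrative only.

    from concurrent import futures

    _pool = futures.ThreadPoolExecutor(max_workers=4)

    def call_async(client, ctxt, method, **kwargs):
        # Submit the ordinary blocking RPC call to a worker thread and
        # return a Future immediately; the caller can keep working.
        return _pool.submit(client.call, ctxt, method, **kwargs)

    # usage, mirroring the pseudocode above:
    # future = call_async(rpc_client, ctxt, 'invoke_long_process')
    # ... do something else here ...
    # result = future.result()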

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-07 Thread Mike Bayer



On 5/7/15 5:32 PM, Thomas Goirand wrote:

If there are really fixes and features we need in Py2K then of course we
have to either convince MySQLdb to merge them or switch to mysqlclient.


Given the lack of any reply in 6 months I think that's enough to say it: 
mysql-python is a dangerous package with a non-responsive upstream. 
That's always bad, and IMO, enough to try to get rid of it. If you 
think switching to PyMySQL is effortless, and the best way forward, 
then let's do that ASAP!


haha - I'd rather have us drop eventlet + use mysqlclient :)

as far as this thread, where this has been heading is that django has 
already been recommending mysqlclient, and it's become apparent just what 
a barrage of emails and messages have been sent Andy Dustman's way, with 
no response.  I agree this is troubling behavior, and I've alerted 
people at RH internal that we need to start thinking about this package 
switch.  My original issue was that for Fedora etc., changing it in 
this way is challenging, and from my discussions with packaging people, 
this is actually correct - this isn't an easy change for them and 
there have been many emails as a result.  My other issue is the 
SQLAlchemy testing issue - I'd essentially have to just stop testing 
mysql-python and switch to mysqlclient entirely, which means I need to 
revise all my docs and get all my users to switch also when the 
SQLAlchemy MySQLdb dialect eventually diverges from mysql-python 1.2.5, 
hence the whole thing is, in a not-minor-enough way, my problem as 
well.  A simple module name change for mysqlclient, then there's no 
problem.  But there you go - assuming continued crickets from AD, and 
seeing that people continue to find it important to appease projects like 
Trac that IMO quite amateurishly hardcode "import MySQLdb", I don't see 
much other option.
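
For anyone following along, the practical difference between the drivers is
mostly visible in the SQLAlchemy connection URL; a quick sketch with
placeholder credentials:

    from sqlalchemy import create_engine

    # mysql-python and its mysqlclient fork both install the MySQLdb module
    # and are picked up by the default "mysql" dialect:
    engine = create_engine("mysql://user:secret@localhost/nova")

    # PyMySQL has to be selected explicitly in the URL:
    engine = create_engine("mysql+pymysql://user:secret@localhost/nova")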


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Duncan Thomas
On 7 May 2015 at 23:10, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


 Is there anything preventing us from adding a more specific exception to
 cinderclient and then once that's in and released, we can pin the minimum
 version of cinderclient in global-requirements so nova can safely use it?


Seems like the right approach to me. In some cases, the cinder API will
need to return more info first. I suggest raising bugs for any situations
that are hard to detect, and we can work from there...
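
As a rough sketch of what that could look like on the Nova side: OverLimit
already exists in cinderclient, while VolumeLimitExceeded below is purely
hypothetical and would need the new release plus the minimum-version pin;
'cinder' stands for an already-authenticated cinderclient Client.

    from cinderclient import exceptions as cinder_exc

    try:
        cinder.volumes.create(10)
    except cinder_exc.OverLimit:
        # Today this is a generic 413; Nova has to guess which quota was
        # actually exceeded (volumes, gigabytes, snapshots, ...).
        raise
    # Hypothetical, once a more specific exception exists in cinderclient
    # and global-requirements pins the minimum client version:
    # except cinder_exc.VolumeLimitExceeded:
    #     ... translate into a precise, user-facing Nova error ...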

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-07 Thread Josh Durgin

Hey folks, thanks for filing a bug for this:

https://bugs.launchpad.net/cinder/+bug/1452641

Nova stores the volume connection info in its db, so updating that
would be a workaround to allow restart/migration of vms to work.
Otherwise running vms shouldn't be affected, since they'll notice any
new or deleted monitors through their existing connection to the
monitor cluster.

Perhaps the most general way to fix this would be for cinder to return
any monitor hosts listed in ceph.conf (as they are listed, so they may
be hostnames or ips) in addition to the ips from the current monmap
(the current behavior).

That way an out of date ceph.conf is less likely to cause problems,
and multiple clusters could still be used with the same nova node.
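
A rough illustration of that merging (this is not the current Cinder rbd
driver code; connection_hosts() and its monmap_ips argument are hypothetical,
and real ceph.conf parsing is looser than ConfigParser):

    from six.moves import configparser

    def conf_mon_hosts(conf_path='/etc/ceph/ceph.conf'):
        # mon_host in ceph.conf may hold hostnames or aliases; keep them
        # exactly as written.
        cfg = configparser.ConfigParser()
        cfg.read(conf_path)
        return [h.strip() for h in cfg.get('global', 'mon_host').split(',')]

    def connection_hosts(monmap_ips, conf_path='/etc/ceph/ceph.conf'):
        # ceph.conf entries first, then the IPs from the current monmap,
        # de-duplicated, so stored connection_info keeps working even after
        # mons are moved or replaced.
        merged = []
        for host in conf_mon_hosts(conf_path) + list(monmap_ips):
            if host not in merged:
                merged.append(host)
        return merged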

Josh

On 05/06/2015 12:46 PM, David Medberry wrote:

Hi Arne,

We've had this EXACT same issue.

I don't know of a way to force an update as you are basically pulling
the rug out from under a running instance. I don't know if it is
possible/feasible to update the virsh xml in place and then migrate to
get it to actually use that data. (I think we tried that to no avail.)
(dumpxml, then massage the ceph mons, then import the xml)

If you find a way, let me know, and that's part of the reason I'm
replying so that I stay on this thread. NOTE: We did this on icehouse.
Haven't tried since upgrading to Juno but I don't note any change
therein that would mitigate this. So I'm guessing Liberty/post-Liberty
for a real fix.



On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck arne.wieba...@cern.ch
mailto:arne.wieba...@cern.ch wrote:

Hi,

As we swapped a fraction of our Ceph mon servers between the
pre-production and production cluster
— something we considered to be transparent, as the Ceph config
points to the mon alias — we ended
up in a situation where VMs with volumes attached were not able to
boot (with a probability that matched
the fraction of the servers moved between the Ceph instances).

We found that the reason for this is the connection_info in
block_device_mapping which contains the
IP adresses of the mon servers as extracted by the rbd driver in
initialize_connection() at the moment
when the connection is established. From what we see, however, this
information is not updated as long
as the connection exists, and will hence be re-applied without
checking even when the XML is recreated.

The idea to extract the mon servers by IP from the mon map was
probably to get all mon servers (rather
than only one from a load-balancer or an alias), but while our
current scenario may be special, we will face
a similar problem the day the Ceph mons need to be replaced. And
that makes it a more general issue.

For our current problem:
Is there a user-transparent way to force an update of that
connection information? (Apart from fiddling
with the database entries, of course.)

For the general issue:
Would it be possible to simply use the information from the
ceph.conf file directly (an alias in our case)
throughout the whole stack to avoid hard-coding IPs that will be
obsolete one day?

Thanks!
  Arne

—
Arne Wiebalck
CERN IT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-07 Thread Sergey Kraynev
Hi Jay.

AFAIK, it works, but there can be some minor issues. There are several patches on
review to improve it:

https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:improve-sessionclient,n,z

Also, as I remember, we really had the bug you mentioned, but the fix was merged.
Please look:
https://review.openstack.org/#/c/160431/1
https://bugs.launchpad.net/python-heatclient/+bug/1427310

Which version of the client do you use? Try to use the code from master; it
should work.
Also one note: the best place for such questions is
openst...@lists.openstack.org or http://ask.openstack.org/. And of course
the #heat channel on IRC.
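
For reference, a minimal sketch of the session-based pattern with a v2
password auth (auth_url and credentials are placeholders; whether endpoint
discovery works without an explicit endpoint depends on the client version,
per the patches above):

    from keystoneclient.auth.identity import v2
    from keystoneclient import session
    from heatclient import client as heat_client

    auth = v2.Password(auth_url='http://keystone:5000/v2.0',
                       username='demo', password='secret',
                       tenant_name='demo')
    sess = session.Session(auth=auth)

    # '1' is the Orchestration API version; it is the positional argument
    # the TypeError complains about when it is missing.
    heat = heat_client.Client('1', session=sess)
    for stack in heat.stacks.list():
        print(stack.stack_name)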

Regards,
Sergey.

On 7 May 2015 at 23:43, Jay Reslock jresl...@gmail.com wrote:

 Hi,
 This is my first mail to the group.  I hope I set the subject correctly
 and that this hasn't been asked already.  I searched archives and did not
 see this question asked or answered previously.

 I am working on a client thing that uses the python-keystoneclient and
 python-heatclient api bindings to set up an authenticated session and then
 use that session to talk to the heat service.  This doesn't work for heat
 but does work for other services such as nova and sahara.  Is this because
 sessions aren't supported in the heatclient api yet?

 sample code:

 https://gist.github.com/jreslock/a525abdcce53ca0492a7

 I'm using fabric to define tasks so I can call them via another tool.
 When I run the task I get:

 TypeError: Client() takes at least 1 argument (0 given)

 The documentation does not say anything about being able to pass a session
 to heatclient, but the others seem to work. I just want to know if this
 is intended/expected behavior or not.

 -Jason

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Service group foundations and features

2015-05-07 Thread Joshua Harlow

Hi all,

In seeing the following:

- https://review.openstack.org/#/c/169836/
- https://review.openstack.org/#/c/163274/
- https://review.openstack.org/#/c/138607/

Vilobh and I are starting to come to the conclusion that the service 
group layers in nova really need to be cleaned up (without adding more 
features that only work in one driver), or removed or other... Spec[0] 
has interesting findings on this:


A summary/highlights:

* The zookeeper service driver in nova has probably been broken for 1 or 
more releases, due to eventlet attributes that are now gone and that it was 
using via the evzookeeper [1] library. Evzookeeper only works for eventlet < 
0.17.1. Please refer to [0] for details.
* The memcache service driver really only uses memcache for a tiny piece 
of the service liveness information (and does a database service table 
scan to get the list of services). Please refer to [0] for details.
* Nova-manage service disable (CLI admin api) does interact with the 
service group layer for the 'is_up'[3] API (but it also does a database 
service table scan[4] to get the list of services, so this is 
inconsistent with the service group driver API 'get_all'[2] view on what 
is enabled/disabled). Please refer to [9][10] for nova manage service 
enable disable for details.
  * Nova service delete (REST api) seems to follow a similar broken 
pattern (it also avoids calling into the service group layer to delete a 
service, which means it only works with the database layer[5], and 
therefore is inconsistent with the service group 'get_all'[2] API).


^^ Doing the above leaves both disable/delete unaware of other 
backends that may manage service group data, for example 
zookeeper, memcache, redis etc... Please refer to [6][7] for details. 
Ideally the API should follow the model used in [8] so that the 
extension, the admin interface and the API interface all use the same 
servicegroup interface, which should be *fully* responsible for managing 
services. Doing so, we will have a consistent view of services data, 
liveness, disabled/enabled and so on...
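
To make the inconsistency concrete, a simplified sketch of the two paths as
they stand today (based on the code linked below; 'service' and 'context'
are placeholders for a Service object and a RequestContext):

    from nova import db
    from nova import servicegroup

    sg_api = servicegroup.API()

    # Liveness goes through the servicegroup driver (DB, zk, memcache, ...):
    alive = sg_api.service_is_up(service)

    # ...but the list of services is a direct database table scan, bypassing
    # the driver's get_all() view entirely:
    services = db.service_get_all(context)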


So with no disrespect to the authors of 169836 and 163274 (or anyone 
else involved), I am wondering if we can put a request in to figure out 
how to get the foundation of the service group concepts stabilized (or 
other...) before adding more features (that only work with the DB layer).


What is the path to request some kind of larger coordination effort by 
the nova folks to fix the service group layers (and the concepts that 
are not disjoint/don't work across them) before continuing to add 
features on-top of a 'shakey' foundation?


If I could propose something it would probably work out like the following:

Step 0: Figure out if the service group API + layer(s) should be 
maintained/tweaked at all (nova-core decides?)


If maintain it:

 - Have an agreement that the nova service extension, the admin 
interface (nova-manage) and the API go through a common path for 
update/delete/read.
  * This common path should likely be the servicegroup API, so as to 
have a consistent view of data; that also helps nova to add different 
data-stores (keeping the services data in a DB and getting numerous 
liveness updates every few seconds from N compute nodes, where 
N is pretty high, can be detrimental to Nova's performance)
 - At the same time allow 163274 to be worked on (since it fixes an 
edge-case that was asked about in the initial addition of the delete API 
in its initial code commit @ https://review.openstack.org/#/c/39998/)
 - Delay 169836 until the above two/three are fixed (and stabilized); 
its 'down' concept (and all other usages of services that are hitting a 
database mentioned above) will need to go through the same service group 
foundation that is currently being skipped.


Else:
  - Discard 138607 and start removing the service group code (and just 
use the DB for all the things).
  - Allow 163274 and 169836 (since those would be additions on top of 
the DB layer that will be preserved).


Thoughts?

- Josh (and Vilobh, who is spending the most time on this recently)

[0] Replace service group with tooz : 
https://review.openstack.org/#/c/138607/

[1] https://pypi.python.org/pypi/evzookeeper/
[2] 
https://github.com/openstack/nova/blob/stable/kilo/nova/servicegroup/api.py#L93
[3] 
https://github.com/openstack/nova/blob/stable/kilo/nova/servicegroup/api.py#L87

[4] https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L711
[5] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/services.py#L106
[6] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/services.py#L107

[7] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L3436
[8] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/services.py#L61
[9] Nova manage enable : 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L742
[10] Nova manage disable : 

Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Clint Byrum
Excerpts from Chuck Thier's message of 2015-05-07 13:10:13 -0700:
 I think most are missing the point a bit.  The question that should really
 be asked is, what is right for Swift to continue to scale.  Since the
 inception of Openstack, Swift has had to solve for problems of scale that
 generally are not shared with the rest of Openstack.
 
 When we first set out to write Swift, we had set, what we thought at the
 time were pretty lofty goals for ourselves:
 
 * 100 Billion objects
 * 100 Petabytes of data
 * 100 K requests/second
 * 100 Gb/s throughput
 
 We started with Python figuring that when we hit major bottlenecks, we
 would look at other options.  We have been surprised at how far we have
 been able to push Python and have met most if not all of the goals above.
 
 As we look toward the future, we realize that we are now looking for how we
 will support trillions of objects, 100's of petabytes to exabytes of data,
 etc.  We feel that we have finally hit that point that we need more than
 incremental improvements, and that we are running out of incremental
 improvements that can be made with Python.
 
 What started as a simple experiment by Mike Barton, has turned into quite a
 significant improvement in performance and builds a base that can be built
 off of for future improvements.  This wasn't built because of it being
 shiny but out of direct need, and is currently being tested with great
 results on production workloads.
 
 I applaud the team that has worked on this at Rackspace, and hope the
 community can look at the current needs of Swift, and the merits of the
 work that has been accomplished, rather than the politics of shiny.
 

Chuck, much respect to you and the team for everything accomplished.

I'm still very curious to hear if anybody has been willing to try to
make Swift work on pypy. This is pretty much why pypy exists, and making
pypy work for Swift could mean some really nice residual benefits to the
other projects that haven't gone as far as to experiment with a compiled
language like Go yet. There's also the other benefit that pypy would
gain some eyeballs and improvements that we could feed back into it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] removing Angus Salkeld and Nick Barcet from ceilometer-core‏

2015-05-07 Thread Angus Salkeld
On Fri, May 8, 2015 at 3:56 AM, gordon chung g...@live.ca wrote:

 hi folks,

 as both have moved on to other endeavours, today we will be removing two
 founding contributors of Ceilometer from the core team. thanks to both of
you for guiding the project in its early days!


+1 from me, it's somewhat overdue. I appreciate having been core - thank
you all.

-Angus



 cheers,
 *gord*

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Question about multi-host mode while using nova-network

2015-05-07 Thread BYEONG-GI KIM
Hello.

It seems that this might be quite an outdated question, because it is
about nova-network instead of neutron.

I wonder whether VMs located on a compute node, e.g. Compute A, are
accessible while its nova-network service is down, if nova-network
is running on the other compute nodes, such as Compute B, Compute C, etc.

Or does multi-host just provide continuity of the networking service
by avoiding a single point of failure?

Thanks in advance!

Regards,
Byeong-gi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6

2015-05-07 Thread John Davidge (jodavidg)
On the subject of Prefix Delegation - yes, the external system is
responsible for the routing. Here's a couple of video guides on using
PD in Neutron and setting up the Prefix Delegation Server (in this case
a dibbler server):

Using Neutron PD: https://www.youtube.com/watch?v=wI830s881HQ

Configuring the PD server: https://www.youtube.com/watch?v=zfsFyS01Fn0

The patch is up for review at: https://review.openstack.org/#/c/158697

And the networking guide docs: https://review.openstack.org/#/c/178739

John

On 06/05/2015 17:57, Carl Baldwin c...@ecbaldwin.net wrote:


On Wed, May 6, 2015 at 12:46 AM, Mike Spreitzer mspre...@us.ibm.com
wrote:
 While I am a Neutron operator, I am also a customer of a lower layer
network
 provider.  That network provider will happily give me a few /64.  How
do I
 serve IPv6 subnets to lots of my tenants?  In the bad old v4 days this
would
 be easy: a tenant puts all his stuff on his private networks and NATs
(e.g.,
 floating IP) his edge servers onto a public network --- no need to align
 tenant private subnets with public subnets.  But with no NAT for v6,
there
 is no public/private distinction --- I can only give out the public v6
 subnets that I am given.  Yes, NAT is bad.  But not being able to get
your
 job done is worse.

Mike, in this paragraph, you're hitting on something that has been on
my mind for a while.  We plan to cover this problem in detail in this
talk [1] and we're defining some work for Liberty to better address it
[2][3].  You hit the nail on the head, there is no distinguishing
private and public IP addresses in Neutron currently with IPv6.

Kilo's new subnet pool feature is a start.  It will allow you to
create a shared subnet pool including the /64s from your service
provider.  Tenants can then create a subnet getting an allocation from
it automatically.  However, given the current state of things, there
will be some manual work on the gateway router to route them to the
tenant's router.

Prefix delegation -- which looks on track for Liberty -- is another
option which could fill this void.  It will allow a router to get a
prefix delegation from an external PD system which will be useable on
a tenant subnet.  Presumably the external system will take care of
routing the subnet to the appropriate tenant router.

Carl

[1] http://sched.co/2qdm
[2] https://review.openstack.org/#/c/180267/
[3] https://review.openstack.org/#/c/125401/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Are routed TAP interfaces (and DHCP for them) in scope for Neutron?

2015-05-07 Thread Neil Jerram

Hi Carl,

I think I already answered your questions in my previous email below - 
but possibly that just means that I am misunderstanding exactly what you 
are asking!  More inline


On 06/05/15 18:13, Carl Baldwin wrote:

This brings up something I'd like to discuss.  We have a config option
called allow_overlapping_ips which actually defaults to False.  It
has been suggested [1] that this should be removed from Neutron and
I've just started playing around with ripping it out [2] to see what
the consequences are.

A purely L3 routed network, like Calico, is a case where it is more
complex to implement allowing overlapping ip addresses.


Well, yes.  At least it means needing per-address space namespaces on 
the compute host.  Because there might be two VMs on the same host and 
with the same IP, in different private address spaces, and there 
couldn't possibly be a way of routing both of those correctly, without 
putting their TAP interfaces into different namespaces.


Then there's still the question of how to route across the fabric to the 
destination compute host (or to a border gateway), and how to 
 communicate, to that compute host, the address space in which it should 
 route it. To solve that, in Calico, we're thinking of using stateless 
464XLAT.


There's a fuller exposition of all this, if you'd like more detail, at 
http://docs.projectcalico.org/en/latest/overlap-ips.html.



If we deprecate and eventually remove allow_overlapping_ips, will this
be a problem for Calico?


I don't think so, but I'm not sure I've fully grasped the implications. 
 For people using Calico today, we'd simply document and advertise that 
overlapping IPs aren't supported yet, and that would be the same 
regardless of whether allow_overlapping_ips still exists.


(More broadly, there are other things in the Neutron API that Calico has 
a different take on.  For example floating IPs - where Calico's approach 
is that if you want a VM to be publically addressable, you just give it 
an IP from a public range.  We currently try to cover those things via 
documentation, as here: 
http://docs.projectcalico.org/en/latest/calico-neutron-api.html.  But a 
longer term view would be for us to suggest evolving some of the Neutron 
API's concepts such that they have interpretations that make sense for 
both traditional Neutron networks and for Calico-like routed networks.


Hence the question that I asked when starting this thread: are routed 
TAP interfaces [or perhaps I should have said 'routed networking'] in 
scope for Neutron?  If they are, that might eventually have consequences 
for how the Neutron API should be expressed.)


Then, in the hopefully not-too-distant future, Calico _will_ support 
overlapping IPs, and then it certainly won't be a problem if 
allow_overlapping_ips no longer exists.



Is the shared address space in Calico
confined to a single flat network


Yes.


or do you already support tenant
private networks with this technology?


No, not yet.


 If I recall from previous
discussions, I think that it only supports Neutron's flat network
model in the current form, so I don't think it should be a problem.
Am I correct?  Please confirm.


Correct.

Many thanks for your interest and reply!

Neil



[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036336.html
[2] https://review.openstack.org/#/c/179953/

On Fri, May 1, 2015 at 8:22 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

Thanks for your reply, Kevin, and sorry for the delay in following up.

On 21/04/15 09:40, Kevin Benton wrote:


Is it compatible with overlapping IPs? i.e. Will it give two different
VMs the same IP address if the reservations are setup that way?



No, not as I've described it below, and as we've implemented Calico so far.
Calico's first target is a shared address space without overlapping IPs, so
that we can handle everything within the default namespace.

But we do also anticipate a future Calico release to support private address
spaces with overlapping IPs, while still routing all VM data rather than
bridging.  That will need the private address TAP interfaces to go into a
separate namespace (per address space), and have their data routed there;
and we'd run a Dnsmasq in that namespace to provide that space's IP
addresses.

Within each namespace - whether the default one or private ones - we'd still
use the other changes I've described below for how the DHCP agent creates
the ns-XXX interface and launches Dnsmasq.

Does that make sense?  Do you think that this kind of approach could be in
scope under the Neutron umbrella, as an alternative to bridging the TAP
interfaces?

Thanks,
 Neil



On 16/04/15 15:12, Neil Jerram wrote:

 I have a Neutron DHCP agent patch whose purpose is to launch dnsmasq
 with options such that it works (= provides DHCP service) for TAP
 interfaces that are _not_ bridged to the DHCP interface (ns-XXX).  For
 the sake of being concrete, this involves:

   

[openstack-dev] [Fuel] Ceph software version in next releases

2015-05-07 Thread Rogon, Kamil
Hello,

 

I would like to ask what Ceph versions are scheduled for next releases.

 

I see the blueprint [1] for upgrading to the next stable release (from Firefly
to Giant), but it is still in the drafting state. 

That upgrade is important for the Fuel 7.0 release, as it introduces a lot of
improvements for flash backends, which is essential nowadays.

I think you should consider the next Ceph version (Hammer), as it is marked as
LTS and should supersede 0.80.x Firefly. What's more, it has already been
released [2].

 

Regarding the upcoming 6.0.1 / 6.1 release, I understand that you are sticking
with the Firefly branch. However, will it be updated from 0.80.7 to 0.80.9? That
fixes some performance regressions in librbd [3].

 

[1] https://blueprints.launchpad.net/fuel/+spec/upgrade-ceph

[2] http://ceph.com/docs/next/releases/

[3] http://docs.ceph.com/docs/next/release-notes/#v0-80-9-firefly

 

 

Regards,

Kamil Rogon



Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread ozamiatin

Hi,

I generally like the idea of an async CALL. Is there a place in Nova (or
other services) where the new CALL could be applied to see an advantage?


Thanks,
Oleksii Zamiatin

On 07.05.15 12:34, Sahid Orentino Ferdjaoui wrote:

Hi,

The primary point of this expected discussion around asynchronous
communication is to optimize performance by reducing latency.

For instance, the design used in Nova and probably other projects allows
asynchronous operations to be performed in two ways.

1. When communicating between services
2. When communicating with the database

1 and 2 are close since they use the same API but I prefer to keep a
difference here since the high level layer is not the same.

 From the Oslo Messaging point of view, we currently have two methods to
invoke an RPC:

   Cast and Call: The first one is not blocking and will invoke an RPC
 without waiting for any response, while the second will block the
 process and wait for the response.

The aim is to add a new method which will return, without blocking the
process, an object (let's call it a Future) which will provide some basic
methods to wait for and get a response at any time.

The benefit for Nova comes at a higher level:

1. When communicating between services it will not be necessary to block
the process, and this free time can be used to execute some other
computations.

   future = rpcapi.invoke_long_process()
  ... do something else here ...
   result = future.get_response()

2. We can use the benefit of all of the work previously done with the
Conductor, and so by updating the Objects and Indirection API
framework we should take advantage of async operations to the database.

MyObject = MyClassObject.get_async()
  ... do something else here ...
MyObject.wait()

MyObject.foo = bar
MyObject.save_async()
  ... do something else here ...
MyObject.wait()

All of this is just to illustrate and has to be discussed.

I guess the first work needs to come from Oslo Messaging, so the
question is to get the feeling here, and then from Nova since it will
be the primary consumer of this feature.

https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

Thanks,
s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-07 Thread Zhi Yan Liu
I'd really like this idea; async calls will definitely improve overall
performance for the cloud control system. In nova (and other components)
there are some slow, long-running tasks which handle resources,
which make new tasks get a huge delay before being served,
especially for highly concurrent requests (e.g. provisioning hundreds of
VMs) combined with high-latency resource-handling operations.

To really achieve the goal of improving overall system performance,
I think the biggest challenge is that all operations across
components must be asynchronous in the handling pipeline. A synchronous
operation in the call path will keep the workflow
synchronous: the system still needs to wait for that synchronous operation
to finish, so the delay/waiting stays there, and this kind
of synchronous operation is currently very common around the resource
handling case.

thanks,
zhiyan

On Thu, May 7, 2015 at 6:05 PM, ozamiatin ozamia...@mirantis.com wrote:
 Hi,

 I generally like the idea of an async CALL. Is there a place in Nova (or
 other services) where the new CALL could be applied to see an advantage?

 Thanks,
 Oleksii Zamiatin

 On 07.05.15 12:34, Sahid Orentino Ferdjaoui wrote:

 Hi,

 The primary point of this expected discussion around asynchronous
 communication is to optimize performance by reducing latency.

 For instance, the design used in Nova and probably other projects allows
 asynchronous operations to be performed in two ways.

 1. When communicating between services
 2. When communicating with the database

 1 and 2 are close since they use the same API but I prefer to keep a
 difference here since the high level layer is not the same.

  From the Oslo Messaging point of view, we currently have two methods to
 invoke an RPC:

    Cast and Call: The first one is not blocking and will invoke an RPC
  without waiting for any response, while the second will block the
  process and wait for the response.

 The aim is to add a new method which will return, without blocking the
 process, an object (let's call it a Future) which will provide some basic
 methods to wait for and get a response at any time.

 The benefit for Nova comes at a higher level:

 1. When communicating between services it will not be necessary to block
 the process, and this free time can be used to execute some other
 computations.

future = rpcapi.invoke_long_process()
   ... do something else here ...
result = future.get_response()

 2. We can use the benefice of all of the work previously done with the
 Conductor and so by updating the framework Objects and Indirection
 Api we should take advantage of async operations to the database.

 MyObject = MyClassObject.get_async()
   ... do something else here ...
 MyObject.wait()

 MyObject.foo = bar
 MyObject.save_async()
   ... do something else here ...
 MyObject.wait()

 All of this is to illustrate and have to be discussed.

 I guess the first job needs to come from Oslo Messaging so the
 question is to know the feeling here and then from Nova since it will
 be the primary consumer of this feature.

 https://blueprints.launchpad.net/nova/+spec/asynchronous-communication

 Thanks,
 s.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] upgrade_levels in nova upgrade

2015-05-07 Thread Rui Chen
Assuming my understanding is correct, 2 things make you sad in the upgrade
process.

1. must reconfig the 'upgrade_levels' in the config file during
post-upgrade.
2. must restart the service in order to make the option 'upgrade_level'
work.

I think the configuration management tools (e.g. chef, puppet) can solve
#1.
We can change the 'upgrade_level' option in config file after upgrading and
sync it to all the hosts conveniently.

#2 is more complex, fortunately there are some works to try to solve it,
[1] [2].
If all the OpenStack services can support SIGHUP, I think we just need to
trigger a SIGHUP to make the services reload the config file.

Correct me If I'm wrong, thanks.


[1]: https://blueprints.launchpad.net/glance/+spec/sighup-conf-reload
[2]: https://bugs.launchpad.net/oslo-incubator/+bug/1276694
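
As a rough sketch of that SIGHUP idea (an assumption, not existing nova
code; it presumes oslo.config's reload_config_files() is available, which
is what the reload work referenced above is about):

    # Re-read nova.conf on SIGHUP so an updated [upgrade_levels] section can
    # take effect without a full service restart.
    import signal

    from oslo_config import cfg

    CONF = cfg.CONF


    def _reload_config(signum, frame):
        # Re-parses the config files passed to CONF(...) at startup; the
        # service would then rebuild its RPC clients with the new version cap.
        CONF.reload_config_files()


    signal.signal(signal.SIGHUP, _reload_config)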



2015-05-07 16:09 GMT+08:00 Guo, Ruijing ruijing@intel.com:

  Hi, All,



 In existing design, we need to reconfig nova.conf and restart nova
 service during post-upgrade cleanup

 As https://www.rdoproject.org/Upgrading_RDO_To_Icehouse:



 I propose to send RPC message to remove RPC API version pin.





 1.   Stop services  (same with existing)

 2.   Upgrade packages (same with existing)

 3.   Upgrade DB schema (same with existint)

 4.   Start service with upgrade  (add upgrade parameter so that nova
 will use old version of RPC API. We may add more parameter for other
 purpose including query upgrade progress)

 5.   Send RPC message to remove RPC API version pin. (we don’t need
 to reconfig nova.conf and restart nova service)



 What do you think?



 Thanks,

 -Ruijing





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Liberty mid-cycle meetup

2015-05-07 Thread Michael Still
As discussed at the Nova meeting this morning, we'd like to gauge
interest in a mid-cycle meetup for the Liberty release.

To that end, I've created the following eventbrite event like we have
had for previous meetups. If you sign up, you're expressing interest
in the event and if we decide there's enough interest to go ahead we
will email you and let you know it's safe to book travel and that
your ticket is now a real thing.

To save you a few clicks, the proposed details are 21 July to 23 July,
at IBM in Rochester, MN.

So, I'd appreciate it if people could take a look at:


https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546

Thanks,
Michael

PS: I haven't added this to the wiki list of sprints because it might
not happen. When the decision is final, I'll add it to the wiki if we
decide to go ahead.

-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-07 Thread David Medberry
Josh,

Certainly in our case the monitor hosts (in addition to IPs) would have
made a difference.
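
For illustration, a minimal sketch of that kind of merge (this is not the
actual Cinder RBD driver code; Josh describes the idea below):

    # Merge the monitor addresses from the live monmap with the mon_host
    # entries in ceph.conf, so stale IPs are less likely to end up in the
    # stored connection_info when monitors are swapped or replaced.
    try:
        import configparser                      # Python 3
    except ImportError:
        import ConfigParser as configparser      # Python 2


    def monitor_addresses(monmap_ips, ceph_conf='/etc/ceph/ceph.conf'):
        parser = configparser.ConfigParser()
        parser.read(ceph_conf)
        conf_hosts = []
        # ceph.conf may also spell the key "mon host"; kept simple here.
        if parser.has_option('global', 'mon_host'):
            conf_hosts = [h.strip() for h in
                          parser.get('global', 'mon_host').split(',')]
        # Keep order, drop duplicates; hostnames/aliases from ceph.conf stay
        # as-is so DNS can track monitor replacements.
        seen, merged = set(), []
        for host in list(monmap_ips) + conf_hosts:
            if host not in seen:
                seen.add(host)
                merged.append(host)
        return merged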

On Thu, May 7, 2015 at 3:21 PM, Josh Durgin jdur...@redhat.com wrote:

 Hey folks, thanks for filing a bug for this:

 https://bugs.launchpad.net/cinder/+bug/1452641

 Nova stores the volume connection info in its db, so updating that
 would be a workaround to allow restart/migration of vms to work.
 Otherwise running vms shouldn't be affected, since they'll notice any
 new or deleted monitors through their existing connection to the
 monitor cluster.

 Perhaps the most general way to fix this would be for cinder to return
 any monitor hosts listed in ceph.conf (as they are listed, so they may
 be hostnames or ips) in addition to the ips from the current monmap
 (the current behavior).

 That way an out of date ceph.conf is less likely to cause problems,
 and multiple clusters could still be used with the same nova node.

 Josh

 On 05/06/2015 12:46 PM, David Medberry wrote:

 Hi Arne,

 We've had this EXACT same issue.

 I don't know of a way to force an update as you are basically pulling
 the rug out from under a running instance. I don't know if it is
 possible/feasible to update the virsh xml in place and then migrate to
 get it to actually use that data. (I think we tried that to no avail.)
 dumpxml=massage cephmons=import xml

 If you find a way, let me know, and that's part of the reason I'm
 replying so that I stay on this thread. NOTE: We did this on icehouse.
 Haven't tried since upgrading to Juno but I don't note any change
 therein that would mitigate this. So I'm guessing Liberty/post-Liberty
 for a real fix.



 On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck arne.wieba...@cern.ch
 mailto:arne.wieba...@cern.ch wrote:

 Hi,

 As we swapped a fraction of our Ceph mon servers between the
 pre-production and production cluster
 -- something we considered to be transparent as the Ceph config
 points to the mon alias--, we ended
 up in a situation where VMs with volumes attached were not able to
 boot (with a probability that matched
 the fraction of the servers moved between the Ceph instances).

 We found that the reason for this is the connection_info in
 block_device_mapping which contains the
 IP adresses of the mon servers as extracted by the rbd driver in
 initialize_connection() at the moment
 when the connection is established. From what we see, however, this
 information is not updated as long
 as the connection exists, and will hence be re-applied without
 checking even when the XML is recreated.

 The idea to extract the mon servers by IP from the mon map was
 probably to get all mon servers (rather
 than only one from a load-balancer or an alias), but while our
 current scenario may be special, we will face
 a similar problem the day the Ceph mons need to be replaced. And
 that makes it a more general issue.

 For our current problem:
 Is there a user-transparent way to force an update of that
 connection information? (Apart from fiddling
 with the database entries, of course.)

 For the general issue:
 Would it be possible to simply use the information from the
 ceph.conf file directly (an alias in our case)
 throughout the whole stack to avoid hard-coding IPs that will be
 obsolete one day?

 Thanks!
   Arne

 --
 Arne Wiebalck
 CERN IT

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Upstream QA test plans

2015-05-07 Thread Adam Young
Yes, we have Tempest and unit tests and Functional tests.  But still we 
need test plans.


Keystone often plays a role deep in other projects' workflows. Often we have 
complicated features that span multiple services: WebSSO, Trusts and 
Heat, EC2 Credentials...


Before we can automate the tests, we need to know what to test.  And 
that is a conversation best started by the developers implementing the 
feature, and then handed off to the engineer that needs to make sure the 
feature actually works, positive, negative, and edge.


I'm sure that we have many organizations with internal QA that can 
benefit from test plans that show how to exercise and test these 
features.  As part of getting comfortable working up stream, my team has 
written the start of a few test plans.  I'd like to share them with the 
larger Keystone consuming community;



https://etherpad.openstack.org/p/ldap-backend-non-default-domain
https://etherpad.openstack.org/p/hierarchical-projects
https://etherpad.openstack.org/p/keystone-token-scoping

As you can guess, my goal is not purely altruistic:  I'd like to get 
other people both to give input on these test cases and to write some for 
other aspects of Keystone.  Other ones we've identified:


CADF Notifications Everywhere
Allow Redelegation via Trusts
mapping to existing user in federated workflow

I think Etherpad is the right medium for this.  I was originally 
thinking the Spec process, but approving test plans seems too 
centralized: we want multiple engineers contributing to the test 
plans.  They should be collaborative documents as people add suggestions 
of what to test.


Eventually, these should be automated, but before we can automate, we 
need to identify what to test.  Getting just that down is incredibly 
valuable. We can be more rigorous on the review process when we automate.


I am actively seeking suggestions.  How should we best organize this effort?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-07 Thread Alex Meade
It appears that the release of taskflow 0.10.0 exposed an issue in the
NetApp NFS drivers. Something changed that caused the sqlalchemy Volume
object to be garbage collected even though it is passed into create_volume()

An example error can be found in the c-vol logs here:

http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/

One way to get around whatever the issue is would be to change the drivers
to not update the object directly as it is not needed. But this should not
fail. Perhaps a more proper fix is for the volume manager to not pass
around sqlalchemy objects.

Something changed in taskflow, however, and we should just understand if
that has other impact.

-Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-07 Thread Li Tianqing
Sorry, I thought he was not.
I cannot reach him through those three emails. Can a PTL be unreachable for
a long time?
Email: slick...@gmail.com 
 nikhil.mancha...@hp.com 
 nik...@manchanda.me





 



--

Best
Li Tianqing

At 2015-05-08 10:34:36, James Polley j...@jamezpolley.com wrote:

A quick google for Openstack projects pointed me at 
https://wiki.openstack.org/wiki/Project_Teams, which has a link to the 
complete, up-to-date list of projects at 
http://governance.openstack.org/reference/projects/index.html, which pointed me 
to http://governance.openstack.org/reference/projects/trove.html, which tells 
me that the PTL is Nikhil Manchanda (SlickNik)


On Fri, May 8, 2015 at 12:26 PM, Li Tianqing jaze...@163.com wrote:

Hello,
 Who is the ptl of trove? 





--

Best
Li Tianqing



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-07 Thread Alex Meade
So it seems that this will break a number of drivers, I see that glusterfs
does the same thing.

On Thu, May 7, 2015 at 10:29 PM, Alex Meade mr.alex.me...@gmail.com wrote:

 It appears that the release of taskflow 0.10.0 exposed an issue in the
 NetApp NFS drivers. Something changed that caused the sqlalchemy Volume
 object to be garbage collected even though it is passed into create_volume()

 An example error can be found in the c-vol logs here:


 http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/

 One way to get around whatever the issue is would be to change the drivers
 to not update the object directly as it is not needed. But this should not
 fail. Perhaps a more proper fix is for the volume manager to not pass
 around sqlalchemy objects.

 Something changed in taskflow, however, and we should just understand if
 that has other impact.

 -Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-07 Thread Joshua Harlow
Alright, it was as I suspected: a small bug in the new 
algorithm that makes the storage layer do 
copy-original, mutate-copy, save-copy, update-original (vs 
update-original, save-original) in order to be more reliable.


https://bugs.launchpad.net/taskflow/+bug/1452978 opened and a one line 
fix made @ https://review.openstack.org/#/c/181288/ to stop trying to 
copy task results (which was activating logic that must have caused the 
reference to drop out of existence and therefore the issue noted below).


Will get that released in 0.10.1 once it flushes through the pipeline.

Thanks Alex for helping double check; if others want to check too that'd 
be nice, so we can make sure that's the root cause (overzealous usage of 
copy.copy, ha).


Overall I'd still *highly* recommend that the following still happen:

 One way to get around whatever the issue is would be to change the
 drivers to not update the object directly as it is not needed. But
 this should not fail. Perhaps a more proper fix is for the volume
 manager to not pass around sqlalchemy objects.

But that can be a later tweak that cinder does; using any taskflow 
engine that isn't the greenthreaded/threaded/serial engine will require 
results to be serializable, and therefore copyable, so that those 
results can go across IPC or MQ/other boundaries. Sqlalchemy objects 
won't fit either of these cases (obviously).


-Josh
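
As a rough sketch of that suggested direction (an assumption about how it
could look, not Cinder's actual manager code): pass only serializable
primitives into the flow and let the task re-load or return plain values.

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class UpdateProviderLocation(task.Task):
        default_provides = 'provider_location'

        def execute(self, volume_id):
            # A real task would re-fetch the volume row from the DB by
            # volume_id; returning a plain string keeps the result copyable
            # and serializable for any engine or persistence backend.
            return 'nfs-share-for-%s' % volume_id


    flow = linear_flow.Flow('create_volume').add(UpdateProviderLocation())
    result = engines.run(flow, store={'volume_id': 'vol-0001'})
    print(result['provider_location'])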

Joshua Harlow wrote:

Are we sure this is taskflow? I'm wondering since those errors are more
from task code (which is in cinder) and the following seems to be a
general garbage collection issue (not connected to taskflow?):

'Exception during message handling: Can't emit change event for
attribute 'Volume.provider_location' - parent object of type Volume
has been garbage collected.'''

Or:

'''2015-05-07 22:42:51.142 17040 TRACE oslo_messaging.rpc.dispatcher
ObjectDereferencedError: Can't emit change event for attribute
'Volume.provider_location' - parent object of type Volume has been
garbage collected.'''

Alex Meade wrote:

So it seems that this will break a number of drivers, I see that
glusterfs does the same thing.

On Thu, May 7, 2015 at 10:29 PM, Alex Meade mr.alex.me...@gmail.com
mailto:mr.alex.me...@gmail.com wrote:

It appears that the release of taskflow 0.10.0 exposed an issue in
the NetApp NFS drivers. Something changed that caused the sqlalchemy
Volume object to be garbage collected even though it is passed into
create_volume()

An example error can be found in the c-vol logs here:

http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/


One way to get around whatever the issue is would be to change the
drivers to not update the object directly as it is not needed. But
this should not fail. Perhaps a more proper fix is for the volume
manager to not pass around sqlalchemy objects.


+1



Something changed in taskflow, however, and we should just
understand if that has other impact.


I'd like to understand that also: the only commit that touched this
stuff is https://github.com/openstack/taskflow/commit/227cf52 (which
basically ensured that a storage object copy is modified, then saved,
then the local object is updated vs updating the local object, and then
saving, which has problems/inconsistencies if the save fails).



-Alex


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Clay Gerrard
On Thu, May 7, 2015 at 5:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 what sort of limitations have you discovered that had to do specifically
 with the fact we're using Python?


Python is great.  Conscious decision to optimize for developer wall time
over cpu cycles has made it a great language for 20 years - and probably
will for another 20 at *least* (IMHO).

I don't think you would pick out anything to point at as a limitation of
python that you couldn't point at any dynamic interpreted language, but my
list is something like this:

   - Dynamic Interpreted Runtime overhead
   - Eventlet non-blocking hub is NOT OK for blocking operations (cpu, disk)
   - OTOH, dispatch to threads has overhead AND GIL
   - non-byte-aligned buffers restricts access to O_DIRECT and asyncio

*So often* this kinda stuff just doesn't matter.  Or even lots of times
even when it *does* matter - it doesn't matter that much in the grand
scheme of things.  Or maybe it matters a non-trivial amount, *but* there are
still other things that just matter more *right now*.  I think Swift has
been in that last case for a long time, maybe we still are - great thing
about open-source is redbo can publish an experiment on a feature branch in
gerrit and in-between the hard work of testing it - we can pontificate
about it on the mailing list!  ;)

FWIW, I don't think anyone should find it particularly surprising that a
mature data-path project would naturally gravitate closer to the metal in
the critical paths - it shouldn't be a big deal - unless it all works out -
and it's $%^! tons faster - then BOOYAH! ;)

But I'd suggest you be very careful not to draw any assumptions in general
about a great language like python even if this one time this one project
thought maybe they should find out if some part of the distributed system
might be better by some measure in something not-python.  ;)

Cheers,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-05-07 Thread Andrew Beekhof

 On 5 May 2015, at 7:52 pm, Bogdan Dobrelya bdobre...@mirantis.com wrote:
 
 On 05.05.2015 04:32, Andrew Beekhof wrote:
 
 
 [snip]
 
 
 Technically it calculates an ordered graph of actions that need to be 
 performed for a set of related resources.
 You can see an example of the kinds of graphs it produces at:
 
   
 http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/s-config-testing-changes.html
 
 There is a more complex one which includes promotion and demotion on the 
 next page.
 
 The number of actions that can run at any one time is therefor limited by
 - the value of batch-limit (the total number of in-flight actions)
 - the number of resources that do not have ordering constraints between them 
 (eg. rsc{1,2,3} in the above example)  
 
 So in the above example, if batch-limit = 3, the monitor_0 actions will 
 still all execute in parallel.
 If batch-limit == 2, one of them will be deferred until the others complete.
 
 Processing of the graph stops the moment any action returns a value that was 
 not expected.
 If that happens, we wait for currently in-flight actions to complete, 
 re-calculate a new graph based on the new information and start again.
 
 
 First we do a non-recurring monitor (*_monitor_0) to check what state the 
 resource is in.
 We can’t assume its off because a) we might have crashed, b) the admin might 
 have accidentally configured it to start at boot or c) the admin may have 
 asked us to re-check everything.
 
 
 Also important to know, the order of actions is:

I should clarify something here:

   s/actions is/actions for each resource is/

 
 1. any necessary demotions
 2. any necessary stops
 3. any necessary starts
 4. any necessary promotions
 
 
 
 Thank you for explaining this, Andrew!
 
 So, in the context of the given two example DB(MySQL) and
 messaging(RabbitMQ) resources:
 
 The problem is that pacemaker can only promote a resource after it
 detects the resource is started. During a full reassemble, in the first
 transition batch, pacemaker starts all the resources including MySQL and
 RabbitMQ. Pacemaker issues resource agent start invocation in parallel
 and reaps the results.
 For a multi-state resource agent like RabbitMQ, pacemaker needs the
 start result reported in the first batch, then transition engine and
 policy engine decide if it has to retry starting or promote, and put
 this new transition job into a new batch.
 
 So, for given example, it looks like we currently have:
 _batch start_
 ...
 3. DB, messaging resources start in a one batch

Since there is no dependency between them, yes.

 4. messaging resource promote blocked by the step 3 completion
 _batch end_

Not quite, I wasn’t as clear as I could have been in my previous email.

We wont promote Rabbit instances until all they have all been started.
However we don’t need to wait for all the DBs to finish starting (again, 
because there is no dependency between them) before we begin promoting Rabbit.

So a single transition that did this is totally possible:

t0.  Begin transition
t1.  Rabbit start node1(begin)
t2.  DB start node 3   (begin)
t3.  DB start node 2   (begin)
t4.  Rabbit start node2(begin)
t5.  Rabbit start node3(begin)
t6.  DB start node 1   (begin)
t7.  Rabbit start node2(complete)
t8.  Rabbit start node1(complete)
t9.  DB start node 3   (complete)
t10. Rabbit start node3(complete)
t11. Rabbit promote node 1 (begin)
t12. Rabbit promote node 3 (begin)
t13. Rabbit promote node 2 (begin)
... etc etc ...

For something like cinder however, these are some of the dependencies we define:

pcs constraint order start keystone-clone then cinder-api-clone
pcs constraint order start cinder-api-clone then cinder-scheduler-clone
pcs constraint order start galera-master then keystone-clone

So first all the galera instances must be started. Then we can begin to promote 
some.
Once all the promotions complete, then we can start the keystone instances.
Once all the keystone instances are up, then we can bring up the cinder API 
instances, which allows us to start the scheduler, etc etc.

And assuming nothing fails, this can all happen in one transition.

Bottom line: Pacemaker will do as much as it can as soon as it can.  
The only restrictions are ordering constraints you specify, the batch-limit, 
and each master/slave (or clone) resource’s _internal_ 
demote-stop-start-promote ordering.

Am I making it better or worse?

 
 Does this mean what an artificial constraints ordering between DB and
 messaging could help them to get into the separate transition batches, like:
 
 ...
 3. messaging multistate clone resource start
 4. messaging multistate clone resource promote
 _batch end_
 
 _next batch start_
 ...
 3. DB simple clone resource start
 
 ?
 
 -- 
 Best regards,
 Bogdan Dobrelya,
 Skype #bogdando_at_yahoo.com
 Irc #bogdando
 
 __
 

Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Huang Zhiteng
Well put, Clay.  Totally Agree.

On Fri, May 8, 2015 at 10:19 AM, Clay Gerrard clay.gerr...@gmail.com wrote:


 On Thu, May 7, 2015 at 5:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 what sort of limitations have you discovered that had to do specifically
 with the fact we're using Python?


 Python is great.  Conscious decision to optimize for developer wall time
 over cpu cycles has made it a great language for 20 years - and probably
 will for another 20 at *least* (IMHO).

 I don't think you would pick out anything to point at as a limitation of
 python that you couldn't point at any dynamic interpreted language, but my
 list is something like this:

 Dynamic Interpreted Runtime overhead
 Eventlet non-blocking hub is NOT OK for blocking operations (cpu, disk)
 OTOH, dispatch to threads has overhead AND GIL
 non-byte-aligned buffers restricts access to O_DIRECT and asyncio

 *So often* this kinda stuff just doesn't matter.  Or even lots of times even
 when it *does* matter - it doesn't matter that much in the grand scheme of
 things.  Or maybe it matters a non-trivial amount, *but* there's still other
 things that just mater more *right now*.  I think Swift has been in that
 last case for a long time, maybe we still are - great thing about
 open-source is redbo can publish an experiment on a feature branch in gerrit
 and in-between the hard work of testing it - we can pontificate about it on
 the mailing list!  ;)

 FWIW, I don't think anyone should find it particularly surprising that a
 mature data-path project would naturally gravitate closer to the metal in
 the critical paths - it shouldn't be a big deal - unless it all works out -
 and it's $%^! tons faster - then BOOYAH! ;)

 But I'd suggest you be very careful not to draw any assumptions in general
 about a great language like python even if this one time this one project
 thought maybe they should find out if some part of the distributed system
 might be better by some measure in something not-python.  ;)

 Cheers,

 -Clay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards
Huang Zhiteng

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.

2015-05-07 Thread Adam Young

On 05/06/2015 06:54 PM, Hu, David J (Converged Cloud) wrote:
david8hu One of the first thing we have to do is get all of our 
glossary straight :)  I am starting to hear about “capability”.  Are we 
talking about “rule” in oslo policy terms? Or “action” in nova policy 
terms? Or this is something new.  For example, 
“compute:create_instance” is a “rule” in oslo.policy enforce(…) 
definition,  “compute:create_instance” is an “action” in nova.policy 
enforce(…) definition.


By capability, I ( think I ) mean  Action in Nova terms, as I am trying 
to exclude the internal rules that policy lets you define. However, to 
further muddy the water, you can actually enforce on one of these 
rules. For example, the Keystone server enforces on admin_required 
for the V2 API.


The term capability has been thrown around a few times and I picked it 
up.  Really what I want to delineate is the point in the code at which  
policy gets enforced.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.

2015-05-07 Thread Dolph Mathews
On Thursday, May 7, 2015, Adam Young ayo...@redhat.com wrote:

  On 05/06/2015 06:54 PM, Hu, David J (Converged Cloud) wrote:

 david8hu One of the first thing we have to do is get all of our glossary
 straight :)  I am starting to hear about “capability”.  Are we talking
 about “rule” in oslo policy terms? Or “action” in nova policy terms? Or
 this is something new.  For example, “compute:create_instance” is a “rule”
 in oslo.policy enforce(…) definition,  “compute:create_instance” is an
 “action” in nova.policy enforce(…) definition.


 By capability, I ( think I ) mean  Action in Nova terms, as I am trying to
 exclude the internal rules that policy lets you define.  However, to
 further muddy the water, you can actually enforce on one of these rules./
 For example, the Keystone server enforces on admin_required  for the V2
 API.

 The term capability has been thrown around a few times and I picked it
 up.  Really what I want to delineate is the point in the code at which
 policy gets enforced.


I completely agree with Adam. Capabilities are the actions a user is
allowed to perform. But I'd rather talk in terms of authorized HTTP API
calls.

I've tossed around the idea of something like GET /capabilities where the
response included something analogous to a list of APIs where the user
(i.e. current token / authorization context) match the relevant policy
rules -- but implementing that concept in more than one service would be a
huge challenge.
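
Purely as an illustrative sketch (the endpoint and helper names are
assumptions, not an existing API), a service could in principle replay its
oslo.policy rules against the caller's credentials to answer such a request:

    # Return the names of the policy rules ("capabilities") that pass for
    # the given credentials; a real GET /capabilities handler would map
    # these back to the HTTP APIs they protect.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.load_rules()


    def capabilities(credentials, target=None):
        target = target or {}
        return sorted(rule for rule in enforcer.rules
                      if enforcer.enforce(rule, target, credentials))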
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-07 Thread Nikhil Manchanda
Hi Li:

Thanks for contacting me about Trove. I'm still the PTL for Trove for
the Liberty cycle.

Looking at my inbox, I see that I have an email from you from late May
5th. Because of the volume of emails I get in my inbox, please
understand that it might sometimes take me 2-3 days to respond to all
non-urgent questions about trove. In fact, I'd recommend that you send
any Trove related questions out to the OpenStack development mailing
list (this one). There are more Trove related folks on this list, and
there is a better chance of getting a quicker reply for queries
sent out to the mailing list.

Hope this helps,

Thanks,
Nikhil


On Thu, May 7, 2015 at 7:47 PM, Li Tianqing jaze...@163.com wrote:

 Thanks a lot


 --
 Best
 Li Tianqing

 At 2015-05-08 10:33:52, Steve Martinelli steve...@ca.ibm.com wrote:

 That information can be found here:
 http://governance.openstack.org/reference/projects/trove.html
 And a full list here:
 http://governance.openstack.org/reference/projects/index.html


 Thanks,

 Steve Martinelli
 OpenStack Keystone Core



 From:Li Tianqing jaze...@163.com
 To:openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Date:05/07/2015 10:31 PM
 Subject:[openstack-dev] [all] who is the ptl of trove?
 --



 Hello,
  Who is the ptl of trove?




 --
 Best
 Li Tianqing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Dan Prince
On Thu, 2015-05-07 at 12:15 +0300, marios wrote:
 On 07/05/15 05:32, Dan Prince wrote:
  Looking over some of the Puppet pacemaker stuff today. I appreciate all
  the hard work going into this effort but I'm not quite happy about all
  of the conditionals we are adding to our puppet overcloud_controller.pp
  manifest. Specifically it seems that every service will basically have
  its resources duplicated for pacemaker and non-pacemaker version of the
  controller by checking the $enable_pacemaker variable.
  
  After seeing it play out for a couple services I think I might prefer it
  better if we had an entirely separate template for the pacemaker
  version of the controller. One easy way to kick off this effort would be
  to use the Heat resource registry to enable pacemaker rather than a
  parameter.
  
  Something like this:
  
  https://review.openstack.org/#/c/180833/
 
 +1 I like this as an idea. Given we've already got quite a few reviews
 in flight making changes to overcloud_controller.pp (we're still working
 out how to, and enabling services) I'd be happier to let those land and
 have the tidy up once it settles (early next week at the latest) -
 especially since there's some working out+refactoring to do still,

My preference would be that we not go any further down the path of using
$enable_pacemaker in the overcloud_controller.pp template.

I don't think it would be that hard to convert existing reviews to use
the new file, would it? And removing the conditionals would just make it
read more cleanly too.

Dan

 
 thanks, marios
 
  
  If we were to split out the controller into two separate templates I
  think it might be appropriate to move a few things into puppet-tripleo
  to de-duplicate a bit. Things like the database creation for example.
  But probably not all of the services... because we are trying as much as
  possible to use the stackforge puppet modules directly (and not our own
  composition layer).
  
  I think this split is a good compromise and would probably even speed up
  the implementation of the remaining pacemaker features too. And removing
  all the pacemaker conditionals we have from the non-pacemaker version
  puts us back in a reasonably clean state I think.
  
  Dan
  
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] interaction between fuel-plugin and fuel-UI

2015-05-07 Thread Samuel Bartel
Hi all,



I am working on two plugins for fuel : logrotate and cinder-netapp (to add
multibackend feature)

In these two plugins I face the same problem: is it possible, in the
environment yaml config describing the fields to display for the plugin in
the UI, to have some dynamic elements?

Let me explain my need. I would like to be able to add additional elements by
clicking on a “+” button, as for the IP ranges in the network tab, in order
to be able to:

-add a new log file to manage for logrotate instead of having a static
list

-add extra netapp filers/volumes instead of being able to set up only one for
the cinder netapp in a multibackend scope.

For the cinder netapp, for example, I would then be able to access the netapp
server hostname with:

$::fuel_settings[‘cinder_netapp’][0][‘netapp_server_hostname’]  #for the
first one

$::fuel_settings[‘cinder_netapp’][1][‘netapp_server_hostname’]  #for the
second  one

And so on.



Can we do that with the current plugin feature? If not, is it planned to add
such a feature?



Regards,


Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] liberty summit cross-project session schedule published

2015-05-07 Thread Doug Hellmann
The cross-project session schedule is published. See the Tuesday sessions with 
names starting “Cross Project workshops” on 
https://libertydesignsummit.sched.org/overview/type/design+summit 

If you are a moderator of one of these sessions, please contact me directly if 
you have a scheduling conflict such as a presentation at the conference. We did 
our best to spot those conflicts already, but we’ll try to accommodate if we 
missed one.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] liberty summit cross-project session schedule published

2015-05-07 Thread Thierry Carrez
Doug Hellmann wrote:
 The cross-project session schedule is published. See the Tuesday sessions 
 with names starting “Cross Project workshops on 
 https://libertydesignsummit.sched.org/overview/type/design+summit 

Or use the right filter:

https://libertydesignsummit.sched.org/overview/type/design+summit/Cross+Project+workshops

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-05-07 Thread Neil Jerram
Is there a design for how ML2 mechanism drivers are supposed to cope 
with the Neutron server forking?


What I'm currently seeing, with api_workers = 2, is:

- my mechanism driver gets instantiated and initialized, and immediately 
kicks off some processing that involves communicating over the network


- the Neutron server process then forks into multiple copies

- multiple copies of my driver's network processing then continue, and 
interfere badly with each other :-)


I think what I should do is:

- wait until any forking has happened

- then decide (somehow) which mechanism driver is going to kick off that 
processing, and do that.


But how can a mechanism driver know when the Neutron server forking has 
happened?


Thanks,
Neil
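
One generic workaround, sketched here purely as an illustration of the
"wait until forking has happened, then decide" approach above (it is not an
established Neutron hook, and _start_network_processing stands in for the
driver's real setup): start the background work lazily and key it on the
current PID, so it only begins after any forking has happened and each
worker process decides independently whether it should own the work.

    import os
    import threading


    class LazyStartMixin(object):
        # In a real driver this would be mixed into the ML2 MechanismDriver.

        def initialize(self):
            # Runs before the server forks: do NOT start network I/O here.
            self._started_in_pid = None
            self._lock = threading.Lock()

        def _start_network_processing(self):
            pass   # placeholder for the driver's real networking setup

        def _ensure_background_started(self):
            with self._lock:
                if self._started_in_pid != os.getpid():
                    self._started_in_pid = os.getpid()
                    self._start_network_processing()

        def create_port_postcommit(self, context):
            self._ensure_background_started()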

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Jay Dobies



On 05/07/2015 06:01 AM, Giulio Fidente wrote:

On 05/07/2015 11:15 AM, marios wrote:

On 07/05/15 05:32, Dan Prince wrote:


[..]


Something like this:

https://review.openstack.org/#/c/180833/


+1 I like this as an idea. Given we've already got quite a few reviews
in flight making changes to overcloud_controller.pp (we're still working
out how to, and enabling services) I'd be happier to let those land and
have the tidy up once it settles (early next week at the latest) -
especially since there's some working out+refactoring to do still,


+1 on not block ongoing work

as of today a split would cause the two .pp to have a lot of duplicated
data, making them not better than one with the ifs IMHO


I'm with Giulio here. I'm not as strong on my puppet as everyone else, 
but I don't see the current approach as duplication, it's just passing 
in different configurations.



we should probably move out of the existing .pp the duplicated parts
first (see my other email on the matter)


My bigger concern is Tuskar. It has the ability to set parameters. It 
hasn't moved to a model where you're configuring the overcloud through 
selecting entries in the resource registry. I can see that making sense 
in the future, but that's going to require API changes.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Dan Prince
On Thu, 2015-05-07 at 11:56 +0200, Jiří Stránský wrote:
 Hi Dan,
 
 On 7.5.2015 04:32, Dan Prince wrote:
  Looking over some of the Puppet pacemaker stuff today. I appreciate all
  the hard work going into this effort but I'm not quite happy about all
  of the conditionals we are adding to our puppet overcloud_controller.pp
  manifest. Specifically it seems that every service will basically have
  its resources duplicated for pacemaker and non-pacemaker version of the
  controller by checking the $enable_pacemaker variable.
 
 +1
 
 
  After seeing it play out for a couple services I think I might prefer it
  better if we had an entirely separate template for the pacemaker
  version of the controller. One easy way to kick off this effort would be
  to use the Heat resource registry to enable pacemaker rather than a
  parameter.
 
  Something like this:
 
  https://review.openstack.org/#/c/180833/
 
 I have two mild concerns about this approach:
 
 1) We'd duplicate the logic (or at least the inclusion logic) for the 
 common parts in two places, making it prone for the two .pp variants to 
 get out of sync. The default switches from if i want to make a 
 difference between the two variants, i need to put in a conditional to 
 if i want to *not* make a difference between the two variants, i need 
 to put this / include this in two places.

The goal for these manifests is that we would just be doing 'include's
for various stackforge puppet modules. If we have
'include ::glance::api' in two places that doesn't really bother me.
Agree that it isn't ideal but I don't think it bothers me too much. And
the benefit is we can get rid of pacemaker conditionals for all the
things.

 
 2) If we see some other bit emerging in the future, which would be 
 optional but at the same time omnipresent in a similar way as 
 Pacemaker is, we'll see the same if/else pattern popping up. Using the 
 same solution would mean we'd have 4 .pp files (a 2x2 matrix) doing the 
 same thing to cover all scenarios. This is a somewhat hypothetical 
 concern at this point, but it might become real in the future (?).

Sure. It could happen. But again maintaining all of those in a single
file could be quite a mess too. And if we are striving to set all of our
Hiera data in Heat (avoiding use of some of the puppet functions we now
make use of like split, etc) this would further de-duplicate it I think.

Again having duplication that includes just the raw puppet classes
doesn't bother me too much.

 
 
  If we were to split out the controller into two separate templates I
  think it might be appropriate to move a few things into puppet-tripleo
  to de-duplicate a bit. Things like the database creation for example.
  But probably not all of the services... because we are trying as much as
  possible to use the stackforge puppet modules directly (and not our own
  composition layer).
 
 I think our restraint from having a composition layer (extracting things 
 into puppet-tripleo) is what's behind my concern no. 1 above. I know one 
 of the arguments against having a composition layer is that it makes 
 things less hackable, but if we could amend puppet modules without 
 rebuilding or altering the image, it should mitigate the problem a bit 
 [1]. (It's almost a matter that would deserve a separate thread though :) )
 
 
  I think this split is a good compromise and would probably even speed up
  the implementation of the remaining pacemaker features too. And removing
  all the pacemaker conditionals we have from the non-pacemaker version
  puts us back in a reasonably clean state I think.
 
  Dan
 
 
 An alternative approach could be something like:
 
 if hiera('step') >= 2 {
  include ::tripleo::mongodb
 }
 
 and move all the mongodb related logic to that class and let it deal 
 with both pacemaker and non-pacemaker use cases. This would reduce the 
 stress on the top-level .pp significantly, and we'd keep things 
 contained in logical units. The extracted bits will still have 
 conditionals but it's going to be more manageable because the bits will 
 be a lot smaller. So this would mean splitting up the manifest per 
 service rather than based on pacemaker on/off status. This would require 
 more extraction into puppet-tripleo though, so it kinda goes against the 
 idea of not having a composition layer. It would also probably consume a 
 bit more time to implement initially and be more disruptive to the 
 current state of things.
 
 At this point i don't lean strongly towards one or the other solution, i 
 just want us to have an option to discuss and consider benefits and 
 drawbacks of both, so that we can take an informed decision. I think i 
 need to let this sink in a bit more myself.
 
 
 Cheers
 
 Jirka
 
 [1] https://review.openstack.org/#/c/179177/
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Dan Prince
On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:
 hi Dan!
 
 On 05/07/2015 04:32 AM, Dan Prince wrote:
  Looking over some of the Puppet pacemaker stuff today. I appreciate all
  the hard work going into this effort but I'm not quite happy about all
  of the conditionals we are adding to our puppet overcloud_controller.pp
  manifest. Specifically it seems that every service will basically have
  its resources duplicated for pacemaker and non-pacemaker version of the
  controller by checking the $enable_pacemaker variable.
 
 not sure about the meaning of 'resources duplicated' but I think it is 
 safe to say that the pacemaker ifs are there coping mainly with the 
 following two:
 
 1. when pacemaker, we don't want puppet to enable/start the service, 
 pacemaker will manage so we need to tell the module not to
 
 2. when pacemaker, there are some pacemaker related steps to be 
 performed, like adding a resource into the cluster so that it is 
 effectively monitoring the service status
 
 in the future, we might need to pass some specific config params to a 
 module only when pacemaker, but that looks like covered by 1) already
 
  After seeing it play out for a couple services I think I might prefer it
  better if we had an entirely separate template for the pacemaker
  version of the controller. One easy way to kick off this effort would be
  to use the Heat resource registry to enable pacemaker rather than a
  parameter.
 
  Something like this:
 
  https://review.openstack.org/#/c/180833/
 
  If we were to split out the controller into two separate templates I
  think it might be appropriate to move a few things into puppet-tripleo
  to de-duplicate a bit. Things like the database creation for example.
  But probably not all of the services... because we are trying as much as
  possible to use the stackforge puppet modules directly (and not our own
  composition layer).
 
 I think the change is good, I am assuming we don't want the shared parts 
 to get duplicated into the two .pp though.

So again. Duplicating the puppet class includes doesn't bother me too
much. Some of the logic (perhaps the DB creation) should move over to
puppet-tripleo however. But I would like to see us not go crazy with the
composition layer... using the stackforge/puppet modules directly is
best I think.


Any conversion code in Puppet (functions using split, downcase, etc) I
view as technical debt which should ideally we would eventually be able
to convert within the Heat templates themselves into formats usable by
Hiera directly. Any duplication around that sort of thing would
eventually get cleaned up as Heat gets an extra function or two.

Dan

 
 What is your idea about those shared parts? To move them into 
 puppet-tripleo? To provision a shared .pp in addition to a 
 differentiated top-level template maybe? Something else?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread marios
On 07/05/15 16:34, Dan Prince wrote:
 On Thu, 2015-05-07 at 12:15 +0300, marios wrote:
 On 07/05/15 05:32, Dan Prince wrote:
 Looking over some of the Puppet pacemaker stuff today. I appreciate all
 the hard work going into this effort but I'm not quite happy about all
 of the conditionals we are adding to our puppet overcloud_controller.pp
 manifest. Specifically it seems that every service will basically have
 its resources duplicated for pacemaker and non-pacemaker version of the
 controller by checking the $enable_pacemaker variable.

 After seeing it play out for a couple services I think I might prefer it
 better if we had an entirely separate template for the pacemaker
 version of the controller. One easy way to kick off this effort would be
 to use the Heat resource registry to enable pacemaker rather than a
 parameter.

 Something like this:

 https://review.openstack.org/#/c/180833/

 +1 I like this as an idea. Given we've already got quite a few reviews
 in flight making changes to overcloud_controller.pp (we're still working
 out how to, and enabling services) I'd be happier to let those land and
 have the tidy up once it settles (early next week at the latest) -
 especially since there's some working out+refactoring to do still,
 
 My preference would be that we not go any further down the path of using
 $enable_pacemaker in the overcloud_controller.pp template.
 
 I don't think it would be that hard to convert existing reviews to use
 the new file would it? And removing the conditionals would just make it
 read more cleanly too.

something like this should do it?:

 https://review.openstack.org/#/c/181015/1

I rebased onto yours and moved the enable_pacemaker stuff. If this is
what we want to do then I can rebase my other two dependent patches too
and do the same,

marios

 
 Dan
 

 thanks, marios


 If we were to split out the controller into two separate templates I
 think it might be appropriate to move a few things into puppet-tripleo
 to de-duplicate a bit. Things like the database creation for example.
 But probably not all of the services... because we are trying as much as
 possible to use the stackforge puppet modules directly (and not our own
 composition layer).

 I think this split is a good compromise and would probably even speed up
 the implementation of the remaining pacemaker features too. And removing
 all the pacemaker conditionals we have from the non-pacemaker version
 puts us back in a reasonably clean state I think.

 Dan


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-07 Thread Thomas Goirand



On 05/05/2015 09:56 PM, Mike Bayer wrote:

Having two packages that both install into the same name is the least
ideal arrangement


From your point of view, and for testing against both, certainly. But 
for a distribution, avoiding having 2 packages clashing with each other and 
deciding on only a single implementation of the same API is so much 
better in many ways. This avoids the duplication of work and security 
support, and above all it makes it possible for every reverse 
dependency to just use the new implementation without doing anything.



and I don't see why we have to settle for a mediocre
outcome like that.  What we want is MySQL-Python to be maintained, we
have a maintainer, we have the code, we have everything we need, except
a password. We should at least make an attempt at that outcome.


A fork is often the worst thing that can happen to a project. See the 
examples of libav vs ffmpeg, libreoffice vs openoffice, or mysql vs 
mariadb. In the end, end users and developers all suffer. The only thing 
we can do is pick up the implementation which we believe is best for us. 
And in this case, it looks like mysqlclient has python3 support, which 
we want as a feature.


If you believe you can make it so that either:
#1 mysql-python can get Python 3 support.
#2 both forks are re-merged, and maintained as one.

then that's the best possible outcome (especially #2).

Whatever happens, talking to both upstream seems a very good idea to me.

However, it may not be possible to revert what has (or is about to) 
happen in Debian, as this is the decision of the package maintainer. I 
don't think it would be a good idea to go up to the Debian technical 
committee if the maintainer of the python-mysqldb package doesn't do 
something we like. The only other option we'd have would be to 
re-introduce mysql-python as a separate package, but the Debian FTP 
masters may oppose and reject it, unless we have a very good 
reason to do so (and at this point, I don't know if we do...).


Hoping the above helps with Debian insights,
Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-07 Thread Ian Cordasco


On 5/7/15, 14:43, Jay Reslock jresl...@gmail.com wrote:

Hi,
This is my first mail to the group.  I hope I set the subject correctly
and that this hasn't been asked already.  I searched archives and did not
see this question asked or answered previously.


I am working on a client thing that uses the python-keystoneclient and
python-heatclient api bindings to set up an authenticated session and
then use that session to talk to the heat service.  This doesn't work for
heat but does work for other services
 such as nova and sahara.  Is this because sessions aren't supported in
the heatclient api yet?


sample code:


https://gist.github.com/jreslock/a525abdcce53ca0492a7



I'm using fabric to define tasks so I can call them via another tool.
When I run the task I get:


TypeError: Client() takes at least 1 argument (0 given)



The documentation does not say anything about being able to pass session
to the heatclient but the others seem to work.  I just want to know if
this is intended/expected behavior or not.


-Jason



Hey Jason,

Welcome to the list. There's a critical difference between the
Clients that you import. In the rest of your examples you import from
foo.v2.client. In this case, you're importing heatclient.client. Since
heat's API is versioned, heatclient.client.Client is expecting a string
like 'v1' to be passed like

Client(version='v1', session=sess)

Alternatively, you can just do

from heatclient.v1 import client as heat_v1
heat = heat_v1.Client(session=sess)

Note, there is no v2 in heatclient.
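
For completeness, here is a minimal end-to-end sketch of that second
form. Treat it as a sketch only: the auth URL and credentials are
placeholders, and it assumes the Kilo-era keystoneclient session API.

    from keystoneclient.auth.identity import v2 as ks_identity
    from keystoneclient import session as ks_session
    from heatclient.v1 import client as heat_v1

    # Build an auth plugin and wrap it in a session; the session then
    # handles the token and the endpoint lookup for the heat client.
    auth = ks_identity.Password(auth_url='http://controller:5000/v2.0',
                                username='demo',
                                password='secret',
                                tenant_name='demo')
    sess = ks_session.Session(auth=auth)

    heat = heat_v1.Client(session=sess)
    for stack in heat.stacks.list():
        print(stack.stack_name)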

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Juno is completely broken in Trusty + Linux 3.19!

2015-05-07 Thread Martinx - ジェームズ
On Thu, May 7, 2015 at 4:26 PM Martinx - ジェームズ thiagocmarti...@gmail.com
wrote:

 Guys,

  I just upgraded my Trusty servers, that I'm running OpenStack Juno, to
 Linux 3.19, which is already available at Proposed repository.

  OpenStack is dead here, no connectivity for the tenants.

  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1452868

  I appreciate any help!

  It works okay with Linux 3.19.

 Best,
 Thiago


First things first.

I'm so sorry about this crossposting, I'll not do it again. I'm tired and
I'm not feeling well today, huge headache... This is the last time I
crosspost, my apologies.

Secondly, only after a complete power cycle is Juno with Trusty + Linux 3.19
working again. Tenants have connectivity again.

 So far, the only error that I'm still seeing is the following:

---
== /var/log/neutron/ovs-cleanup.log ==
2015-05-07 13:30:57.384 881 TRACE neutron Command: ['sudo',
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link',
'delete', 'tap3da8f4fe-55']
2015-05-07 13:30:57.384 881 TRACE neutron Exit code: 2
2015-05-07 13:30:57.384 881 TRACE neutron Stdout: ''
2015-05-07 13:30:57.384 881 TRACE neutron Stderr: 'RTNETLINK answers:
Operation not supported\n'
---

 I'll clean up everything, bridges and namespaces, to see if the problem
persists. I saw this problem in a lab as well; I'll reset the databases and
tenants to start over again with Linux 3.19 from the beginning.

 Again, forgive me for the crossposting and the buzz.

Best Regards,
Thiago
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Chen CH Ji
No, I only wanted to confirm whether the Cinder folks are already doing this,
or whether there are existing tricks that can be used, before submitting the
change ... thanks

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date:   05/07/2015 10:12 PM
Subject:Re: [openstack-dev] [cinder][nova] Question on Cinder client
exception handling





On 5/6/2015 7:02 AM, Chen CH Ji wrote:
 Hi,
 In order to work on [1], nova needs to know what kinds of
 exceptions are raised when using cinderclient so that it can handle
 them the way [2] does.
 That way we don't need to distinguish the error
 cases based on string comparison, which is more accurate and less
 error prone.
 Is anyone doing this, or is there any other method I can use to
 catch cinder-specific exceptions in nova? Thanks


 [1] https://bugs.launchpad.net/nova/+bug/1450658
 [2]

https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64


 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
 District, Beijing 100193, PRC



__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Is there anything preventing us from adding a more specific exception to
cinderclient and then once that's in and released, we can pin the
minimum version of cinderclient in global-requirements so nova can
safely use it?
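
For illustration only, here is a rough sketch of what the nova side
could then look like (the exception and helper names are illustrative,
not a statement about what cinderclient exposes today):

    from cinderclient import exceptions as cinder_exception

    from nova import exception

    def translate_volume_exception(method):
        # Sketch: map typed cinderclient errors to nova exceptions.
        def wrapper(self, ctx, volume_id, *args, **kwargs):
            try:
                return method(self, ctx, volume_id, *args, **kwargs)
            except cinder_exception.NotFound:
                raise exception.VolumeNotFound(volume_id=volume_id)
            except cinder_exception.OverLimit:
                # A typed exception means no string matching on the
                # error message to detect the over-quota case.
                raise exception.OverQuota(overs='volumes')
        return wrapper

With something like that in place, the caller never has to guess from
the exception text which error cinder actually hit.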

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread David Medberry
On Thu, May 7, 2015 at 2:10 PM, Chuck Thier cth...@gmail.com wrote:

 What started as a simple experiment by Mike Barton has turned into quite
 a significant improvement in performance and builds a base that can be
 built off of for future improvements.  This wasn't built because of it
 being shiny but out of direct need, and is currently being tested with
 great results on production workloads.


Excellent and congrats to the team. Also very happy that it has been posted
and not kept as secret sauce. Much appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-07 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

I don't know much about the puppet project organization so I won't
comment on whether 1 or 2 is better, but a big +1 to having a common
way to configure Oslo opts.  Consistency of those options across all
services is one of the big reasons we pushed so hard for the libraries
to own their option definitions, so this would align well with the way
the projects are consumed.
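
As a rough illustration of why that helps (a sketch only, using the
Kilo-era oslo.config and oslo.messaging APIs; the config file path is
just an example), the library registers and reads its own options, so
whatever writes the file has to agree on the same sections and option
names:

    from oslo_config import cfg
    import oslo_messaging

    CONF = cfg.CONF
    # The [oslo_messaging_rabbit] options (rabbit_userid,
    # rabbit_password, ...) are registered by oslo.messaging itself,
    # not by the service that consumes it.
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')
    transport = oslo_messaging.get_transport(CONF)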

- -Ben

On 05/07/2015 03:19 PM, Emilien Macchi wrote:
 Hi,
 
 I think one of the biggest challenges working on Puppet OpenStack 
 modules is to keep code consistency across all our modules (~20). 
 If you've read the code, you'll see there are some differences
 between RabbitMQ configuration/parameters in some modules, and this
 is because we did not have the right tools to do it properly. A
 lot of the duplicated code we have comes from Oslo libraries 
 configuration.
 
 Now, I come up with an idea and two proposals.
 
 Idea 
 
 We could have some defined types to configure oslo sections in
 OpenStack configuration files.
 
 Something like: define oslo::messaging::rabbitmq( $user, $password 
 ) { ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
 {'value' => $user}) ... }
 
 Usage in puppet-nova: ::oslo::messaging::rabbitmq{'nova_config': 
 user => 'nova', password => 'secrete', }
 
 And patch all our modules to consume these defines and finally
 have consistency in the way we configure Oslo projects (messaging,
 logging, etc).
 
 Proposals
 
 #1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
 oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
 used only to configure actual Oslo libraries when we deploy
 OpenStack. To me, this solution is really consistent with how 
 OpenStack works today and scales as long as we keep contributing Oslo
 configuration changes to this module.
 
 #2 Using puppet-openstacklib ... and having
 openstacklib::oslo::messaging::(...) A good thing is our modules
 already use openstacklib. But openstacklib does not configure
 OpenStack now, it creates some common defines and classes that are
 consumed in other modules.
 
 
 I personally prefer #1 because:
 * it's consistent with OpenStack.
 * I don't want openstacklib to become the repo where we put everything
 common. We have to differentiate *common-in-OpenStack* and
 *common-in-our-modules*. I think openstacklib should continue to be
 used for common things in our modules, like providers, wrappers,
 database management, etc. But to configure common OpenStack bits
 (aka Oslo©), we might want to create puppet-oslo.
 
 As usual, any thoughts are welcome,
 
 Best,
 
 
 
 __


 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVS9HMAAoJEDehGd0Fy7uq24sH/j/ctaGaNbGdxyRCfBatIPbU
Vk810yyMYzNH67s4Ku8LsEKvqMAoToEtnq/84ZXiUGUH65PtwGm9e6Nb54tkIHTE
tPVjQSePC7omn97M5A4tb94b6h0TaLxWT+0oZjnto1Lk+/Q1tCYgCySClyF/CsmM
2CZvHRqRKWG1ytWhJuYrjymury4Xfgpcwt7MA69Nqun/7fwjSgFvvVdfVlln6VI+
2Nx4AIFDyXVafvN7ZBIGkyqrWRsmyht3elvJg5JtxSu8gQbf3LVgbkTLREUccHDA
07/edo00ouAHMhKyYdvFimmjqr6gom5OqmpQqiw8TsFqFUDEXunTVil/v5W1dL8=
=B85A
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 07:35 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:

On 05/07/2015 03:31 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:


[...]


on the other hand, we can very well get rid of the ifs today by
deploying *with* pacemaker in single node scenario as well! we already
have EnablePacemaker always set to true for dev purposes, even on single
node


EnablePacemaker is set to 'false' by default. IMO it should be opt-in:

http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d


sure that param is false by default, but one can enable it and deploy 
with pacemaker on single node, and in fact many people do this for dev 
purposes


before that change, we were even running CI on single node with 
pacemaker, so as a matter of fact one could get rid of the conditionals 
in the manifest today by just assuming pacemaker will always be there


this said, I myself prefer to leave some room for a (future?) 
non-pacemaker scenario, but I still wanted to point out the reason why 
the conditionals are there in the first place

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

