Re: [openstack-dev] [oslo] Adding Mehdi Abaakouk (sileht) to oslo-core

2015-05-11 Thread Victor Stinner
Hi,

I didn't know that Mehdi was only a core reviewer on Oslo Messaging.

+1 for Mehdi as Oslo core reviewer.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Filip Blaha

Hi Stan

We wanted to interact with Murano applications from Mistral. Currently 
there is no support in Mistral for executing scripts on a VM via the Murano 
agent (maybe I missed something). We noticed the std.ssh Mistral action, so 
we considered SSH as one of the options. I think it is not a good idea due 
to networking obstacles; I just wanted to confirm that. Thanks for 
pointing out Zaqar, I didn't know about it.


Filip


On 05/10/2015 04:21 AM, Stan Lagun wrote:

Filip,

If I got you right, the plan is to have a Murano application execute a 
Mistral workflow that SSHes to a VM and executes a particular command? And 
the alternative is Murano -> Mistral -> Zaqar -> Zaqar agent?
Why can't you just send this command directly from Murano (to the Murano 
agent on the VM)? This is the most common use case, found in nearly 
all Murano applications, and it is battle-proven. If you need SSH you 
can contribute an SSH plugin to Murano (Mistral will require a similar 
plugin anyway). The more moving parts you involve, the more chances you 
have for everything to fail.



Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Fri, May 8, 2015 at 11:22 AM, Renat Akhmerov 
rakhme...@mirantis.com wrote:


Generally yes, std.ssh action works as long as network
infrastructure allows access to a host using specified IP, it
doesn’t provide anything on top of that.


 On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:

 This would also probably be a good use case for Zaqar, I think.
Have a generic "run shell commands from a Zaqar queue" agent that
pulls commands from a Zaqar queue and executes them.
 The VMs don't have to be directly reachable from the network
then. You just have to push messages into Zaqar.

Yes, in Mistral it would be another action that puts a command
into a Zaqar queue. This type of action doesn't exist yet, but it can
be plugged in easily.
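As a rough sketch of that queue-driven pattern (an in-process queue.Queue stands in for Zaqar here, and the {'cmd': ...} message format is an assumption, not Zaqar's actual contract):

```python
import queue
import subprocess

# Stand-in for a Zaqar queue; a real agent would poll Zaqar over HTTP
# (e.g. via python-zaqarclient) instead of this in-process queue.
command_queue = queue.Queue()

def run_agent_once(q):
    """Pull one command message from the queue and execute it.

    The message format ({'cmd': <shell command>}) is hypothetical.
    Returns the command's stdout.
    """
    message = q.get()
    result = subprocess.run(
        message["cmd"], shell=True, capture_output=True, text=True
    )
    return result.stdout

command_queue.put({"cmd": "echo hello"})
print(run_agent_once(command_queue).strip())  # hello
```

The VM only needs outbound access to the queue endpoint, which is exactly the property Kevin is pointing at.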

 Should Mistral abstract away how to execute the action, leaving
it up to Mistral how to get the action to the vm?

As I mentioned previously, it would just be a different type of
action: “zaqar.something” instead of “std.ssh”. The Mistral engine
works with all actions equally; they are basically functions that we
can plug in and use in the Mistral workflow language. From this
standpoint Mistral is already abstract enough.

 If that's the case, then ssh vs queue/agent is just a Mistral
implementation detail?

More precisely: an implementation detail of a Mistral action, which may
not even be a hardcoded part of Mistral; actions can instead be plugged
in (using stevedore underneath).
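For illustration, a custom action of that kind could look roughly like this. The base class below is a local stand-in for mistral.actions.base.Action so the sketch is self-contained, and the entry-point name and module path in the comment are hypothetical:

```python
# In a real plugin the class would subclass mistral.actions.base.Action
# and be registered as a stevedore entry point in setup.cfg, e.g.:
#
#   [entry_points]
#   mistral.actions =
#       zaqar.post_command = mypackage.actions:ZaqarPostCommandAction

class Action(object):
    """Local stand-in for mistral.actions.base.Action."""
    def run(self):
        raise NotImplementedError

class ZaqarPostCommandAction(Action):
    """Hypothetical action that posts a command into a Zaqar queue."""

    def __init__(self, queue_name, command):
        self.queue_name = queue_name
        self.command = command

    def run(self):
        # A real implementation would use python-zaqarclient here to
        # post {'cmd': self.command} to self.queue_name.
        return {"queue": self.queue_name, "posted": self.command}

action = ZaqarPostCommandAction("vm-commands", "echo hello")
print(action.run())
```

Once registered, the workflow would reference it by its action name, the same way it references std.ssh today.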


Renat Akhmerov
@ Mirantis Inc.










Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-11 Thread Li Tianqing






--

Best
Li Tianqing



At 2015-05-11 16:50:05, Flavio Percoco fla...@redhat.com wrote:
On 11/05/15 16:32 +0800, Li Tianqing wrote:





--
Best
Li Tianqing



At 2015-05-11 16:04:07, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
On 09/05/15 02:28, Monty Taylor wrote:
 On 05/08/2015 03:45 AM, Nikhil Manchanda wrote:

 Comments and answers inline.

 Li Tianqing writes:

 [...]

 1) Why do we put the Trove VM into the user's tenant, not the Trove
 tenant? A user can log in on that VM, and that VM must connect to
 RabbitMQ. It is quite insecure.
 What about putting the VM into the Trove tenant?

 While the default configuration of Trove in devstack puts Trove guest
 VMs into the users' respective tenants, it's possible to configure Trove
 to create VMs in a single Trove tenant. You would do this by
 overriding the default novaclient class in Trove's remote.py with one
 that creates all Trove VMs in a particular tenant whose user credentials
 you will need to supply. In fact, most production instances of Trove do
 something like this.
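The override pattern Nikhil describes could look roughly like the sketch below. The function name mirrors the factory convention in Trove's remote.py, but the credentials and option names are illustrative, not Trove's exact configuration:

```python
# Hypothetical replacement for the nova client factory in Trove's
# remote.py: authenticate with a dedicated service tenant instead of
# the end user's credentials, so every guest VM lands in one
# Trove-owned tenant.

def create_admin_nova_client(context=None):
    """Build nova client credentials for a single service tenant."""
    credentials = {
        "username": "trove_service",              # assumed service account
        "api_key": "secret",                      # from config in practice
        "project_id": "trove",                    # single Trove-owned tenant
        "auth_url": "http://keystone:5000/v2.0",  # assumed endpoint
    }
    # A real implementation would construct and return a novaclient
    # Client(**credentials); returning the dict keeps this sketch
    # self-contained.
    return credentials

print(create_admin_nova_client()["project_id"])  # trove
```

The point of the pattern is only that the tenant comes from operator configuration rather than from the request context.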

 Might I suggest that if this is how people regularly deploy, that such a
 class be included in trove proper, and that a config option be provided
 like use_tenant='name_of_tenant_to_use' that would trigger the use of
 the overridden novaclient class?

 I think asking an operator as a standard practice to override code in
 remote.py is a bad pattern.

 2) Why there is no trove mgmt cli, but mgmt api is in the code?
 Does it disappear forever ?

 The reason for this is that the old legacy Trove client was rewritten
 to be in line with the rest of the OpenStack clients. The new client
 has bindings for the management API, but we didn't complete the work on
 writing the shell pieces for it. There is currently an effort to
 support Trove calls in the openstackclient, and we're looking to
 support the management client calls as part of this as well. If this is
 something that you're passionate about, we sure could use help landing
 this in Liberty.

 3) The trove-guest-agent is in the VM; it is connected to the taskmanager
 via RabbitMQ. We designed it that way. But is there an established
  practice for doing this? How do we make the VM connect to both the VM
  network and the management network?

 Most deployments of Trove that I am familiar with set up a separate
 RabbitMQ server in the cloud that is used by Trove. It is not recommended to
 use the same infrastructure RabbitMQ server for Trove for security
 reasons. Also most deployments of Trove set up a private (neutron)
 network that the RabbitMQ server and guests are connected to, and all
 RPC messages are sent over this network.

 This sounds like a great chunk of information to potentially go into
 deployer docs.



I'd like to +1 this.

It is misleading that the standard documentation (and the devstack
setup) describes a configuration that is unsafe/unwise to use in
production. This is surely unusual to say the least! Normally when test
or dev setups use unsafe configurations the relevant docs clearly state
this - and describe how it should actually be done.

In addition the fact that several extended question threads were
required to extract this vital information is ...disappointing, and does
not display the right spirit for an open source project in my opinion!

While I agree with this last paragraph

I really want to vote to put Trove back into Stackforge, because it cannot
provide a clear deployment story,

... I must point out this is *NOT* the way this community (OpenStack)
works.

This mailing list exists to discuss things in the open and to make sure we
can have wide and constructive discussions, so that issues like the ones
raised in this thread come out and can be improved, documented and made
public for the sake of usability, operation and adoption.

 and it is always avoiding the problem instead of facing it. How the
service VM should connect through the management network deserves a lot
of discussion.

It's now in the Trove team's - or other people's - hands to help
find/implement a better solution for the problem pointed out in this
thread and make the product better.


Let's collaborate,


Fine.


Flavio




Regards

Mark










-- 
@flaper87
Flavio Percoco

Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-11 Thread Li Tianqing






--

Best
Li Tianqing



At 2015-05-11 16:04:07, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
On 09/05/15 02:28, Monty Taylor wrote:
 On 05/08/2015 03:45 AM, Nikhil Manchanda wrote:

 Comments and answers inline.

 Li Tianqing writes:

 [...]

 1) Why do we put the Trove VM into the user's tenant, not the Trove
 tenant? A user can log in on that VM, and that VM must connect to
 RabbitMQ. It is quite insecure.
 What about putting the VM into the Trove tenant?

 While the default configuration of Trove in devstack puts Trove guest
 VMs into the users' respective tenants, it's possible to configure Trove
 to create VMs in a single Trove tenant. You would do this by
 overriding the default novaclient class in Trove's remote.py with one
 that creates all Trove VMs in a particular tenant whose user credentials
 you will need to supply. In fact, most production instances of Trove do
 something like this.

 Might I suggest that if this is how people regularly deploy, that such a
 class be included in trove proper, and that a config option be provided
 like use_tenant='name_of_tenant_to_use' that would trigger the use of
 the overridden novaclient class?

 I think asking an operator as a standard practice to override code in
 remote.py is a bad pattern.

 2) Why there is no trove mgmt cli, but mgmt api is in the code?
 Does it disappear forever ?

 The reason for this is because the old legacy Trove client was rewritten
 to be in line with the rest of the openstack clients. The new client
 has bindings for the management API, but we didn't complete the work on
 writing the shell pieces for it. There is currently an effort to
 support Trove calls in the openstackclient, and we're looking to
 support the management client calls as part of this as well. If this is
 something that you're passionate about, we sure could use help landing
 this in Liberty.

 3) The trove-guest-agent is in the VM; it is connected to the taskmanager
 via RabbitMQ. We designed it that way. But is there an established
  practice for doing this? How do we make the VM connect to both the VM
  network and the management network?

 Most deployments of Trove that I am familiar with set up a separate
 RabbitMQ server in the cloud that is used by Trove. It is not recommended to
 use the same infrastructure RabbitMQ server for Trove for security
 reasons. Also most deployments of Trove set up a private (neutron)
 network that the RabbitMQ server and guests are connected to, and all
 RPC messages are sent over this network.

 This sounds like a great chunk of information to potentially go into
 deployer docs.



I'd like to +1 this.

It is misleading that the standard documentation (and the devstack 
setup) describes a configuration that is unsafe/unwise to use in 
production. This is surely unusual to say the least! Normally when test 
or dev setups use unsafe configurations the relevant docs clearly state 
this - and describe how it should actually be done.

In addition the fact that several extended question threads were 
required to extract this vital information is ...disappointing, and does 

not display the right spirit for an open source project in my opinion!


I really want to vote to put Trove back into Stackforge, because it cannot
provide a clear deployment story, and it is always avoiding the problem
instead of facing it. How the service VM should connect through the
management network deserves a lot of discussion.





Regards

Mark






[openstack-dev] [nova] ImageRef vs bdm v2 in REST API

2015-05-11 Thread Feodor Tersin
Hi.

Since bdm v2 was introduced in Havana, it has required a caller to
specify a bdm entry for the image together with imageRef when booting an
instance that uses bdm v2 to attach additional volumes.

{"server": {"imageRef": "xxx",
            "block_device_mapping_v2": [
                {"uuid": "xxx",
                 "source_type": "image",
                 "destination_type": "local",
                 "boot_index": 0},
                other_mappings
            ],
            ...}}

If we specify only imageRef or only the bdm record, the launch fails.

Novaclient does this [1] for shell invocations like
nova boot --image xxx --block_device other mappings ...
but does nothing for client API calls.
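To make the double reference concrete, here is a sketch of the request body a caller has to build today; the UUIDs and server attributes are placeholders:

```python
# The image UUID must appear twice: once as imageRef and once as the
# boot_index 0 entry of block_device_mapping_v2.

IMAGE_ID = "11111111-2222-3333-4444-555555555555"
VOLUME_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

def build_boot_body(image_id, extra_mappings):
    """Build a POST /servers body that boots from an image and
    attaches additional volumes via bdm v2."""
    return {
        "server": {
            "name": "test-vm",      # placeholder
            "flavorRef": "1",       # placeholder
            "imageRef": image_id,   # image given once here...
            "block_device_mapping_v2": [
                {                   # ...and again as the boot disk
                    "uuid": image_id,
                    "source_type": "image",
                    "destination_type": "local",
                    "boot_index": 0,
                },
            ] + extra_mappings,
        }
    }

body = build_boot_body(IMAGE_ID, [{
    "uuid": VOLUME_ID,
    "source_type": "volume",
    "destination_type": "volume",
    "boot_index": 1,
}])
assert body["server"]["imageRef"] == \
    body["server"]["block_device_mapping_v2"][0]["uuid"]
```

Forgetting either of the two image references is exactly the failure mode described above.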

This usage is unclear, and I've not found any documentation for it. The
wiki [2] doesn't mention this behavior, but refers to a deleted API
sample [3]. Before its deletion [4], that sample didn't account for this
behavior, so it was wrong.

As a result, the need to specify an image in two arguments looks like a
temporary workaround for some problem, and the whole bdm v2 concept looks
neither well designed nor finished.

The question is: is this behavior correct? Shall we keep specifying an image
twice for the long term? Otherwise, which form should finally be established:
imageRef or a bdm record?

PS: There is a review [5] and a linked bug [6] that require clarification
of this question.

[1]
https://github.com/openstack/python-novaclient/commit/6a85c954c53f868251413db51cc1d9616acd4d02#diff-4812fe2b8b37d18cf9498f9fbbab17beR125
[2]
https://wiki.openstack.org/wiki/BlockDeviceConfig#API_data_model_and_backwards_compat_issues
[3]
https://github.com/openstack/nova/tree/master/doc/api_samples/os-block-device-mapping-v2-boot
[4]
https://github.com/openstack/nova/blob/2f32996c3e5625245a4d0588ab32880d41400b9e/doc/api_samples/os-block-device-mapping-v2-boot/server-post-req.json
[5] https://review.openstack.org/#/c/171984/
[6] https://bugs.launchpad.net/nova/+bug/1441990


Re: [openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-11 Thread Davanum Srinivas
requirements jobs are stuck as well :(

http://logs.openstack.org/30/170830/6/check/gate-requirements-pypy/732dc33/console.html#_2015-05-11_01_25_22_506

-- dims

On Mon, May 11, 2015 at 6:19 AM, Steven Hardy sha...@redhat.com wrote:
 On Mon, May 11, 2015 at 11:52:13AM +0200, Julien Danjou wrote:
 On Fri, May 08 2015, Doug Hellmann wrote:

  The jobs running unit tests under pypy are failing for several Oslo
  libraries for reasons that have nothing to do with the libraries
  themselves, as far as I can tell (they pass locally). I have proposed
  a change to mark the jobs as non-voting [1] until someone can fix
  them, but we need a volunteer to look at the failure and understand why
  they fail.
 
  Does anyone want to step up to do that? If we don't have a volunteer in
  the next couple of weeks, I'll go ahead and remove the jobs so we can
  use those test nodes for other jobs.

 I'm willing to take a look at those, do you have any link to a
 review/job that failed?

 I suspect we're impacted by the same issue for python-heatclient, nearly
 all of our patches are failing the pypy job, e.g:

 http://logs.openstack.org/56/178756/4/gate/gate-python-heatclient-pypy/66e4dcc/console.html

 The error "error: option --single-version-externally-managed not
 recognized" looks a lot like bug 1290562, which was closed months ago.

 I've raised https://bugs.launchpad.net/python-heatclient/+bug/1453095,
 because I didn't know what component (other than heatclient) this could be
 assigned to.

 I'm attempting the bug 1290562 workaround here:

 https://review.openstack.org/#/c/181851/

 If we can't figure out what the problem is, I guess we'll have to
 temporarily disable our pypy job too, any insights appreciated! :)

 Steve




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-11 Thread Mark Kirkwood

On 09/05/15 02:28, Monty Taylor wrote:

On 05/08/2015 03:45 AM, Nikhil Manchanda wrote:


Comments and answers inline.

Li Tianqing writes:


[...]



1) Why do we put the Trove VM into the user's tenant, not the Trove
tenant? A user can log in on that VM, and that VM must connect to
RabbitMQ. It is quite insecure.
What about putting the VM into the Trove tenant?


While the default configuration of Trove in devstack puts Trove guest
VMs into the users' respective tenants, it's possible to configure Trove
to create VMs in a single Trove tenant. You would do this by
overriding the default novaclient class in Trove's remote.py with one
that creates all Trove VMs in a particular tenant whose user credentials
you will need to supply. In fact, most production instances of Trove do
something like this.


Might I suggest that if this is how people regularly deploy, that such a
class be included in trove proper, and that a config option be provided
like use_tenant='name_of_tenant_to_use' that would trigger the use of
the overridden novaclient class?

I think asking an operator as a standard practice to override code in
remote.py is a bad pattern.


2) Why there is no trove mgmt cli, but mgmt api is in the code?
Does it disappear forever ?


The reason for this is because the old legacy Trove client was rewritten
to be in line with the rest of the openstack clients. The new client
has bindings for the management API, but we didn't complete the work on
writing the shell pieces for it. There is currently an effort to
support Trove calls in the openstackclient, and we're looking to
support the management client calls as part of this as well. If this is
something that you're passionate about, we sure could use help landing
this in Liberty.


3) The trove-guest-agent is in the VM; it is connected to the taskmanager
via RabbitMQ. We designed it that way. But is there an established
 practice for doing this? How do we make the VM connect to both the VM
 network and the management network?


Most deployments of Trove that I am familiar with set up a separate
RabbitMQ server in the cloud that is used by Trove. It is not recommended to
use the same infrastructure RabbitMQ server for Trove for security
reasons. Also most deployments of Trove set up a private (neutron)
network that the RabbitMQ server and guests are connected to, and all
RPC messages are sent over this network.


This sounds like a great chunk of information to potentially go into
deployer docs.




I'd like to +1 this.

It is misleading that the standard documentation (and the devstack 
setup) describes a configuration that is unsafe/unwise to use in 
production. This is surely unusual to say the least! Normally when test 
or dev setups use unsafe configurations the relevant docs clearly state 
this - and describe how it should actually be done.


In addition the fact that several extended question threads were 
required to extract this vital information is ...disappointing, and does 
not display the right spirit for an open source project in my opinion!


Regards

Mark






Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Filip Blaha

Hi

there is a VPN mechanism in Neutron that we could consider in the future to 
get around these networking obstacles if we would like to use direct SSH.


1) every private network created by Murano would create a VPN gateway on the 
public interface of the router [1]


neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet


2) any service, like Mistral, that needs direct access to a VM via SSH (or 
other protocols) would connect to that VPN and could then directly 
access the VM on its fixed IP


This mechanism would probably resolve the network obstacles, but it requires 
more effort to analyse.


[1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall

Filip

On 05/08/2015 10:22 AM, Renat Akhmerov wrote:

Generally yes, std.ssh action works as long as network infrastructure allows 
access to a host using specified IP, it doesn’t provide anything on top of that.



On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:

This would also probably be a good use case for Zaqar, I think. Have a generic 
"run shell commands from a Zaqar queue" agent that pulls commands from a Zaqar 
queue and executes them.
The VMs don't have to be directly reachable from the network then. You just 
have to push messages into Zaqar.

Yes, in Mistral it would be another action that puts a command into a Zaqar 
queue. This type of action doesn't exist yet, but it can be plugged in easily.


Should Mistral abstract away how to execute the action, leaving it up to 
Mistral how to get the action to the vm?

As I mentioned previously, it would just be a different type of action: 
“zaqar.something” instead of “std.ssh”. The Mistral engine works with all 
actions equally; they are basically functions that we can plug in and use 
in the Mistral workflow language. From this standpoint Mistral is already 
abstract enough.


If that's the case, then ssh vs queue/agent is just a Mistral implementation 
detail?

More precisely: an implementation detail of a Mistral action, which may not 
even be a hardcoded part of Mistral; actions can instead be plugged in 
(using stevedore underneath).


Renat Akhmerov
@ Mirantis Inc.






Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Filip Blaha

Stan,

At the beginning we considered whether we could implement an action on a 
Murano application via a Mistral workflow. We thought it could be 
beneficial to use a workflow engine to implement some non-trivial action, 
e.g. reconfiguration of a complex application within a Murano 
environment. Of course, there would be disadvantages, like a dependency on 
Mistral. I would recommend discussing it later in a meeting (IRC, 
hangouts or the summit) with Radek to explain the use cases better.


Filip

On 05/11/2015 10:44 AM, Stan Lagun wrote:

Filip,

 Currently there is no support in Mistral for executing scripts on 
a VM via the Murano agent


Mistral can call a Murano application action that will do the job via the 
agent. Actions are intended to be called by third-party systems with a 
single HTTP request.


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Mon, May 11, 2015 at 11:27 AM, Filip Blaha filip.bl...@hp.com wrote:


Hi

there is a VPN mechanism in Neutron that we could consider in the future
to get around these networking obstacles if we would like to use
direct SSH.

1) every private network created by Murano would create a VPN gateway on
the public interface of the router [1]

neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet

2) any service, like Mistral, that needs direct access to a VM via SSH
(or other protocols) would connect to that VPN and could then
directly access the VM on its fixed IP

This mechanism would probably resolve the network obstacles, but it
requires more effort to analyse.

[1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall

Filip


On 05/08/2015 10:22 AM, Renat Akhmerov wrote:

Generally yes, std.ssh action works as long as network infrastructure 
allows access to a host using specified IP, it doesn’t provide anything on top 
of that.



On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:

This would also probably be a good use case for Zaqar, I think. Have a generic 
"run shell commands from a Zaqar queue" agent that pulls commands from a Zaqar 
queue and executes them.
The VMs don't have to be directly reachable from the network then. You 
just have to push messages into Zaqar.

Yes, in Mistral it would be another action that puts a command into a Zaqar 
queue. This type of action doesn't exist yet, but it can be plugged in easily.


Should Mistral abstract away how to execute the action, leaving it up to 
Mistral how to get the action to the vm?

As I mentioned previously, it would just be a different type of action: 
“zaqar.something” instead of “std.ssh”. The Mistral engine works with all 
actions equally; they are basically functions that we can plug in and use 
in the Mistral workflow language. From this standpoint Mistral is already 
abstract enough.


If that's the case, then ssh vs queue/agent is just a Mistral 
implementation detail?

More precisely: an implementation detail of a Mistral action, which may not 
be even a hardcoded part of Mistral; actions can instead be plugged in 
(using stevedore underneath).


Renat Akhmerov
@ Mirantis Inc.













Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Stan Lagun
Filip,

 Currently there is no support in Mistral for executing scripts on a VM via
the Murano agent

Mistral can call a Murano application action that will do the job via the
agent. Actions are intended to be called by third-party systems with a single
HTTP request.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Mon, May 11, 2015 at 11:27 AM, Filip Blaha filip.bl...@hp.com wrote:

  Hi

 there is a VPN mechanism in Neutron that we could consider in the future to
 get around these networking obstacles if we would like to use direct SSH.

 1) every private network created by Murano would create a VPN gateway on the
 public interface of the router [1]

 neutron vpn-service-create --name myvpn --description "My vpn service" router1 mysubnet

 2) any service, like Mistral, that needs direct access to a VM via SSH (or
 other protocols) would connect to that VPN and could then directly
 access the VM on its fixed IP

 This mechanism would probably resolve the network obstacles, but it requires
 more effort to analyse.

 [1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall

 Filip


 On 05/08/2015 10:22 AM, Renat Akhmerov wrote:

 Generally yes, std.ssh action works as long as network infrastructure allows 
 access to a host using specified IP, it doesn’t provide anything on top of 
 that.



  On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:

 This would also probably be a good use case for Zaqar, I think. Have a generic 
 "run shell commands from a Zaqar queue" agent that pulls commands from a Zaqar 
 queue and executes them.
 The VMs don't have to be directly reachable from the network then. You just 
 have to push messages into Zaqar.

  Yes, in Mistral it would be another action that puts a command into a Zaqar 
 queue. This type of action doesn't exist yet, but it can be plugged in easily.


  Should Mistral abstract away how to execute the action, leaving it up to 
 Mistral how to get the action to the vm?

  As I mentioned previously, it would just be a different type of action: 
 “zaqar.something” instead of “std.ssh”. The Mistral engine works with all 
 actions equally; they are basically functions that we can plug in and 
 use in the Mistral workflow language. From this standpoint Mistral is already 
 abstract enough.


  If that's the case, then ssh vs queue/agent is just a Mistral implementation 
 detail?

  More precisely: an implementation detail of a Mistral action, which may not 
 be even a hardcoded part of Mistral; actions can instead be plugged in 
 (using stevedore underneath).


 Renat Akhmerov
 @ Mirantis Inc.









Re: [openstack-dev] [keystone][clients] - Should we implement project to endpoint group?

2015-05-11 Thread Enrique Garcia
Cool, thanks Jamie. I will look into it this afternoon.

Cheers,
Enrique


On Mon, 11 May 2015 at 02:08 Jamie Lennox jamielen...@redhat.com wrote:



 - Original Message -
  From: Enrique Garcia garcianava...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Monday, May 11, 2015 2:19:43 AM
  Subject: Re: [openstack-dev] [keystone][clients] - Should we implement
 project to endpoint group?
 
  Hi Marcos,
 
  I'm not part of the OpenStack development team, but coincidentally I
  implemented some of these actions on a fork this past week because I
  needed them in a project. If there is interest in these actions, I could
  contribute them back or send a link to the repo if it would be helpful
  for you or someone else. Let me know if I can help.
 
  Cheers,
  Enrique.

 Absolutely, there is always interest in things like this being contributed
 upstream! We'd love to have it.

 There are fairly extensive docs on how to get started contributing:
 https://wiki.openstack.org/wiki/How_To_Contribute and if you're not sure
 or need a hand, there are typically people in either #openstack-dev or
 #openstack-keystone who can help.


 Jamie

  On Fri, 8 May 2015 at 16:03 Marcos Fermin Lobo 
 marcos.fermin.l...@cern.ch wrote:
 
 
 
  Hi all,
 
   I would like to know if any of you would be interested in implementing
  project-to-endpoint-group actions (
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-ep-filter-ext.html#project-to-endpoint-group-relationship
   ) for the keystone client. Is anyone already working on this?
 
  Cheers,
  Marcos.
 


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-11 Thread Flavio Percoco

On 11/05/15 16:32 +0800, Li Tianqing wrote:






--
Best
   Li Tianqing



At 2015-05-11 16:04:07, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:

On 09/05/15 02:28, Monty Taylor wrote:

On 05/08/2015 03:45 AM, Nikhil Manchanda wrote:


Comments and answers inline.

Li Tianqing writes:


[...]



1) Why do we put the Trove VM into the user's tenant, not Trove's
tenant? The user can log in to that VM, and that VM must connect to
RabbitMQ. It is quite insecure.
What about putting the VM into the Trove tenant?


While the default configuration of Trove in devstack puts Trove guest
VMs into the users' respective tenants, it's possible to configure Trove
to create VMs in a single Trove tenant. You would do this by
overriding the default novaclient class in Trove's remote.py with one
that creates all Trove VMs in a particular tenant whose user credentials
you will need to supply. In fact, most production instances of Trove do
something like this.
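(For readers wondering what such an override looks like, a minimal sketch follows. All names here are hypothetical; the real remote.py interface may differ, so treat this only as an illustration of ignoring the request context's project and substituting configured service-tenant credentials.)

```python
# Hypothetical sketch only: names do not match Trove's actual remote.py.
class ServiceTenantNovaClientFactory(object):
    """Builds nova client kwargs bound to a single service tenant.

    Credentials come from configuration rather than from the request
    context, so every guest VM lands in the operator-controlled tenant.
    """

    def __init__(self, auth_url, tenant, username, password):
        self.auth_url = auth_url
        self.tenant = tenant
        self.username = username
        self.password = password

    def client_kwargs(self, context):
        # Deliberately ignore the caller's project: always use the
        # configured service tenant.
        return {
            "auth_url": self.auth_url,
            "project_name": self.tenant,
            "username": self.username,
            "password": self.password,
        }
```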


Might I suggest that if this is how people regularly deploy, that such a
class be included in trove proper, and that a config option be provided
like use_tenant='name_of_tenant_to_use' that would trigger the use of
the overridden novaclient class?

I think asking an operator as a standard practice to override code in
remote.py is a bad pattern.


2) Why there is no trove mgmt cli, but mgmt api is in the code?
Does it disappear forever ?


The reason for this is because the old legacy Trove client was rewritten
to be in line with the rest of the openstack clients. The new client
has bindings for the management API, but we didn't complete the work on
writing the shell pieces for it. There is currently an effort to
support Trove calls in the openstackclient, and we're looking to
support the management client calls as part of this as well. If this is
something that you're passionate about, we sure could use help landing
this in Liberty.


3) The trove-guestagent is in the VM; it is connected to the taskmanager
via RabbitMQ. We designed it that way, but is there some practice for doing
this? How do we make the VM reachable on both the VM network and the
management network?


Most deployments of Trove that I am familiar with set up a separate
RabbitMQ server in cloud that is used by Trove. It is not recommended to
use the same infrastructure RabbitMQ server for Trove for security
reasons. Also most deployments of Trove set up a private (neutron)
network that the RabbitMQ server and guests are connected to, and all
RPC messages are sent over this network.


This sounds like a great chunk of information to potentially go into
deployer docs.




I'd like to +1 this.

It is misleading that the standard documentation (and the devstack
setup) describes a configuration that is unsafe/unwise to use in
production. This is surely unusual to say the least! Normally when test
or dev setups use unsafe configurations the relevant docs clearly state
this - and describe how it should actually be done.

In addition the fact that several extended question threads were
required to extract this vital information is ...disappointing, and does
not display the right spirit for an open source project in my opinion!


While I agree with this last paragraph


I really want to vote to put Trove back into Stackforge, for it cannot give
a clear deployment story,


... I must point out this is *NOT* the way this community (OpenStack)
works.

This mailing list exists to discuss things in the open, make sure we
can have wide and constructive discussions to make things like the ones
discussed in this thread come out so that they can be improved,
documented and made public for the sake of usability, operation and
adoption.


and it is always avoiding the problem, not facing it. How the service VM
gets connected through the management network is something we should
talk about a lot.


It's now on Trove team's - or other people's - hands to help
find/implement a better solution for the problem pointed out in this
thread and make the product better.

Let's collaborate,
Flavio






Regards

Mark







--
@flaper87
Flavio Percoco




[openstack-dev] [trove] How we make service vm to connect to management network

2015-05-11 Thread Li Tianqing
Hello,

Now:
  The VM created by Trove has trove-guestagent installed. The agent
should connect to the RabbitMQ on the management network for notifications
and billing.
  Right now, the Trove VM can boot with two or more network cards: one is
the user's, the other is Trove-defined (specified in the configuration).

Problems:
1) The VM is put into the user's tenant, so the user can log in to this VM.
It is quite insecure. We could override remote.py to put the VM into the
Trove tenant.
  But after overriding, the user cannot start or stop the instance, and the
network IP used to connect to MySQL also cannot be retrieved.
  Then we should override the instance view to add that information.
  We should make the decision now. If putting the VM into Trove's tenant is
better than putting it into the user's tenant, we should add an API and
rewrite the view, not just give the choice to users of Trove.
  Because we are the developers of Trove, we should know which is better.

2) If we deploy Trove like that, the user cannot use the private network
fully, for there is the chance that the user-defined network has the same
CIDR as the Trove-defined network; then packets cannot be sent out. We
should also try other deployments that can make Trove connect to the
management RabbitMQ. For example, make the VM able to pass through to the
host that runs it, since that deployment does not limit the user's private
network use. So I say we should talk a lot about this problem.

3) We should add mgmt-cli quickly. The client cannot fully use the API.
I think maybe Trove developers think all Trove users are nice people who
will never curse.

Maybe I am not right, but I am open to discussion if I am still interested
in Trove.

--

Best
Li Tianqing


Re: [openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-11 Thread Julien Danjou
On Fri, May 08 2015, Doug Hellmann wrote:

 The jobs running unit tests under pypy are failing for several Oslo
 libraries for reasons that have nothing to do with the libraries
 themselves, as far as I can tell (they pass locally). I have proposed
 a change to mark the jobs as non-voting [1] until someone can fix
 them, but we need a volunteer to look at the failure and understand why
 they fail.

 Does anyone want to step up to do that? If we don't have a volunteer in
 the next couple of weeks, I'll go ahead and remove the jobs so we can
 use those test nodes for other jobs.

I'm willing to take a look at those, do you have any link to a
review/job that failed?

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session (was: [all] who is the ptl of trove?)

2015-05-11 Thread Flavio Percoco

On 08/05/15 00:45 -0700, Nikhil Manchanda wrote:

   3) The trove-guestagent is in the VM; it is connected to the taskmanager
   via RabbitMQ. We designed it that way, but is there some practice for
   doing this? How do we make the VM reachable on both the VM network and
   the management network?


Most deployments of Trove that I am familiar with set up a separate
RabbitMQ server in cloud that is used by Trove. It is not recommended to
use the same infrastructure RabbitMQ server for Trove for security
reasons. Also most deployments of Trove set up a private (neutron)
network that the RabbitMQ server and guests are connected to, and all
RPC messages are sent over this network.


We've discussed trove+zaqar in the past and I believe some folks from the
Trove team have been in contact with Fei Long lately about this. Since
one of the projects goal's for this cycle is to provide support to
other projects and contribute to the adoption, I'm wondering if any of
the members of the trove team would be willing to participate in a
Zaqar working session completely dedicated to this integration?

It'd be a great opportunity to figure out what's really needed, edge
cases and get some work done on this specific case.

Thanks,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-11 Thread Steven Hardy
On Mon, May 11, 2015 at 11:52:13AM +0200, Julien Danjou wrote:
 On Fri, May 08 2015, Doug Hellmann wrote:
 
  The jobs running unit tests under pypy are failing for several Oslo
  libraries for reasons that have nothing to do with the libraries
  themselves, as far as I can tell (they pass locally). I have proposed
  a change to mark the jobs as non-voting [1] until someone can fix
  them, but we need a volunteer to look at the failure and understand why
  they fail.
 
  Does anyone want to step up to do that? If we don't have a volunteer in
  the next couple of weeks, I'll go ahead and remove the jobs so we can
  use those test nodes for other jobs.
 
 I'm willing to take a look at those, do you have any link to a
 review/job that failed?

I suspect we're impacted by the same issue for python-heatclient, nearly
all of our patches are failing the pypy job, e.g:

http://logs.openstack.org/56/178756/4/gate/gate-python-heatclient-pypy/66e4dcc/console.html

The error "error: option --single-version-externally-managed not
recognized" looks a lot like bug 1290562, which was closed months ago.

I've raised https://bugs.launchpad.net/python-heatclient/+bug/1453095,
because I didn't know what component (other than heatclient) this could be
assigned to.

I'm attempting the bug 1290562 workaround here:

https://review.openstack.org/#/c/181851/

If we can't figure out what the problem is, I guess we'll have to
temporarily disable our pypy job too, any insights appreciated! :)

Steve



[openstack-dev] [puppet] Weekly meeting #35

2015-05-11 Thread Emilien Macchi
Hi,

Tomorrow is our weekly meeting.
Please look at the agenda [1].

Feel free to bring new topics and reviews/bugs if needed.
Also, if you had any action, make sure you can give a status during the
meeting or in the etherpad directly.

[1]
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150512

Note: we won't perform our weekly meeting during the OpenStack Summit.

See you tomorrow,
-- 
Emilien Macchi





Re: [openstack-dev] [nova] Port Nova to Python 3

2015-05-11 Thread Victor Stinner
Hi,

 Oh, this makes me think: how would one fix something like this?

The wiki page already contains some answer:
https://wiki.openstack.org/wiki/Python3#Common_patterns

Don't hesitate to complete the page if needed.

See also my personal list of documents:
https://haypo-notes.readthedocs.org/python.html#port-python-2-code-to-python-3


  def unicode_convert(self, item):
  try:
 return unicode(item, "utf-8")
 E   NameError: name 'unicode' is not defined

Use six.text_type.


  def make(self, idp, sp, args):
  md5 = hashlib.md5()
  for arg in args:
   md5.update(arg.encode("utf-8"))
 md5.update(sp)
 E   TypeError: Unicode-objects must be encoded before hashing

It depends on the code. Sometimes, you should encode sp in the caller, 
sometimes you should encode just before calling make(). If you don't know, use:

if isinstance(sp, six.text_type):
    sp = sp.encode("utf-8")
md5.update(sp)


 and one last one:
 
  def harvest_element_tree(self, tree):
  # Fill in the instance members from the contents of the
  # XML tree.
  for child in tree:
  self._convert_element_tree_to_member(child)
 for attribute, value in tree.attrib.iteritems():
  self._convert_element_attribute_to_member(attribute, value)
 E   AttributeError: 'dict' object has no attribute 'iteritems'

Use six.iteritems().


 BTW, I did this:
 -from Cookie import SimpleCookie
 +try:
 +    from Cookie import SimpleCookie
 +except ImportError:
 +    from http.cookies import SimpleCookie
 
 Is there anything smarter to do with six? What's the rule with
 six.moves? Should I always just use the new location?

I did the same. If it's a very common pattern in your code, you can register 
your own move in six.moves:
https://pythonhosted.org/six/#advanced-customizing-renames

We used that for mox/mox3 in tests of Oslo projects.


 Also, is this a correct fix for the basestring issue in Py3?
 
 +try:
 +    basestring
 +except NameError:
 +    basestring = (str, bytes)

I prefer six.string_types.
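Putting the patterns above together, here is a small self-contained sketch. It uses try-style shims instead of six so it runs with no extra dependency; with six you would just use six.text_type, six.string_types and six.iteritems():

```python
import hashlib
import sys

if sys.version_info[0] >= 3:
    text_type = str                # the removed "unicode" builtin
    string_types = (str, bytes)    # stand-in for "basestring"

    def iteritems(d):
        return iter(d.items())
else:
    text_type = unicode            # noqa: F821
    string_types = (basestring,)   # noqa: F821

    def iteritems(d):
        return d.iteritems()


def fingerprint(args):
    # Encode text before hashing; bytes pass through unchanged.
    md5 = hashlib.md5()
    for arg in args:
        if isinstance(arg, text_type):
            arg = arg.encode("utf-8")
        md5.update(arg)
    return md5.hexdigest()
```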

Victor



Re: [openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-11 Thread Sean Dague
It appears that we've basically run out of interest / time in
realistically keeping pypy working in our system.

With the focus on really getting python 3.4 working, it seems like it
would just be better to drop pypy entirely in the system. In the last
couple of years we've not seen any services realistically get to the
point of being used for any of the services. And as seen by this last
failure, pypy is apparently not keeping up with upstream tooling changes.

-Sean

On 05/11/2015 06:50 AM, Davanum Srinivas wrote:
 requirements jobs are stuck as well :(
 
 http://logs.openstack.org/30/170830/6/check/gate-requirements-pypy/732dc33/console.html#_2015-05-11_01_25_22_506
 
 -- dims
 
 On Mon, May 11, 2015 at 6:19 AM, Steven Hardy sha...@redhat.com wrote:
 On Mon, May 11, 2015 at 11:52:13AM +0200, Julien Danjou wrote:
 On Fri, May 08 2015, Doug Hellmann wrote:

 The jobs running unit tests under pypy are failing for several Oslo
 libraries for reasons that have nothing to do with the libraries
 themselves, as far as I can tell (they pass locally). I have proposed
 a change to mark the jobs as non-voting [1] until someone can fix
 them, but we need a volunteer to look at the failure and understand why
 they fail.

 Does anyone want to step up to do that? If we don't have a volunteer in
 the next couple of weeks, I'll go ahead and remove the jobs so we can
 use those test nodes for other jobs.

 I'm willing to take a look at those, do you have any link to a
 review/job that failed?

 I suspect we're impacted by the same issue for python-heatclient, nearly
 all of our patches are failing the pypy job, e.g:

 http://logs.openstack.org/56/178756/4/gate/gate-python-heatclient-pypy/66e4dcc/console.html

  The error "error: option --single-version-externally-managed not
  recognized" looks a lot like bug 1290562 which was closed months ago.

 I've raised https://bugs.launchpad.net/python-heatclient/+bug/1453095,
 because I didn't know what component (other than heatclient) this could be
 assigned to.

 I'm attempting the bug 1290562 workaround here:

 https://review.openstack.org/#/c/181851/

 If we can't figure out what the problem is, I guess we'll have to
 temporarily disable our pypy job too, any insights appreciated! :)

 Steve

 
 
 


-- 
Sean Dague
http://dague.net



[openstack-dev] Static Migration Error

2015-05-11 Thread Abhishek Talwar
Hi Folks,

I am facing an issue while migrating a VM from one host to another. I had
posted the same issue on ask.openstack.org but couldn't get any help, so
kindly provide some information.

I am trying to migrate (static migrate) an instance from one compute host to
another by running the command "nova migrate", but it results in the VM going
into an error state. The error is:

Command: ssh 10.10.10.31 mkdir -p
/var/lib/nova/instances/9edcde17-38db-4f80-94e7-fb60bf5c55a0
u'ssh: connect to host 10.10.10.31 port 22: Connection refused'
"message": "Unexpected error while running command."

Please provide information on what could be the reason for this, as all the
services are up and running. Moreover, when I run the command "neutron
agent-list", I am unable to see the new compute host there.





Re: [openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-11 Thread Davanum Srinivas
Thanks Sean, filed this review to mark them as non-voting to start with:
https://review.openstack.org/181870

-- dims

On Mon, May 11, 2015 at 7:51 AM, Sean Dague s...@dague.net wrote:
 It appears that we've basically run out of interest / time in
 realistically keeping pypy working in our system.

 With the focus on really getting python 3.4 working, it seems like it
 would just be better to drop pypy entirely in the system. In the last
 couple of years we've not seen any services realistically get to the
 point of being used for any of the services. And as seen by this last
 failure, pypy is apparently not keeping up with upstream tooling changes.

 -Sean

 On 05/11/2015 06:50 AM, Davanum Srinivas wrote:
 requirements jobs are stuck as well :(

 http://logs.openstack.org/30/170830/6/check/gate-requirements-pypy/732dc33/console.html#_2015-05-11_01_25_22_506

 -- dims

 On Mon, May 11, 2015 at 6:19 AM, Steven Hardy sha...@redhat.com wrote:
 On Mon, May 11, 2015 at 11:52:13AM +0200, Julien Danjou wrote:
 On Fri, May 08 2015, Doug Hellmann wrote:

 The jobs running unit tests under pypy are failing for several Oslo
 libraries for reasons that have nothing to do with the libraries
 themselves, as far as I can tell (they pass locally). I have proposed
 a change to mark the jobs as non-voting [1] until someone can fix
 them, but we need a volunteer to look at the failure and understand why
 they fail.

 Does anyone want to step up to do that? If we don't have a volunteer in
 the next couple of weeks, I'll go ahead and remove the jobs so we can
 use those test nodes for other jobs.

 I'm willing to take a look at those, do you have any link to a
 review/job that failed?

 I suspect we're impacted by the same issue for python-heatclient, nearly
 all of our patches are failing the pypy job, e.g:

 http://logs.openstack.org/56/178756/4/gate/gate-python-heatclient-pypy/66e4dcc/console.html

  The error "error: option --single-version-externally-managed not
  recognized" looks a lot like bug 1290562 which was closed months ago.

 I've raised https://bugs.launchpad.net/python-heatclient/+bug/1453095,
 because I didn't know what component (other than heatclient) this could be
 assigned to.

 I'm attempting the bug 1290562 workaround here:

 https://review.openstack.org/#/c/181851/

 If we can't figure out what the problem is, I guess we'll have to
 temporarily disable our pypy job too, any insights appreciated! :)

 Steve






 --
 Sean Dague
 http://dague.net




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Attila Fazekas




- Original Message -
 From: John Garbutt j...@johngarbutt.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: Dan Smith d...@danplanet.com
 Sent: Saturday, May 9, 2015 12:45:26 PM
 Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient
 
 On 30 April 2015 at 18:54, Mike Bayer mba...@redhat.com wrote:
  On 4/30/15 11:16 AM, Dan Smith wrote:
  There is an open discussion to replace mysql-python with PyMySQL, but
  PyMySQL has worse performance:
 
  https://wiki.openstack.org/wiki/PyMySQL_evaluation
 
  My major concern with not moving to something different (i.e. not based
  on the C library) is the threading problem. Especially as we move in the
  direction of cellsv2 in nova, not blocking the process while waiting for
  a reply from mysql is going to be critical. Further, I think that we're
  likely to get back a lot of performance from a supports-eventlet
  database connection because of the parallelism that conductor currently
  can only provide in exchange for the footprint of forking into lots of
  workers.
 
  If we're going to move, shouldn't we be looking at something that
  supports our threading model?
 
  yes, but at the same time, we should change our threading model at the
  level
  of where APIs are accessed to refer to a database, at the very least using
  a
  threadpool behind eventlet.   CRUD-oriented database access is faster using
  traditional threads, even in Python, than using an eventlet-like system or
  using explicit async.  The tests at
  http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
  show this.With traditional threads, we can stay on the C-based MySQL
  APIs and take full advantage of their speed.
 
 Sorry to go back in time, I wanted to go back to an important point.
 
 It seems we have three possible approaches:
 * C lib and eventlet, blocks whole process
 * pure python lib, and eventlet, eventlet does its thing
 * go for a C lib and dispatch calls via thread pool

* go with a pure C protocol lib which explicitly uses `python patch-able`
  I/O functions (maybe other primitives too: threading, mutex, sleep ...)

* go with a pure C protocol lib where the Python part explicitly calls
  `decode` and `encode`; the C part just does CPU-intensive operations
  and never calls I/O primitives.

 We have a few problems:
 * performance sucks, we have to fork lots of nova-conductors and api nodes
 * need to support python2.7 and 3.4, but its not currently possible
 with the lib we use?
 * want to pick a lib that we can fix when there are issues, and work to
 improve
 
 It sounds like:
 * currently do the first one, it sucks, forking nova-conductor helps
 * seems we are thinking the second one might work, we sure get py3.4 +
 py2.7 support
 * the last will mean more work, but its likely to be more performant
 * worried we are picking a unsupported lib with little future
 
 I am leaning towards us moving to making DB calls with a thread pool
 and some fast C based library, so we get the 'best' performance.
 
 Is that a crazy thing to be thinking? What am I missing here?

Using the python socket from C code:
https://github.com/esnme/ultramysql/blob/master/python/io_cpython.c#L100

It is also possible to implement a MySQL driver just as a protocol parser,
and then you are free to use your favorite event-based I/O strategy (direct
epoll usage) even without eventlet (or similar).

The issue with ultramysql is that it does not implement the `standard`
Python DB API, so you would need to add an extra wrapper for SQLAlchemy.
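To illustrate the thread-pool idea John mentioned, here is a rough sketch. It uses the stdlib ThreadPoolExecutor purely as a stand-in for eventlet.tpool, and the query function is a placeholder, not a real driver call:

```python
from concurrent.futures import ThreadPoolExecutor

# A blocking C-library DB call would stall the whole green-thread
# process; dispatching it to a native thread pool keeps the event
# loop free while the C driver waits on the socket.
_pool = ThreadPoolExecutor(max_workers=4)


def blocking_query(sql):
    # Placeholder for a C-driver call that releases the GIL during I/O.
    return "row-for:%s" % sql


def async_query(sql):
    # In a real service this would be eventlet.tpool.execute(...);
    # ThreadPoolExecutor keeps the sketch stdlib-only and runnable.
    return _pool.submit(blocking_query, sql).result()
```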

 
 Thanks,
 John
 


Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread John Garbutt
On 9 May 2015 at 17:55, Adrian Otto adrian.o...@rackspace.com wrote:
 On the subject of extending the Nova API to accommodate special use cases of 
 containers that are beyond the scope of the Nova API, I think we should 
 resist that, and focus those container-specific efforts in Magnum.

+1
The API is my biggest worry here.

 I will also mention that it’s natural to be allergic to the idea of nested 
 virtualization. We all know that creating multiple levels of hardware 
 virtualization leads to bad performance outcomes. However, nested 
 containers do not carry that same drawback, because the actual overhead of a 
 Linux cgroup and Kernel Namespeaces are much lighter than a hardware 
 virtualization. There are cases where having a container-in-container setup 
 gives compelling benefits. That’s why I’ll argue vigorously for both Nova and 
 Magnum to be able to produce container instances both at the machine level, 
 and allow Magnum to produce "nested containers" to get better workload 
 consolidation density in a setup with no hypervisors at all.

+1
Agreed nested containers are a thing.
Its a great reason to keep our LXC driver.

 To sum up, I strongly support merging in nova-docker, with the caveat that it 
 operates within the existing Nova API (with few minor exceptions). For 
 features that require API features that are truly container specific, we 
 should land those in Magnum, and keep the Nova API scoped to operations that 
 are appropriate for "all instance types".

I am keen we set the right expectations here.
If you want to treat docker containers like VMs, thats OK.

I guess a remaining concern is the driver dropping into disrepair if
most folks end up using Magnum when they want to use docker.

Thanks,
John



Re: [openstack-dev] [nova] Service group foundations and features

2015-05-11 Thread Attila Fazekas




- Original Message -
 From: John Garbutt j...@johngarbutt.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, May 9, 2015 1:18:48 PM
 Subject: Re: [openstack-dev] [nova] Service group foundations and features
 
 On 7 May 2015 at 22:52, Joshua Harlow harlo...@outlook.com wrote:
  Hi all,
 
  In seeing the following:
 
  - https://review.openstack.org/#/c/169836/
  - https://review.openstack.org/#/c/163274/
  - https://review.openstack.org/#/c/138607/
 
  Vilobh and I are starting to come to the conclusion that the service group
  layers in nova really need to be cleaned up (without adding more features
  that only work in one driver), or removed or other... Spec[0] has
  interesting findings on this:
 
  A summary/highlights:
 
  * The zookeeper service driver in nova has probably been broken for 1 or
  more releases, due to eventlet attributes that are gone which it was using
  via the evzookeeper[1] library. Evzookeeper only works for eventlet <
  0.17.1. Please refer to [0] for details.
  * The memcache service driver really only uses memcache for a tiny piece of
  the service liveness information (and does a database service table scan to
  get the list of services). Please refer to [0] for details.
  * Nova-manage service disable (CLI admin api) does interact with the
  service
  group layer for the 'is_up'[3] API (but it also does a database service
  table scan[4] to get the list of services, so this is inconsistent with the
  service group driver API 'get_all'[2] view on what is enabled/disabled).
  Please refer to [9][10] for nova manage service enable disable for details.
* Nova service delete (REST api) seems to follow a similar broken pattern
  (it also avoids calling into the service group layer to delete a service,
  which means it only works with the database layer[5], and therefore is
  inconsistent with the service group 'get_all'[2] API).
 
  ^^ Doing the above makes both disable/delete agnostic about other backends
  available that may/might manage service group data for example zookeeper,
  memcache, redis etc... Please refer [6][7] for details. Ideally the API
  should follow the model used in [8] so that the extension, admin interface
  as well as the API interface use the same servicegroup interface which
  should be *fully* responsible for managing services. Doing so we will have
  a
  consistent view of services data, liveness, disabled/enabled and so-on...
 
  So with no disrespect to the authors of 169836 and 163274 (or anyone else
  involved), I am wondering if we can put a request in to figure out how to
  get the foundation of the service group concepts stabilized (or other...)
  before adding more features (that only work with the DB layer).
 
  What is the path to request some kind of larger coordination effort by the
  nova folks to fix the service group layers (and the concepts that are not
  disjoint/don't work across them) before continuing to add features on top
  of a 'shaky' foundation?
 
  If I could propose something it would probably work out like the following:
 
  Step 0: Figure out if the service group API + layer(s) should be
  maintained/tweaked at all (nova-core decides?)
 
  If maintain it:
 
   - Have an agreement that the nova service extension, the admin
  interface (nova-manage), and the API go through a common path for
  update/delete/read.
  * This common path should likely be the servicegroup API, so as to have a
  consistent view of the data; that also helps nova to add different
  data-stores (keeping the services data in a DB and getting numerous
  liveness updates every few seconds from N compute nodes, where N is pretty
  high, can be detrimental to Nova's performance).
   - At the same time allow 163274 to be worked on (since it fixes an edge-case
  that was asked about in the initial addition of the delete API in its
  initial code commit @ https://review.openstack.org/#/c/39998/)
   - Delay 169836 until the above two/three are fixed (and stabilized); its
  'down' concept (and all other usages of services that hit the database, as
  mentioned above) will need to go through the same service group foundation
  that is currently being skipped.
 
  Else:
- Discard 138607 and start removing the service group code (and just use
  the DB for all the things).
- Allow 163274 and 138607 (since those would be additions on-top of the
DB
  layer that will be preserved).
 
  Thoughts?
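  The "common path" in the proposal above can be sketched roughly as follows.
  This is only an illustration — the class and method names loosely follow
  nova.servicegroup but are not nova's actual code:

```python
class ServiceGroupDriver:
    """Backend (db, zookeeper, memcache, ...) interface."""
    def join(self, member, group):
        raise NotImplementedError

    def is_up(self, member, group):
        raise NotImplementedError

    def get_all(self, group):
        raise NotImplementedError


class DbDriver(ServiceGroupDriver):
    """In-memory stand-in for the database-backed driver."""
    def __init__(self):
        self._groups = {}  # stand-in for the services table

    def join(self, member, group):
        self._groups.setdefault(group, {})[member] = {"alive": True}

    def is_up(self, member, group):
        return self._groups.get(group, {}).get(member, {}).get("alive", False)

    def get_all(self, group):
        return sorted(self._groups.get(group, {}))


class ServiceGroupAPI:
    """The one facade the REST delete, nova-manage disable, and the
    scheduler would all go through, so they share a consistent view."""
    def __init__(self, driver):
        self._driver = driver

    def join(self, member, group):
        self._driver.join(member, group)

    def service_is_up(self, member, group):
        return self._driver.is_up(member, group)

    def get_all(self, group):
        return self._driver.get_all(group)
```

  Swapping DbDriver for a zookeeper- or memcache-backed driver would then
  change the data store without changing any caller.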
 
 I wonder about this approach:
 
 * I think we need to go back and document what we want from the
 service group concept.
 * Then we look at the best approach to implement that concept.
 * Then look at the best way to get to a happy place from where we are now,
 ** Noting we will need live upgrade for (at least) the most widely
 used drivers
 
 Does that make any sense?
 
 Things that pop into my head, include:
 * The operators have been asking questions like: Should new services
 not 

Re: [openstack-dev] [nova] Service group foundations and features

2015-05-11 Thread Chris Friesen

On 05/11/2015 07:13 AM, Attila Fazekas wrote:

From: John Garbutt j...@johngarbutt.com



* From the RPC api point of view, do we want to send a cast to
something that we know is dead, maybe we want to? Should we wait for
calls to timeout, or give up quicker?


How to fail sooner:
https://bugs.launchpad.net/oslo.messaging/+bug/1437955

We do not need a dedicated is_up just for this.


Is that really going to help?  As I understand it, if nova-compute dies (or is 
isolated), the queue remains present on the server but nothing will process 
messages from it.


Chris
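To make the trade-off concrete, here is a stdlib-only sketch of a caller
waiting on a reply that will never arrive (the dead-service case above). The
shapes here are illustrative, not oslo.messaging's API; in oslo.messaging the
corresponding knob would be the timeout passed when preparing the RPC call:

```python
import queue

def rpc_call(reply_queue, timeout):
    """Illustrative stand-in for an RPC 'call': block until a reply
    shows up on the queue, or give up after `timeout` seconds."""
    try:
        return reply_queue.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError("no reply -- the target service may be dead")

# The queue exists on the broker, but the consumer (nova-compute) died,
# so nothing will ever publish a reply:
replies = queue.Queue()

try:
    rpc_call(replies, timeout=0.05)  # fail fast instead of hanging
except TimeoutError as exc:
    print(exc)
```

Shortening the timeout fails sooner, but as Chris notes it cannot distinguish
a dead consumer from a merely slow one.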

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-11 Thread Alexis Lee
James Slagle said on Tue, May 05, 2015 at 07:57:46AM -0400:
 I also plan to remove Alexis Lee from core, who previously has
 expressed that he'd be stepping away from TripleO for a while. Alexis,
 thank you for reviews and contributions!

Just confirming this is fine by me. Thanks!


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



[openstack-dev] Who is using nova-docker? (Re: [nova-docker] Status update)

2015-05-11 Thread Davanum Srinivas
Good points, Dan and John.

At this point it may be useful to see who is actually using
nova-docker. Can folks who are using any version of nova-docker,
please speak up with a short description of their use case?

Thanks,
dims

On Mon, May 11, 2015 at 10:06 AM, Dan Smith d...@danplanet.com wrote:
 +1 Agreed nested containers are a thing. Its a great reason to keep
 our LXC driver.

 I don't think that's a reason we should keep our LXC driver, because you
 can still run containers in containers with other things. If anything,
 using a nova vm-like container to run application-like containers inside
 them is going to beg the need to tweak more detailed things on the
 vm-like container to avoid restricting the application one, I think.

 IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
 nova is because it's nearly free. It is the libvirt driver with a few
 conditionals to handle different things when necessary for LXC. The
 docker driver is a whole other nova driver to maintain, with even less
 applicability to being a system container (IMHO).

 I am keen we set the right expectations here. If you want to treat
 docker containers like VMs, thats OK.

 I guess a remaining concern is the driver dropping into disrepair
 if most folks end up using Magnum when they want to use docker.

 I think this is likely the case and I'd like to avoid getting into this
 situation again. IMHO, this is not our target audience, it's very much
 not free to just put it into the tree because meh, some people might
 like it instead of the libvirt-lxc driver.

 --Dan




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread Daniel P. Berrange
On Mon, May 11, 2015 at 02:57:08PM +0100, John Garbutt wrote:
 On 9 May 2015 at 17:55, Adrian Otto adrian.o...@rackspace.com wrote:
  On the subject of extending the Nova API to accommodate special use cases 
  of containers that are beyond the scope of the Nova API, I think we should 
  resist that, and focus those container-specific efforts in Magnum.
 
 +1
 The API is my biggest worry here.
 
  I will also mention that it’s natural to be allergic to the idea of nested 
  virtualization. We all know that creating multiple levels of hardware 
  virtualization leads to bad performance outcomes. However, nested 
  containers do not carry that same drawback, because the actual overhead of 
  a Linux cgroup and kernel namespaces is much lighter than that of hardware 
  virtualization. There are cases where having a container-in-container setup 
  gives compelling benefits. That’s why I’ll argue vigorously both for Nova 
  and Magnum to be able to produce container instances at the machine 
  level, and for Magnum to produce “nested containers” for better 
  workload consolidation density in a setup with no hypervisors at all.
 
 +1
 Agreed nested containers are a thing.
 Its a great reason to keep our LXC driver.
 
  To sum up, I strongly support merging in nova-docker, with the caveat that 
  it operates within the existing Nova API (with few minor exceptions). For 
  features that require API features that are truly container specific, we 
  should land those in Magnum, and keep the Nova API scoped to operations 
  that are appropriate for “all instance types”.
 
 I am keen we set the right expectations here.
 If you want to treat docker containers like VMs, thats OK.
 
 I guess a remaining concern is the driver dropping into disrepair if
 most folks end up using Magnum when they want to use docker.

Yep, I'm personally fine with Docker being merged into Nova from a
technical and design POV. My only two requirements are social/community
ones

 - Do we have a team of people willing and able to commit to
   maintaining it in Nova - ie we don't just want it to bitrot
   and have nova cores left to pick up the pieces whenever it
   breaks.

 - Are there enough people who are actually interested in using
   Docker-in-Nova, that it is worth the overhead for that team
   of maintainers. We don't want to decide to support something
   that hardly anyone ends up using, because that could divert
   resources away that could usefully be put on Docker-in-Magnum
   and get a better return on investment for users.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] How to configure sriov nic using in neutron ml2 plugin

2015-05-11 Thread Jakub Libosvar
On 05/11/2015 04:13 PM, Kamsali, RaghavendraChari (Artesyn) wrote:
 Hi,
 
  
 
  I want to use an SR-IOV-capable NIC (Intel XL710) for VM
  instantiation, so I would like to configure the ML2 plugin for the Intel
  XL710 NIC. Could anyone help me?
 
  
 
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
 
  
 
 https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
 
  
 
  Some relevant references are provided above.

Hi Raghavendrachari,

you may also find these blog posts [1][2] by Nir Yechiel interesting and
they may help you with configuration.

If you have any particular issue or question, feel free to ask on the list.

Good luck.
Kuba

[1]
http://redhatstackblog.redhat.com/2015/03/05/red-hat-enterprise-linux-openstack-platform-6-sr-iov-networking-part-i-understanding-the-basics/
[2]
http://redhatstackblog.redhat.com/2015/04/29/red-hat-enterprise-linux-openstack-platform-6-sr-iov-networking-part-ii-walking-through-the-implementation/
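For reference, a minimal sketch of the kind of configuration those guides walk
through. The values are illustrative only — the PCI vendor/device IDs must be
taken from `lspci -nn` for your actual XL710 VFs, and option names can differ
between releases:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (controller)
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

[ml2_sriov]
# vendor_id:product_id pairs the plugin will accept; 8086:10ed shown only
# as an example -- use the IDs reported by `lspci -nn` for your VFs
supported_pci_vendor_devs = 8086:10ed

# /etc/nova/nova.conf (compute node)
[DEFAULT]
pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet2"}
```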

 
  
 
  
 
 Thanks and Regards,
 
  Raghavendrachari Kamsali | Software Engineer II | Embedded Computing
 
  Artesyn Embedded Technologies | 5th Floor, Capella Block, The V,
  Madhapur | Hyderabad, AP 500081 India
 
  T +91-40-66747059 | M +919705762153
 
  
 
 
 
 




Re: [openstack-dev] [Cinder] [Nova] Quota delete API behavior and incorrect Quota usage reported after deletion

2015-05-11 Thread Gorka Eguileor
On Wed, Apr 08, 2015 at 10:13:27PM +0200, Gorka Eguileor wrote:
 Hi all,
 
 This message is in relation with a bug on Quota deletion API that
 affects both Nova [1] and Cinder [2], so we should discuss together
 what's the desired solution as to be consistent in both projects.
 
 Currently the Quota deletion API removes not only quota limits, but also
 current quota usage and quota reservations, which, from an operator's
 point of view, doesn't make much sense, since they expect usage and
 reservations to be preserved.
 
 I first created a patch for Cinder [3], then seeing that Nova's issue
 was the same I created one for Nova as well [4], but the solution was
 not unanimously accepted, and it was suggested that a new endpoint
 should be created for this new behavior (only deleting Quota limits).
 
 My reasoning for not creating a new endpoint in the first place and
 changing current endpoint instead, is that I saw this as a bug, not a
 new feature; I believe delete endpoint, like create, is only meant to
 affect Quota limits, as Usage and Reservations are handled by OpenStack
 itself. If you cannot create quota usage manually, you shouldn't be able
 to delete it manually either.
 
 Some additional topics were discussed on IRC and Gerrit:
 - Shouldn't delete set quota limits to unlimited instead of deleting
   them and thus apply default quota limits?: This is a matter of how the
   Quota delete is understood, as Delete quotas and leave no quota
   limits, not even defaults or just Delete additional quota limits
   that override defaults.
 - What about cascade deleting a tenant? Wouldn't it need to delete Usage
   and Reservations with the same API call?: Since Quota Reservations and
   Usage are handled by OpenStack, once related resources are deleted so
   will be the pertinent Reservations and Usage Quotas.
 
 In the matter of setting Quotas to unlimited on deletion I believe we
 should keep current behavior, which means it would use defaults or any
 other quotas that are in place, for example you delete a User's Quota
 limits, but Tenant's limits would still apply.
 
 As I see it, we should decide if it's OK to change existing endpoint or
 if, as it was suggested, we should create a new endpoint with a more
 pertinent name, like something related to reset quota limits.
 
 I, for one, believe we should change current behavior as it's not doing
 what it's meant to do. But I must admit that my understanding of how
 this endpoint is currently being used and how such a decision affects
 services is limited.
 
 Anyway, I have no problem changing the patches to whatever we decide is
 best.
 
 
 Cheers,
 Gorka.
 
 
 [1]: https://bugs.launchpad.net/nova/+bug/1410032
 [2]: https://bugs.launchpad.net/cinder/+bug/1410034
 [3]: https://review.openstack.org/#/c/162722/
 [4]: https://review.openstack.org/#/c/163423/
 

Update on the Cinder side: This was discussed in a meeting [1], we
agreed to change the API and the patch to fix this has already been
merged.

The reasons why we considered that changing the API was acceptable were:
- It was considered to be a bug: Unlike in Nova, API documentation is
  more explicit; stating that after deletion it would return to default
  values, which implies that it's only talking about limits.
- It falls within the API change guidelines [2], like the Bugfixed OK
  example that modifies the counting of hosts.
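For illustration, the agreed behaviour amounts to something like the following
sketch. The dictionaries are stand-ins for cinder's quota tables, not the
actual schema:

```python
# Illustrative stand-ins for the quota tables; not cinder's actual schema.
db = {
    "quotas":       {("tenant-a", "volumes"): 5},   # per-tenant overrides
    "quota_usages": {("tenant-a", "volumes"): 3},   # managed by the service
    "reservations": {("tenant-a", "volumes"): 1},   # managed by the service
}

def delete_tenant_quotas(db, tenant):
    """Quota 'delete' now removes only the override limits; usages and
    reservations are left to the service that created them."""
    db["quotas"] = {k: v for k, v in db["quotas"].items() if k[0] != tenant}
    # the old (buggy) behaviour also wiped db["quota_usages"]
    # and db["reservations"]

delete_tenant_quotas(db, "tenant-a")
assert db["quotas"] == {}                # defaults apply again
assert db["quota_usages"] != {}          # usage survives the delete
```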

Nova patch has been updated with the same solution in case it is
considered a valid fix.

Cheers,
Gorka.


[1]: 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-04-29-16.00.txt
[2]: https://wiki.openstack.org/wiki/APIChangeGuidelines



Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread Maxim Nestratov
Just as a reminder, not only libvirt-lxc can be used as an OS-like 
container provider: there is also the recently added libvirt-parallels 
driver. And regarding nested Docker support, we have just implemented it, 
so anyone will be able to run nested application Docker containers via 
libvirt-parallels containers.

Maxim Nestratov

On 11.05.2015 17:06, Dan Smith wrote:

+1 Agreed nested containers are a thing. Its a great reason to keep
our LXC driver.

I don't think that's a reason we should keep our LXC driver, because you
can still run containers in containers with other things. If anything,
using a nova vm-like container to run application-like containers inside
them is going to beg the need to tweak more detailed things on the
vm-like container to avoid restricting the application one, I think.

IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
nova is because it's nearly free. It is the libvirt driver with a few
conditionals to handle different things when necessary for LXC. The
docker driver is a whole other nova driver to maintain, with even less
applicability to being a system container (IMHO).


I am keen we set the right expectations here. If you want to treat
docker containers like VMs, thats OK.

I guess a remaining concern is the driver dropping into disrepair
if most folks end up using Magnum when they want to use docker.

I think this is likely the case and I'd like to avoid getting into this
situation again. IMHO, this is not our target audience, it's very much
not free to just put it into the tree because meh, some people might
like it instead of the libvirt-lxc driver.

--Dan






[openstack-dev] How to configure sriov nic using in neutron ml2 plugin

2015-05-11 Thread Kamsali, RaghavendraChari (Artesyn)
Hi,

I want to use an SR-IOV-capable NIC (Intel XL710) for VM instantiation, so 
I would like to configure the ML2 plugin for the Intel XL710 NIC. Could 
anyone help me?

https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Some relevant references are provided above.


Thanks and Regards,
Raghavendrachari kamsali | Software Engineer II  | Embedded Computing
Artesyn Embedded Technologies | 5th Floor, Capella Block, The V, Madhapur| 
Hyderabad, AP 500081 India
T +91-40-66747059 | M +919705762153



Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Mike Bayer



On 5/11/15 9:58 AM, Attila Fazekas wrote:




- Original Message -

From: John Garbutt j...@johngarbutt.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Dan Smith d...@danplanet.com
Sent: Saturday, May 9, 2015 12:45:26 PM
Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

On 30 April 2015 at 18:54, Mike Bayer mba...@redhat.com wrote:

On 4/30/15 11:16 AM, Dan Smith wrote:

There is an open discussion to replace mysql-python with PyMySQL, but
PyMySQL has worse performance:

https://wiki.openstack.org/wiki/PyMySQL_evaluation

My major concern with not moving to something different (i.e. not based
on the C library) is the threading problem. Especially as we move in the
direction of cellsv2 in nova, not blocking the process while waiting for
a reply from mysql is going to be critical. Further, I think that we're
likely to get back a lot of performance from a supports-eventlet
database connection because of the parallelism that conductor currently
can only provide in exchange for the footprint of forking into lots of
workers.

If we're going to move, shouldn't we be looking at something that
supports our threading model?

yes, but at the same time, we should change our threading model at the
level
of where APIs are accessed to refer to a database, at the very least using
a
threadpool behind eventlet.   CRUD-oriented database access is faster using
traditional threads, even in Python, than using an eventlet-like system or
using explicit async.  The tests at
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
show this.With traditional threads, we can stay on the C-based MySQL
APIs and take full advantage of their speed.

Sorry to go back in time, I wanted to go back to an important point.

It seems we have three possible approaches:
* C lib and eventlet, blocks whole process
* pure python lib, and eventlet, eventlet does its thing
* go for a C lib and dispatch calls via thread pool

* go with a pure C protocol lib which explicitly uses `python patch-able`
   I/O functions (and maybe others, like threading, mutex, sleep ...)

* go with a pure C protocol lib where the Python part explicitly calls
   `decode` and `encode`; the C part just does CPU-intensive operations
   and never calls I/O primitives.


We have a few problems:
* performance sucks, we have to fork lots of nova-conductors and api nodes
* need to support python2.7 and 3.4, but that's not currently possible
with the lib we use?
* want to pick a lib that we can fix when there are issues, and work to
improve

It sounds like:
* currently do the first one, it sucks, forking nova-conductor helps
* seems we are thinking the second one might work, we sure get py3.4 +
py2.7 support
* the last will mean more work, but its likely to be more performant
* worried we are picking an unsupported lib with little future

I am leaning towards us moving to making DB calls with a thread pool
and some fast C based library, so we get the 'best' performance.

Is that a crazy thing to be thinking? What am I missing here?

Using the python socket from C code:
https://github.com/esnme/ultramysql/blob/master/python/io_cpython.c#L100

Also possible to implement a mysql driver just as a protocol parser,
and you are free to use you favorite event based I/O strategy (direct epoll 
usage)
even without eventlet (or similar).

The issue with ultramysql is that it does not implement
the `standard` Python DB API, so you would need to add an extra wrapper for 
SQLAlchemy.


This driver appears to have seen its last commit about a year ago, and it 
doesn't even implement the standard DBAPI (which is already a red 
flag).   There is apparently a separately released (!) DBAPI-compat 
wrapper https://pypi.python.org/pypi/umysqldb/1.0.3 which has had no 
releases in two years. If this wrapper is indeed compatible with 
MySQLdb then it would run in SQLAlchemy without changes (though I'd be 
extremely surprised if it passes our test suite).


How would using these obscure libraries be preferable to running 
Nova API functions within the thread-pooling facilities already included 
with eventlet?  Keeping in mind that I've now done the work [1] 
to show that there is no performance gain to be had for all the trouble 
we go through to use eventlet/gevent/asyncio with local database 
connections.


[1] http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
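For what it's worth, the "C driver behind a thread pool" option reduces to
something like the following stdlib-only sketch. sqlite3 stands in for a
blocking C-based MySQL driver so the example is runnable; in eventlet the same
role is played by its tpool facility:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def blocking_query(sql):
    """Stand-in for a blocking C-driver call (e.g. MySQLdb); it blocks in
    C code, so running it in a real OS thread keeps the caller responsive."""
    conn = sqlite3.connect(":memory:")
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

# Dispatch blocking calls to worker threads so the event loop / greenlets
# are never blocked waiting on the database.
with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(blocking_query, "SELECT 1 + 1")
    print(future.result())  # [(2,)]
```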









Re: [openstack-dev] [Nova] Liberty mid-cycle meetup

2015-05-11 Thread Michael Still
Ok, given we've had a whole bunch of people sign up already and no
complaints here, I think this is a done deal. So, you can now assume
that the dates are final. I will email people currently registered to
let them know as well.

I have added the mid-cycle to the wiki as well.

Cheers,
Michael

On Fri, May 8, 2015 at 4:49 PM, Michael Still mi...@stillhq.com wrote:
 I thought I should let people know that we've had 14 people sign up
 for the mid-cycle so far.

 Michael

 On Fri, May 8, 2015 at 3:55 PM, Michael Still mi...@stillhq.com wrote:
 As discussed at the Nova meeting this morning, we'd like to gauge
 interest in a mid-cycle meetup for the Liberty release.

 To that end, I've created the following eventbrite event like we have
 had for previous meetups. If you sign up, you're expressing interest
 in the event and if we decide there's enough interest to go ahead we
 will email you and let you know it's safe to book travel and that
 your ticket is now a real thing.

 To save you a few clicks, the proposed details are 21 July to 23 July,
 at IBM in Rochester, MN.

 So, I'd appreciate it if people could take a look at:

 
 https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546

 Thanks,
 Michael

 PS: I haven't added this to the wiki list of sprints because it might
 not happen. When the decision is final, I'll add it to the wiki if we
 decide to go ahead.

 --
 Rackspace Australia



 --
 Rackspace Australia



-- 
Rackspace Australia



Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread Dan Smith
 +1 Agreed nested containers are a thing. Its a great reason to keep
 our LXC driver.

I don't think that's a reason we should keep our LXC driver, because you
can still run containers in containers with other things. If anything,
using a nova vm-like container to run application-like containers inside
them is going to beg the need to tweak more detailed things on the
vm-like container to avoid restricting the application one, I think.

IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
nova is because it's nearly free. It is the libvirt driver with a few
conditionals to handle different things when necessary for LXC. The
docker driver is a whole other nova driver to maintain, with even less
applicability to being a system container (IMHO).

 I am keen we set the right expectations here. If you want to treat
 docker containers like VMs, thats OK.
 
 I guess a remaining concern is the driver dropping into disrepair
 if most folks end up using Magnum when they want to use docker.

I think this is likely the case and I'd like to avoid getting into this
situation again. IMHO, this is not our target audience, it's very much
not free to just put it into the tree because meh, some people might
like it instead of the libvirt-lxc driver.

--Dan



[openstack-dev] [OSSN 0046] Setting services to debug mode can also set Pecan to debug

2015-05-11 Thread Nathan Kinder
Setting services to debug mode can also set Pecan to debug
---

### Summary ###
When debug mode is set for a service using Pecan (via --debug or
CONF.debug=True) Pecan is also set to debug. This can result in
accidental information disclosures.

### Affected Services / Software ###
Blazar, Ceilometer, Cue, Gnocchi, Ironic, Kite, Libra, Pecan, Tuskar

### Discussion ###
Although it's best practice to run production environments with
debugging functionality disabled, experience shows us that many
deployers choose to run OpenStack with debugging enabled to aid with
administration and fault finding.

When Pecan is running in debug mode, the following capabilities are made
available to anyone who can interact with the API service:

* Retrieve a stack trace of failed Pecan calls
* Retrieve a full list of environment variables containing potentially
sensitive information such as API credentials, passwords etc.
* Set an execution breakpoint which hangs the service with a pdb shell,
resulting in a denial of service
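The fixes referenced below generally decouple the two flags. A stdlib-only
sketch of the pattern — CONF and the `pecan_debug` option name are stand-ins,
not any project's actual configuration:

```python
class CONF:
    """Stand-in for oslo.config; `pecan_debug` is a hypothetical,
    separate opt-in for the framework's debug middleware."""
    debug = True          # operator wants verbose service logging
    pecan_debug = False   # framework debug stays off in production

def framework_debug(conf):
    # Vulnerable wiring:  return conf.debug
    # Fixed wiring: Pecan's debug middleware needs its own explicit opt-in.
    return conf.debug and conf.pecan_debug

# The result would then be passed to the app factory, e.g.:
# app = pecan.make_app(root_controller, debug=framework_debug(CONF))
print(framework_debug(CONF))  # False
```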

### Recommended Actions ###
At time of writing, Ceilometer, Gnocchi and Ironic have released fixes.
Deployers are encouraged to apply these fixes (see launchpad bug in
References) in their clouds. For services that do not have a fix, or
where fixes cannot be applied in existing deployments, we advise not
using the debug configuration for affected services in production
environments.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0046
Original LaunchPad Bug : https://bugs.launchpad.net/ironic/+bug/1425206
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Pecan : http://www.pecanpy.org/



[openstack-dev] [nova] Allow passing body for more API methods

2015-05-11 Thread Rosa, Andrea (HP Cloud Services)
Hi all,

I noticed that the nova API allows a body to be specified only for PUT and 
POST requests [0]; for all the other methods, if a body is specified, it gets 
ignored.
I had a look at RFC 7231 [1] and noticed that only TRACE must not have a 
body; for all the other requests a body can be passed, and managing or 
ignoring it depends on the semantics of the request.
For that reason my proposal is to allow, at the WSGI layer, a body for all 
requests except TRACE; it is then up to the specific controller to ignore 
or deal with the body of the request.
I put up a WIP implementing that change [3].
The rationale behind it is double:

-  Be more in compliance with the RFC

-  Having more flexibility in our framework. I have a valid (at least 
for me) use case here [4]: at the moment a volume detach is implemented using 
an HTTP DELETE, and I'd like to add the option of calling cinder's 
force-delete from nova.
My idea for implementing it is to add a parameter in the body of the DELETE 
call, but at the moment the only valid option is to create a new API using a 
POST method.
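To make the volume-detach use case concrete, here is a sketch of a DELETE
handler that honours an optional body. The names are illustrative, not nova's
actual controller code:

```python
import json

def delete_attachment(body: bytes):
    """Illustrative DELETE controller: an empty body keeps today's
    behaviour, while a JSON body can opt in to a forced detach."""
    force = False
    if body:
        force = bool(json.loads(body.decode("utf-8")).get("force", False))
    # ... here the real code would call cinder, passing `force` through ...
    return {"detached": True, "forced": force}

# DELETE with no body keeps the current semantics...
assert delete_attachment(b"") == {"detached": True, "forced": False}
# ...while a body can opt in to cinder's force-delete path.
assert delete_attachment(b'{"force": true}')["forced"] is True
```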

What do you think?
Regards
--
Andrea Rosa

[0] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L788
[1] https://tools.ietf.org/html/rfc7231
[3] https://review.openstack.org/181918
[4] https://bugs.launchpad.net/nova/+bug/1449221


[openstack-dev] [kolla] Meeting Cancelled for 5/20/2015

2015-05-11 Thread Steven Dake (stdake)
The Kolla IRC team meeting at 2200 UTC is cancelled because nearly all the team 
will be at OpenStack Developer Summit.

See ya there =)

Regards
-steve


Re: [openstack-dev] [oslo] Deprecation path?

2015-05-11 Thread Ihar Hrachyshka

On 04/24/2015 09:06 PM, Julien Danjou wrote:
 Hi Oslo team!
 
 So what's your deprecation path?
 
 I sent a patch for oslo.utils¹ using debtcollector, our new fancy 
 deprecation tool, and I got a -2 stating that there's no way we 
 deprecate something being used, and that we need to remove usage
 from the projects first.
 
 I don't necessarily agree with this, but I accepted the challenge 
 anyway. I started by writing only patches for Nova² and Cinder³,
 and now I see people complaining that my patch can't be merged
 because the function is not deprecated in Oslo.
 
 So before I start flipping tables, what do we do?
 
 
 ¹  https://review.openstack.org/#/c/148500/
 
 ²  https://review.openstack.org/#/c/164753/
 
 ³  https://review.openstack.org/#/c/165798/
 
 
 
 

Not directly answering your question, but it would be great to have a
bug that I can mark my project of interest (neutron) with and assign
to myself to make sure it's not lost.

Ihar
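For reference, the deprecate-in-place option Julien describes boils down to
something like this stdlib sketch; debtcollector provides similar helpers
(e.g. its `moves` decorators), and the names here are illustrative:

```python
import warnings

def moved_function(new_fn, old_name):
    """Return a wrapper that keeps the old name working for consuming
    projects for a cycle, while warning them to migrate."""
    def old_fn(*args, **kwargs):
        warnings.warn("%s is deprecated, use %s instead"
                      % (old_name, new_fn.__name__),
                      DeprecationWarning, stacklevel=2)
        return new_fn(*args, **kwargs)
    return old_fn

def bool_from_string(value):
    return str(value).strip().lower() in ("1", "true", "yes", "on")

# The deprecated alias stays importable by nova/cinder/neutron while they
# migrate; it emits DeprecationWarning on every use.
strtobool = moved_function(bool_from_string, "strtobool")
```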



Re: [openstack-dev] [nova-docker] Status update

2015-05-11 Thread Eric Windisch
On Mon, May 11, 2015 at 10:06 AM, Dan Smith d...@danplanet.com wrote:

  +1 Agreed nested containers are a thing. It's a great reason to keep
  our LXC driver.

 I don't think that's a reason we should keep our LXC driver, because you
 can still run containers in containers with other things. If anything,
 using a nova vm-like container to run application-like containers inside
 them is going to beg the need to tweak more detailed things on the
 vm-like container to avoid restricting the application one, I think.

 IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
 nova is because it's nearly free. It is the libvirt driver with a few
 conditionals to handle different things when necessary for LXC. The
 docker driver is a whole other nova driver to maintain, with even less
 applicability to being a system container (IMHO).



Magnum is clearly geared toward greenfield development.

The Docker driver's sweet-spot is for the user wishing to replace existing
VMs and Nova orchestration with high-performance containers without having
to rewrite for Magnum, or having to deal with the complexities or
hardware-specific bits of Ironic (plus having a tad bit more security). As
an Ironic alternative, it may have a promising long-term life. As for a
mechanism for providing legacy migrations... the future is less clear.
Greenfield applications will go straight to Magnum or non-OpenStack
solutions while the number of legacy applications to be migrated from
nova-libvirt/xen/vmware to nova-docker is unknowable. However, I do expect
it to be a number likely to swell, then diminish as time goes on. Arguably,
the same could be presumed about VMs, however.

It's also worth noting that LXD is pushing to be a container-like-a-VM
solution, so support for building tools to provide legacy VM to container
migrations must be of interest to somebody.





 I think this is likely the case and I'd like to avoid getting into this
 situation again. IMHO, this is not our target audience, it's very much
 not free to just put it into the tree because meh, some people might
 like it instead of the libvirt-lxc driver.
  - Do we have a team of people willing and able to commit to
maintaining it in Nova - ie we don't just want it to bitrot
and have nova cores left to pick up the pieces whenever it
breaks.


The two reasons I have preferred this code stay out of tree (until now) has
been the breaking changes we wished to land, and the community involvement.
This driver was not the first driver I've been involved with that has had
these problems, and ultimately I had wished development were out of tree.
Having the code out of tree has been very good for nova-docker.

However, I believe that the period of high-frequency changes is over, with
many of the critical goals reached... but that calls into question your
second point, which is the level of continued maintenance. To this, I
cannot answer, but I will say that the team right now is vanishingly small,
and I do wish it were larger. I give a lot of credit to Dims in particular for
keeping this afloat, but this effort needs more contributors if it is to
stay alive. As for myself, for the record, I am seldom involved at this
point, but do contribute some occasional time into reviews or the odd patch
in my free time.

I'll finish by saying that I do think it's finally time to consider pulling it
back in.  While doing so may not attract contributors, I know being in
Stackforge has certainly been a deterrent to both potential contributors and
users.

Regards,
Eric Windisch


Re: [openstack-dev] [Sahara][oslo]Error in Log.debug

2015-05-11 Thread Ihar Hrachyshka

On 05/05/2015 09:58 AM, Li, Chen wrote:
 I found the reason.
 
 
 
 When devstack install sahara, “logging_context_format_string” would
 be configured by default to:
 
 logging_context_format_string = %(asctime)s.%(msecs)03d 
 %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s 
 ^[[00;36m%(*user_name*)s %(project_name)s%(color)s] 
 ^[[01;35m%(instance)s%(color)s%(message)s^[[00m

Note that I am going to change the default colorized format string to
reflect what's in oslo.log by default:
https://review.openstack.org/#/c/172509/2/functions

Ihar
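For context, `logging_context_format_string` is an ordinary %-style format string whose extra keys (`request_id`, `user_name`, `project_name`, ...) must be supplied on every record; a record that lacks one of them blows up during formatting, which is the class of error being chased here. A stdlib-only sketch of the mechanism, with a simplified format string and invented values:

```python
import io
import logging

# A stripped-down analogue of oslo.log's context format string.
fmt = "%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_name)s] %(message)s"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt))

log = logging.getLogger("sahara.demo")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

# The context keys arrive via `extra`; omitting one would raise a
# KeyError for the missing attribute during formatting.
log.debug("cluster %s created", "c-1",
          extra={"request_id": "req-123", "user_name": "demo"})

print(stream.getvalue().strip())
```

This is why the devstack default and the oslo.log default have to agree on which keys appear in the string: the format string and the supplied context must match key-for-key.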



[openstack-dev] [Sahara] improve oozie engine for common lib management

2015-05-11 Thread lu jander
Hi, All
Currently, the Oozie share lib is not well known and is hard for users to take
advantage of, so I think we can make it less Oozie-specific and more friendly
for users. It can be used for running jobs that depend on third-party libs; if
many jobs use the same libs, the Oozie share lib can hold them as a common
shared library.

here is the bp,
https://blueprints.launchpad.net/sahara/+spec/improve-oozie-share-lib
I will write a spec soon, after the scheduled EDP jobs BP.


Re: [openstack-dev] [nova][api] Allow passing body for more API methods

2015-05-11 Thread Kevin L. Mitchell
(Added [api] to subject to bring in the attention of the API team.)

On Mon, 2015-05-11 at 14:48 +, Rosa, Andrea (HP Cloud Services)
wrote:
 I noticed that in the nova API we allow a body to be specified just for
 PUT and POST requests [0]; for all the other methods, if a body is
 specified, it gets ignored.
 
 I had a look at RFC 7231 [1] and noticed that just the TRACE method
 must not have a body; for all the other requests a body can be passed,
 and handling or ignoring it depends on the semantics of the request.
 
 For that reason my proposal is to allow, at the WSGI layer, a body to
 be defined for all the requests but TRACE; it is then up to the specific
 controller to ignore or deal with the body in the request.
 
 I put a WIP to implement that change [3].
 
 The rationale behind it is double:
 
 - Be more in compliance with the RFC
 
 - Having more flexibility in our framework. I have a valid
 (at least for me) use case here [4]: at the moment a volume detach is
 implemented using an HTTP DELETE; I'd like to add the option of
 calling cinder's --force-delete from nova.
 
 My idea to implement it is to add a parameter in the body of the
 DELETE call, but at the moment the only valid option is to create a
 new API using a POST method.

I have worked with client frameworks which raise exceptions if you
attempt to pass a body using the DELETE method, and would presumably
also prohibit a body with GET and HEAD, so I'm -1 on this: we should
actively discourage service developers from requiring bodies on HTTP
methods that a client framework may prohibit sending bodies with.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace
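The "no defined semantics" point is easy to demonstrate with the standard library alone: the HTTP machinery will happily transmit a DELETE body, but whether the far end reads it, ignores it, or rejects the whole request is entirely implementation-specific. A minimal sketch against a hypothetical endpoint, stdlib only:

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_DELETE(self):
        # This server chooses to read the DELETE body; another stack
        # might silently ignore it or reject the request outright.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)  # echo it back for the demo

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# http.client sends the body without complaint...
conn.request("DELETE", "/volumes/123", body=json.dumps({"force": True}),
             headers={"Content-Type": "application/json"})
resp = conn.getresponse()
echoed = json.loads(resp.read())
print(resp.status, echoed)
server.shutdown()
```

Whether this round-trip survives in the wild depends on every proxy and client framework in the path, which is exactly the interoperability risk being raised in this thread.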




[openstack-dev] Server Error when attempting to sign ICLA at review.openstack.org

2015-05-11 Thread Earle Philhower
Hi all,

Now that HGST has signed the commercial contributor agreement,
I'm trying to sign the individual one under my work email
(earlephilhower username) but am running into persistent
errors.

I'm able to get to the agreement just fine,
https://review.openstack.org//#/settings/new-agreement, but
every time I fill in the form with my address, phone, etc.
and enter I AGREE and hit Submit, the page errors out with:
  Code Review - Error
  Server Error
  Cannot store contact information

The same error shows up when attempting to set my contact
info under Settings-Contact.

I've tried different browsers (Firefox and Chrome) and
removing all commas, periods, and only using [a-zA-Z0-9] in
all fields, to no effect.

I didn't see any contact info about the licensing system on the
site.  If anyone monitoring -dev is responsible for this, or
has an email or someone I can get in touch with, I'd much
appreciate it.

Thanks for any help,
-Earle F. Philhower, III
 earle.philhower@hgst.com




Re: [openstack-dev] [nova][api] Allow passing body for more API methods

2015-05-11 Thread Monty Taylor
On 05/11/2015 02:05 PM, Dean Troyer wrote:
 On Mon, May 11, 2015 at 11:44 AM, Rosa, Andrea (HP Cloud Services) 
 andrea.r...@hp.com wrote:
 
 Agreed. Violating the HTTP spec is something that should be avoided.

 Actually it is not violating the HTTP spec, from RFC:
   A payload within a DELETE request message has no defined semantics;
sending a payload body on a DELETE request might cause some existing
implementations to reject the request.

  The RFC prohibits the use of a body just for TRACE:
  A client MUST NOT send a message body in a TRACE request.

 
 
  When playing in undefined areas such as this it is best to keep in mind Jon
  Postel's RFC 1122 principle: Be liberal in what you accept, and conservative
  in what you send.
 
 I'll put it this way:  An RFC not prohibiting something does not make it a
 good idea.  This is not how we build a robust API that developers and users
 can easily adopt.

++




[openstack-dev] [chef] 2015-05-11 Status Meeting

2015-05-11 Thread JJ Asghar
Hey everyone!

Here’s a link[1] to the status meeting for this week. Just as a reminder, we are 
officially moving the 1600 GMT time slot as of next week. I’m still in the 
process of finding the official channel and I’ll respond to this email when I 
have it.

Unless there are significant objections, I believe we should have our status 
meeting for next week also, and the agenda is posted here[2].  I understand a 
lot of us will be at Summit, but even with the possibility of progress on some 
of the outstanding action items over this next week, it would be nice to get an 
update.

Questions, concerns, thoughts?

-JJ


[1]: https://etherpad.openstack.org/p/openstack-chef-meeting-20150511 
https://etherpad.openstack.org/p/openstack-chef-meeting-20150511
[2]: https://etherpad.openstack.org/p/openstack-chef-meeting-20150518 
https://etherpad.openstack.org/p/openstack-chef-meeting-20150518



Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session (was: [all] who is the ptl of trove?) [imm]

2015-05-11 Thread Amrith Kumar
One thing I'd definitely like to learn more about is how far along Zaqar is at 
this stage. I don't have a great understanding of it and would certainly have a 
number of newbie questions as well. But I'll do some reading in the 
next week and be sure to attend the session.

-amrith

| -Original Message-
| From: Flavio Percoco [mailto:fla...@redhat.com]
| Sent: Monday, May 11, 2015 1:28 PM
| To: OpenStack Development Mailing List (not for usage questions)
| Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration.
| Summit working session (was: [all] who is the ptl of trove?)
| 
| On 11/05/15 16:52 +, Doug Shelley wrote:
| Flavio,
| 
| This definitely sounds like a good idea. I know that many of us in the
| Trove community will be in Vancouver - what would make sense in terms
| of organizing a discussion?
| 
| I think we can use one of Zaqar's work sessions for this. We don't have
| any currently allocated so, you guys are free to pick one that works Ok
| for you.[0]
| 
| I've some info gotten from previous summits that I could put in the
| session description but I'd like to have an outline of points/doubts you
| would like to discuss from which we can reach agreements or at least
| generate action items.
| 
| Do you have doubts/questions you'd like answered in this session?
| Flavio
| 
| [0]
| http://libertydesignsummit.sched.org/overview/type/design+summit/Zaqar#.VV
| Dl_PYU9hF
| 
| 
| Regards,
| Doug
| 
| 
| 
| On 2015-05-11, 5:49 AM, Flavio Percoco fla...@redhat.com wrote:
| 
| On 08/05/15 00:45 -0700, Nikhil Manchanda wrote:
| 3)  The trove-guest-agent is in the VM; it is connected to the
| taskmanager via RabbitMQ. We designed it that way, but is there an
| established practice for doing this?
|  How do we make the VM reachable on both the VM network and the
|  management network?
| 
| Most deployments of Trove that I am familiar with set up a separate
| RabbitMQ server in cloud that is used by Trove. It is not recommended
| to use the same infrastructure RabbitMQ server for Trove for security
| reasons. Also most deployments of Trove set up a private (neutron)
| network that the RabbitMQ server and guests are connected to, and all
| RPC messages are sent over this network.
| 
| We've discussed trove+zaqar in the past and I believe some folks from
| the Trove team have been in contact with Fei Long lately about this.
| Since one of the projects goal's for this cycle is to provide support
| to other projects and contribute to the adoption, I'm wondering if any
| of the members of the trove team would be willing to participate in a
| Zaqar working session completely dedicated to this integration?
| 
| It'd be a great opportunity to figure out what's really needed, edge
| cases and get some work done on this specific case.
| 
| Thanks,
| Flavio
| 
| --
| @flaper87
| Flavio Percoco
| 
| 
| 
| --
| @flaper87
| Flavio Percoco


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread John Belamaric
Hi Erik,

Infoblox is also interested in this functionality, and we may be able to
help out as well.

Thanks,
John


On 5/8/15, 1:23 PM, Jay Pipes jaypi...@gmail.com wrote:

On 05/08/2015 09:29 AM, Erik Moe wrote:
 Hi,

 I have not been able to work with upstreaming of this for some time now.
 But now it looks like I may make another attempt. Who else is interested
 in this, as a user or to help contributing? If we get some traction we
 can have an IRC meeting sometime next week.

Hi Erik,

Mirantis has interest in this functionality, and depending on the amount
of work involved, we could pitch in...

Please cc me or add me to relevant reviews and I'll make sure the right
folks are paying attention.

All the best,
-jay

 *From:*Scott Drennan [mailto:sco...@nuagenetworks.net]
 *Sent:* den 4 maj 2015 18:42
 *To:* openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [neutron]Anyone looking at support for
 VLAN-aware VMs in Liberty?

 VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I
 don't see any work on VLAN-aware VMs for Liberty.  There is a
 blueprint[1] and specs[2] which was deferred from Kilo - is this
 something anyone is looking at as a Liberty candidate?  I looked but
 didn't find any recent work - is there somewhere else work on this is
 happening?  No-one has listed it on the liberty summit topics[3]
 etherpad, which could mean it's uncontroversial, but given history on
 this, I think that's unlikely.

 cheers,

 Scott

 [1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

 [2]: https://review.openstack.org/#/c/94612

 [3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics



 




Re: [openstack-dev] [Magnum] Containers and networking

2015-05-11 Thread Bob Melander (bmelande)
I agree, that is my take too.

Russell, since you lead the OVN session in Vancouver, would it be possible to 
include the VLAN-aware-vms BP in that session?

Thanks,
Bob

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: fredag 3 april 2015 13:17
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Containers and networking

This puts me in mind of a previous proposal, from the Neutron side of things. 
Specifically, I would look at Erik Moe's proposal for VM ports attached to 
multiple networks: 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms .

I believe that you want logical ports hiding behind a conventional port (which 
that has); the logical ports attached to a variety of Neutron networks despite 
coming through the same VM interface (ditto); and an encap on the logical port 
with a segmentation ID (that uses exclusively VLANs, which probably suits here, 
though there's no particular reason why it has to be VLANs or why it couldn't 
be selectable).  The original concept didn't require multiple ports attached to 
the same incoming subnetwork, but that's a comparatively minor adaptation.
--
Ian.
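To make the shape Ian describes concrete, here is a purely hypothetical data model (invented names, not the actual blueprint API): a conventional parent port carries logical sub-ports, each attached to a different Neutron network and distinguished by a segmentation ID on the VM interface.

```python
from dataclasses import dataclass, field

@dataclass
class SubPort:
    network_id: str         # the Neutron network this logical port attaches to
    segmentation_type: str  # "vlan" here, but nothing forces that choice
    segmentation_id: int    # tag used on the wire inside the VM interface

@dataclass
class TrunkPort:
    port_id: str                      # the conventional port the VM sees
    sub_ports: list = field(default_factory=list)

    def add_sub_port(self, sp: SubPort):
        # One VM interface, many Neutron networks: tags must be unique
        # within the trunk so traffic can be demultiplexed.
        if any(s.segmentation_id == sp.segmentation_id for s in self.sub_ports):
            raise ValueError("segmentation id %d already in use" % sp.segmentation_id)
        self.sub_ports.append(sp)

trunk = TrunkPort(port_id="port-1")
trunk.add_sub_port(SubPort("net-a", "vlan", 100))
trunk.add_sub_port(SubPort("net-b", "vlan", 200))
print([s.network_id for s in trunk.sub_ports])  # ['net-a', 'net-b']
```

The adaptation Ian mentions, multiple sub-ports on the same incoming subnetwork, would relax only the uniqueness rule on networks, not the rule on tags.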


On 2 April 2015 at 11:35, Russell Bryant rbry...@redhat.com wrote:
On 04/02/2015 01:45 PM, Kevin Benton wrote:
 +1. I added a suggestion for a container networking suggestion to the
 etherpad for neutron. It would be sad if the container solution built
 yet another overlay on top of the Neutron networks with yet another
 network management workflow. By the time the packets are traveling
 across the wires, it would be nice not to have double encapsulation from
 completely different systems.

Yeah, that's what I like about this proposal.  Most of the existing work
in this space seems to result in double encapsulation.  Now we just need
to finish building it ...

--
Russell Bryant



Re: [openstack-dev] [Neutron] service chaining feature development meeting minutes

2015-05-11 Thread Isaku Yamahata
Hello Cathy. Thank you for arranging the meeting.

Will we have the GoToMeeting/IRC meeting this week (May 12) on this topic?
I haven't seen any announcement yet.

thanks in advance
Isaku Yamahata

On Tue, May 05, 2015 at 08:55:25PM +,
Cathy Zhang cathy.h.zh...@huawei.com wrote:

 Attendees (Sorry we did not catch all the names):
 Cathy Zhang (Huawei), Louis Fourie, Alex Barclay (HP), Vikram Choudhary,
 Carlos Goncalves (NTT) m...@cgoncalves.pt, Adolfo Duarte (HP) adolfo.dua...@hp.com,
 German Eichberger (HP) german.eichber...@hp.com, Swami Vasudevan (HP) swaminathan.vasude...@hp.com,
 Uri Elzur (Intel) uri.el...@intel.com, Joe D'Andrea (ATT) jdand...@research.att.com,
 Isaku Yamahata (Intel) isaku.yamah...@gmail.com, Malini Bhandaru malini.k.bhand...@intel.com,
 Michael Johnson (HP) john...@hp.com, Lynn Li (HP) lynn...@hp.com,
 Mickey Spiegel (IBM) emspi...@us.ibm.com, Ryan Tidwell (HP) ryan.tidw...@hp.com,
 Ralf Trezeciak openst...@trezeciak.de, David Pinheiro, Hardik Italia
 
 Agenda:
 Cathy presented the service chain architecture and blueprints and Gerrit 
 review for:
 
 -   Neutron Extensions for Service chaining
 
 -  Common Neutron SFC Driver API
 
 
 
 There is a lot of interest on this feature.
 
 
 
 Questions
 
 Uri: Does this BP specify the implementation in the backend data path? Cathy: 
 This could be implemented by various data path chaining schemes: eg. IETF 
 service chain header, VLAN.
 
 Uri: What happens if the security group/iptables conflict with SGs for 
 port-chain? Cathy: it is expected that conflicts will be resolved by the 
 upper layer Intent Infra.
 
 Vikram: Can other datapath transports be used? Cathy: Yes, what is defined in 
 the BP is the NBI for Neutron.
 
 Swami: Add a use case and example. Cathy: Will add. There is an SFC use case BP; 
 we will contact the authors.
 
 Carlos: How does this relate to the older traffic steering BP? Cathy: We can 
 discuss offline and work together to incorporate the traffic steering BP idea 
 into this BP.
 
 Uri: Are these two BPs the NBI and SBI for Neutron? Cathy: Yes.
 
 Isaku: Does Kyle know about the BPs? Cathy: Yes.
 
 Uri: how do these BPs relate to rest of Openstack? Cathy: There will be a 
 presentation on the service chain framework at 2pm May 18 at the Vancouver 
 Summit. It will be presented by Cathy together with Kyle Mestery and Dave 
 Lenrow.
 
 Swami:  It helps to present a simple service chain use case at meeting next 
 week. Suggest to have IRC meeting too as it provides a record of what is 
 discussed. Cathy: I have already created the IRC meeting for service chain 
 project, will start the IRC meeting too.
 
 Malini: we should have a complete spec so redesign is avoided.
 
 Swami: We should use Neutron design session at Vancouver to discuss this BP 
 and flesh out details.
 Action Items
 
 
 1. We will have two meetings next week: one goto meeting with audio and 
 data sharing, the other IRC meeting with link to google doc for sharing 
 diagrams. Cathy will send out the meeting info to the community
 
 2. Swami will send an email to Kyle Mestery requesting a time slot for 
 service chaining topic in Neutron design session so that we can get all 
 interested parties in one room for a good face-to-face discussion.
 
 3. Cathy will discuss with Carlos offline about incorporating the traffic 
 steering idea into the service chain BPs.
 
 4. Service chaining is a complicated solution involving management plane, 
 control plane, and data plane as well as multiple components. The consensus 
 is to first design/implement the two Neutron related service chain BPs 
 https://review.openstack.org/#/c/177946 and get the feature approved by 
 Neutron Core/Driver team and implemented for the OpenStack L release. If we 
 can not get a slot in the design session, we will meet at the service chain 
 presentation on May 18 and find a room to discuss how to move forward with 
 this feature development in OpenStack.
 
 Feel free to chime in if I miss any points.
 
 Thanks,
 Cathy
 
 
 



-- 
Isaku Yamahata isaku.yamah...@gmail.com


Re: [openstack-dev] [nova][api] Allow passing body for more API methods

2015-05-11 Thread Dean Troyer
On Mon, May 11, 2015 at 11:44 AM, Rosa, Andrea (HP Cloud Services) 
andrea.r...@hp.com wrote:

  Agreed. Violating the HTTP spec is something that should be avoided.

 Actually it is not violating the HTTP spec, from RFC:
   A payload within a DELETE request message has no defined semantics;
sending a payload body on a DELETE request might cause some existing
implementations to reject the request.

 The RFC prohibits the use of a body just for TRACE:
  A client MUST NOT send a message body in a TRACE request.



When playing in undefined areas such as this it is best to keep in mind Jon
Postel's RFC 1122 principle: Be liberal in what you accept, and conservative
in what you send.

I'll put it this way:  An RFC not prohibiting something does not make it a
good idea.  This is not how we build a robust API that developers and users
can easily adopt.

dt


Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] Server Error when attempting to sign ICLA at review.openstack.org

2015-05-11 Thread Jeremy Stanley
On 2015-05-11 18:43:48 + (+), Earle Philhower wrote:
[...]
   Code Review - Error
   Server Error
   Cannot store contact information
[...]

Make sure you follow the instructions completely and in sequence:

http://docs.openstack.org/infra/manual/developers.html#account-setup

Also https://ask.openstack.org/en/question/56720 has a couple of
useful troubleshooting steps.
-- 
Jeremy Stanley



Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread Tidwell, Ryan
Erik,

I’m looking forward to seeing this blueprint re-proposed and am able to pitch 
in to help get this into Liberty.  Let me know how I can help.

-Ryan

From: Erik Moe [mailto:erik@ericsson.com]
Sent: Friday, May 08, 2015 6:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?


Hi,

I have not been able to work with upstreaming of this for some time now. But 
now it looks like I may make another attempt. Who else is interested in this, 
as a user or to help contributing? If we get some traction we can have an IRC 
meeting sometime next week.

Thanks,
Erik


From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: den 4 maj 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see 
any work on VLAN-aware VMs for Liberty.  There is a blueprint[1] and specs[2] 
which was deferred from Kilo - is this something anyone is looking at as a 
Liberty candidate?  I looked but didn't find any recent work - is there 
somewhere else work on this is happening?  No-one has listed it on the liberty 
summit topics[3] etherpad, which could mean it's uncontroversial, but given 
history on this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics


Re: [openstack-dev] [nova][api] Allow passing body for more API methods

2015-05-11 Thread Everett Toews
On May 11, 2015, at 1:05 PM, Dean Troyer dtro...@gmail.com wrote:

On Mon, May 11, 2015 at 11:44 AM, Rosa, Andrea (HP Cloud Services) 
andrea.r...@hp.com wrote:
 Agreed. Violating the HTTP spec is something that should be avoided.

Actually it is not violating the HTTP spec, from RFC:
  A payload within a DELETE request message has no defined semantics;
   sending a payload body on a DELETE request might cause some existing
   implementations to reject the request.

The RFC prohibits the use of a body just for TRACE:
 A client MUST NOT send a message body in a TRACE request.


When playing in undefined areas such as this it is best to keep in mind Jon 
Postel's RFC 1122 principle: Be liberal in what you accept, and conservative 
in what you send.

I'll put it this way:  An RFC not prohibiting something does not make it a good 
idea.  This is not how we build a robust API that developers and users can 
easily adopt.

I agree that this is not a good idea. I’d also invoke the principle of least 
astonishment here. Because not sending a body with DELETE is a de facto standard 
(so much so that some proxy in the middle may alter or reject such a request), 
it would be surprising to developers to allow such requests.

REST APIs are difficult enough. Let’s avoid things with no defined semantics.

I’d comment on the review but Gerrit is having a fit so I filed a bug.

https://code.google.com/p/gerrit/issues/detail?id=3361

Everett



Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Mike Bayer



On 5/11/15 2:02 PM, Attila Fazekas wrote:


Not just with local database connections;
the 10G network itself is also fast. It is possible you spend more time on
the kernel-side TCP/IP stack (and the context switches) than in the actual
work on the DB side, not in physical I/O wait. (Check netperf TCP_RR.)

The scary part of a blocking I/O call is when you have two
Python threads (or green threads) and one of them is holding a DB lock while
the other is waiting for the same lock in a native blocking I/O syscall.
that's a database deadlock, and whether you use eventlet, threads, 
asyncio, or even just two transactions in a single-threaded script, that 
can happen regardless.  if your two eventlet non-blocking greenlets 
are waiting forever for a deadlock, you're just as deadlocked as if you 
have OS threads.




If you do a read(2) in native code, Python itself might not be able to 
preempt it.
Your transaction might be finished with a `DB Lock wait timeout`,
after 30 sec of doing nothing, instead of scheduling the other python 
thread,
which would be able to release the lock.



Here's the "you're losing me" part, because Python threads are OS 
threads, so Python isn't directly involved in trying to preempt anything, 
unless you're referring to the effect of the GIL locking up the 
program.   However, it's pretty easy to make two threads in Python hit a 
database and deadlock against each other, and the rest of the 
program's threads continue to run just fine; in a DB deadlock situation 
you are blocked on IO, and IO releases the GIL.


If you can illustrate a test script that demonstrates the actual failing 
of OS threads that does not occur greenlets here, that would make it 
immediately apparent what it is you're getting at here.







[1] http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova-docker] [magnum] [nova] Returning nova-docker to Nova Tree

2015-05-11 Thread Andreas Jaeger

On 05/11/2015 09:58 PM, Russell Bryant wrote:
 [...]

If the Magnum team is interested in helping to maintain it, why not just
keep it as a separate repo?  What's the real value in bringing it into
the Nova tree?

It could serve as a good example of how an optional nova component can
continue to be maintained in a separate repo.


Indeed.

So, what do you (=original poster) want to achieve? Have it part of the 
nova project - or part of the nova repository?


You could have it as a separate repository but part of nova - and then 
move it from stackforge to the openstack namespace.


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Fox, Kevin M
Sorry, I reread what I wrote and it came off... harsher than I had intended. I 
don't mean I will never support it, simply that I can't support it as it is.

I have security researchers that use this particular cloud, and don't want to 
give Murano a black eye if they discover something...

I would help if I could, but unfortunately, I already have my fingers in too 
many pies. If I do manage to get a bit of time, maybe I can help. We'll see. 
I'm not convinced it will be a simple thing to fix though. Rabbit wasn't built 
with multitenancy in mind, so fixing it right might take a fair amount of work.

HTTP polling is not a good long-term solution. It uses more power/resources than 
necessary.

Zaqar, being designed for multi-tenant messaging from the get-go, seems like 
a good way to go. Unfortunately, they still only support HTTP polling I think, 
but have it on their roadmap to fix. Most of the hard work is switching over to 
Zaqar though. So once they add a non-polling option, it should be relatively 
cheap to add support.
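A generic "pull commands from a queue and run them" agent, as discussed in this thread, could be sketched like this (the Zaqar client calls are stubbed out as plain callables here; a real agent would use python-zaqarclient, and the message schema is invented for illustration):

```python
import subprocess

def run_agent(fetch_messages, post_result, max_iterations=None):
    """Generic poll-and-execute loop.

    ``fetch_messages`` stands in for a Zaqar client call that claims
    messages from a command queue; ``post_result`` stands in for
    posting the outcome back to a result queue.  Each message is
    assumed to be a dict with a ``command`` key (a list of argv
    strings) - that schema is invented for this sketch.
    """
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        for message in fetch_messages():
            # Execute the command locally and capture its output.
            proc = subprocess.run(
                message["command"], capture_output=True, text=True)
            post_result({"returncode": proc.returncode,
                         "stdout": proc.stdout})
        iterations += 1
```

The VM only ever makes outbound calls to the queue, which is the property Kevin describes: no inbound reachability is needed.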

Thanks,
Kevin


From: Stan Lagun [sla...@mirantis.com]
Sent: Sunday, May 10, 2015 3:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

Kevin,

I do agree that the lack of RabbitMQ multi-tenancy is a problem. However, as this is 
a developers mailing list, I would suggest contributing to making the Murano guest 
agent secure, for the benefit of all users and all existing applications, instead 
of spending a comparable amount of time developing intricate solutions that will 
make everything way more complex. If Murano application developers needed to 
write an additional Mistral workflow for each deployment step, that would make 
application development extremely hard and Murano mostly useless.

There are several approaches that can be taken to solve the agent isolation problem, 
and all of them are relatively easy to implement.
This task is one of our top priorities for Liberty and will be solved very 
soon anyway.

Another approach that IMO is also better than SSH is to use HOT software config, 
which uses HTTP polling and doesn't suffer from a lack of tenant isolation.

I do want to see better Mistral integration in Murano as well as many other 
tools like puppet etc. And there are some good use cases for Mistral. But when 
it comes to the most basic things that Murano was designed to do from the 
foundation I want to make sure that Murano can do them the best way possible 
without requiring users to learn additional DSLs/tools or go extra step and 
involve additional services where not necessary. If something important is 
missing in Murano that makes usage of Mistral for deployment more attractive 
I'd rather focus on improving Murano and bringing those features to Murano. We 
can even use Mistral under the hood as long as we don't make users write both 
MuranoPL and Mistral DSL code for trivial things like a service restart.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Sun, May 10, 2015 at 8:44 PM, Fox, Kevin M kevin@pnnl.gov wrote:
I'm planning on deploying Murano but won't be supporting the Murano guest agent. 
The lack of multi-tenant security is a big problem, I think.

Thanks,
Kevin


From: Stan Lagun
Sent: Saturday, May 09, 2015 7:21:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

Filip,

If I got you right, the plan is to have a Murano application execute a Mistral 
workflow that SSHes to a VM and executes a particular command? And the alternative is 
Murano - Mistral - Zaqar - Zaqar agent?
Why can't you just send this command directly from Murano (to the Murano agent on 
the VM)? This is the most common use case, found in nearly all Murano 
applications, and it is battle-proven. If you need SSH you can contribute an SSH 
plugin to Murano (Mistral will require a similar plugin anyway). The more moving 
parts you involve, the more chances you have for everything to fail.


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Fri, May 8, 2015 at 11:22 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
Generally yes, the std.ssh action works as long as the network infrastructure allows 
access to a host using the specified IP; it doesn’t provide anything on top of that.


 On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:

 This would also probably be a good use case for Zaqar I think. Have a generic 
 "run shell commands from a Zaqar queue" agent that pulls commands from a Zaqar 
 queue and executes them.
 The VMs don't have to be directly reachable from the network then. You just 
 have to push messages into Zaqar.

Yes, in Mistral it would be another action that puts a command into Zaqar 
queue. This type 

Re: [openstack-dev] [Neutron] service chaining feature development meeting

2015-05-11 Thread Anita Kuno
On 05/11/2015 05:30 PM, Cathy Zhang wrote:
 Hello everyone,
 
 
 Our next service chain feature development meeting will be 10am-11am May 12th, 
 Pacific time. Anyone who has an interest in this feature development is welcome 
 to join the meeting and contribute to the service chain feature in 
 OpenStack.
 
 
 
 
 OpenStack BP discussion for service chaining
 Please join the meeting from your computer, tablet or smartphone.
 https://global.gotomeeting.com/join/199553557, meeting password: 199-553-557
 You can also dial in using your phone.
 United States +1 (224) 501-3212
 Access Code: 199-553-557
 
 -
 Following are the links to the Neutron-related service chain specs and the 
 bug IDs. Feel free to sign up and add your comments/input to the BPs.
 https://review.openstack.org/#/c/177946
 https://bugs.launchpad.net/neutron/+bug/1450617
 https://bugs.launchpad.net/neutron/+bug/1450625
 
 
 
 Thanks,
 
 Cathy
 
 
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
OpenStack conducts public, archivable meetings on irc.

https://wiki.openstack.org/wiki/Meetings

If you have difficulty holding meetings as OpenStack needs them to be
held, do reach out to our community manager to allow him to offer his
assistance to you and your group.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] xstatic-angular-fileupload with no license

2015-05-11 Thread Michael Krotscheck
If only JavaScript had a package manager that would take care of all
this... ;)

Michael

On Mon, May 11, 2015 at 12:45 PM Thomas Goirand z...@debian.org wrote:

 Hi,

 We had the issue with lrdragndrop, and angular-bootstrap. Now, we're
 having the same issue again with this angular-fileupload. So I'll say it
 again, and hope that it won't happen again:

 When releasing anything using the MIT license, we *MUST* also ship the
 license itself, or otherwise, this is a license violation:

 "The above copyright notice and this permission notice shall be included
 in all copies or substantial portions of the Software."

 Jordan, you're marked as the maintainer of angular-fileupload. Please
 fix the xstatic repository at
 https://github.com/danialfarid/ng-file-upload. Also, please get this
 release tagged.

 Cheers,

 Thomas Goirand (zigo)

 P.S: The frustration builds up here, as that's a recurring issue. It'd
 be nice to not just add anything to the global-requirements.txt without
 further checking of licensing, and that the stackforge repository is
 properly tagged as well...

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] etherpads for design summit

2015-05-11 Thread Sergey Lukjanov
Hey folks,

I've ensured that all threads are created and added them to
https://wiki.openstack.org/wiki/Summit/Liberty/Etherpads#Sahara and
http://libertydesignsummit.sched.org/type/design+summit/Sahara

Session drivers, please, fill the etherpads with info.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Attila Fazekas




- Original Message -
 From: Mike Bayer mba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Monday, May 11, 2015 9:07:13 PM
 Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient
 
 
 
 On 5/11/15 2:02 PM, Attila Fazekas wrote:
 
  Not just with local database connections,
  the 10G network itself is also fast. It is possible you spend more time even
  on
  the kernel side tcp/ip stack (and the context switch..) (Not in physical
  I/O wait)
  than in the actual work on the DB side. (Check netperf TCP_RR)
 
  The scary part of a blocking I/O call is when you have two
  python thread (or green thread) and one of them is holding a DB lock the
  other
  is waiting for the same lock in a native blocking I/O syscall.
 that's a database deadlock and whether you use eventlet, threads,
 asyncio or even just two transactions in a single-threaded script, that
 can happen regardless.  if your two eventlet non blocking greenlets
 are waiting forever for a deadlock,  you're just as deadlocked as if you
 have OS threads.
 
 
  If you do a read(2) in native code, the python itself might not be able to
  preempt it
  Your transaction might be finished with `DB Lock wait timeout`,
  with 30 sec of doing nothing, instead of scheduling to the another python
  thread,
  which would be able to release the lock.
 
 
 Here's the "you're losing me" part, because Python threads are OS
 threads, so Python isn't directly involved trying to preempt anything,
 unless you're referring to the effect of the GIL locking up the
 program.   However, it's pretty easy to make two threads in Python hit a
 database and do a deadlock against each other, and the rest of the
 program's threads continue to run just fine; in a DB deadlock situation
 you are blocked on IO and IO releases the GIL.
 
 If you can illustrate a test script that demonstrates the actual failing
 of OS threads that does not occur greenlets here, that would make it
 immediately apparent what it is you're getting at here.


http://www.fpaste.org/220824/raw/

I just put together a hello world C example and a hello world threading example,
and replaced the print with sleep(3).

When I use the sleep(3) from Python, the 5-thread program runs in ~3 seconds;
when I use the sleep(3) from native code, it runs in ~15 sec.

So yes, it is very likely a GIL lock-wait related issue,
when the native code is not assisting.
 
Do you need a DB example, using the mysql C driver
and waiting in an actual I/O primitive?

The green threads will not help here.

If I imported the Python time.sleep from the C code, it might help.

Using a pure Python driver helps avoid this kind of issue,
but in this case you have the `cPython is slow` problem.
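For reference, the effect described above can be reproduced with only the standard library: ctypes.PyDLL deliberately keeps the GIL held for the duration of a foreign call, while time.sleep releases it. This is a POSIX-only sketch; the thread count and timings are illustrative, not from the fpaste example.

```python
import ctypes
import threading
import time

# dlopen(NULL) exposes libc symbols on POSIX; PyDLL (unlike CDLL)
# does NOT release the GIL around the foreign call.
libc = ctypes.PyDLL(None)
libc.sleep.argtypes = [ctypes.c_uint]

def timed(target, n=3):
    """Run ``target`` in ``n`` threads and return wall-clock seconds."""
    threads = [threading.Thread(target=target) for _ in range(n)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

# Native sleep(3) via PyDLL holds the GIL: the threads serialize.
gil_held = timed(lambda: libc.sleep(1))
# time.sleep releases the GIL: the threads overlap.
gil_free = timed(lambda: time.sleep(1))
```

With 3 threads, `gil_held` comes out near 3 seconds and `gil_free` near 1 second, which matches the ~3s vs ~15s numbers reported for 5 threads above.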

 
 
  [1]
  http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
 
 
 
 
 
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Allow passing body for more API methods

2015-05-11 Thread Jay Pipes

On 05/11/2015 11:53 AM, Sean Dague wrote:

Why is DELETE /volumes/ID?force=true not an option?


Yes, this is what I would recommend as well.
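A minimal sketch of the query-parameter approach, using only the standard library (the accepted truthy spellings are an assumption, not from any OpenStack API guideline):

```python
from urllib.parse import parse_qs

# Spellings treated as "true" - an illustrative choice, not a rule.
_TRUTHY = {"1", "true", "t", "yes", "y"}

def force_requested(query_string):
    """Return True if the DELETE request carried ?force=true (or similar).

    ``query_string`` is the raw QUERY_STRING from the request, e.g.
    "force=true" for DELETE /volumes/ID?force=true.
    """
    params = parse_qs(query_string or "")
    # Last occurrence wins if the parameter is repeated.
    values = params.get("force", ["false"])
    return values[-1].lower() in _TRUTHY
```

The flag travels in the URL, so no request body is needed on DELETE and intermediaries see nothing unusual.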

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] xstatic-angular-fileupload with no license

2015-05-11 Thread Thomas Goirand

Hi,

We had the issue with lrdragndrop, and angular-bootstrap. Now, we're 
having the same issue again with this angular-fileupload. So I'll say it 
again, and hope that it won't happen again:


When releasing anything using the MIT license, we *MUST* also ship the 
license itself, or otherwise, this is a license violation:


"The above copyright notice and this permission notice shall be included 
in all copies or substantial portions of the Software."


Jordan, you're marked as the maintainer of angular-fileupload. Please 
fix the xstatic repository at 
https://github.com/danialfarid/ng-file-upload. Also, please get this 
release tagged.


Cheers,

Thomas Goirand (zigo)

P.S: The frustration builds up here, as that's a recurring issue. It'd 
be nice to not just add anything to the global-requirements.txt without 
further checking of licensing, and that the stackforge repository is 
properly tagged as well...
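As a sketch, a check like the following could be wired into a release job to catch a missing permission notice early. The candidate file names and the matched clause are assumptions for illustration, not an official policy:

```python
import os

# The clause from the MIT license whose absence triggers the
# violation described above.
NOTICE = "The above copyright notice and this permission notice"
# File names commonly used to ship a license - an assumption.
CANDIDATES = ("LICENSE", "LICENSE.txt", "LICENSE.md", "COPYING")

def ships_mit_notice(path):
    """Return True if a checkout at ``path`` ships the MIT notice."""
    for name in CANDIDATES:
        candidate = os.path.join(path, name)
        if os.path.isfile(candidate):
            with open(candidate, encoding="utf-8",
                      errors="replace") as f:
                if NOTICE in f.read():
                    return True
    return False
```

Run against an xstatic checkout before tagging, a False result means the release would violate the license terms quoted above.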


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker] [magnum] [nova] Returning nova-docker to Nova Tree

2015-05-11 Thread Adrian Otto
Dan and John,

On May 11, 2015, at 7:06 AM, Dan Smith d...@danplanet.com wrote:

 +1 Agreed nested containers are a thing. Its a great reason to keep
 our LXC driver.
 
 I don't think that's a reason we should keep our LXC driver, because you
 can still run containers in containers with other things. If anything,
 using a nova vm-like container to run application-like containers inside
 them is going to beg the need to tweak more detailed things on the
 vm-like container to avoid restricting the application one, I think.
 
 IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
 nova is because it's nearly free. It is the libvirt driver with a few
 conditionals to handle different things when necessary for LXC. The
 docker driver is a whole other nova driver to maintain, with even less
 applicability to being a system container (IMHO).
 
 I am keen we set the right expectations here. If you want to treat
 docker containers like VMs, thats OK.
 
 I guess a remaining concern is the driver dropping into disrepair
 if most folks end up using Magnum when they want to use docker.
 
 I think this is likely the case and I'd like to avoid getting into this
 situation again. IMHO, this is not our target audience, it's very much
 not free to just put it into the tree because “meh, some people might
 like it instead of the libvirt-lxc driver”.

This is a valid point. I do expect that the combined use of Nova + (nova-docker 
| libvirt-lxc) + Magnum will be popular in situations where workload 
consolidation is a key goal, and security isolation is a non-goal. For this 
reason, I’m very interested in making sure that we have some choice for decent 
Nova virt drivers that produce Nova instances that are containers. This 
matters, because Magnum currently expects to get all its instances from Nova.

I do recognize that nova-docker has stabilized to the point where it would be 
practical to maintain it within the Nova tree. As Eric Windisch mentioned, the 
reasons for having this as a separate code repo have vanished. It’s feature 
complete, has and passes the necessary tests, and has a low commit velocity 
now. 

Perhaps our Nova team would feel more comfortable about ongoing maintenance if 
the Magnum team were willing to bring nova-docker into its own scope of support 
so we don’t suffer from orphaned code. If we can agree to adopt this from a 
maintenance perspective, then we should be able to agree to have it in tree 
again, right?

I have added this to the Containers Team IRC meeting agenda for tomorrow. Let’s 
see what the team thinks about this.

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-05-12_1600_UTC

I invite Nova and nova-docker team members to join us to discuss this topic, 
and give us your input.

Thanks,

Adrian

 
 --Dan
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef][infra] #openstack-meeting-5 proposal to open up

2015-05-11 Thread JJ Asghar
Hey everyone!

The openstack-chef project is attempting to move into the big tent[1]. As part 
of this we need to move our meeting from #openstack-chef to one of the official 
meeting rooms.

We have our official meeting time at 1500 UTC / 1600 GMT or BST on Monday, and it 
seems all the rooms are taken. Is it possible to open up #openstack-meeting-5?

I asked in #openstack-infra and fungi suggested I ask the mailing list.

Thoughts, questions, concerns?

-JJ

[1]: https://review.openstack.org/#/c/175000
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NOVA] [CINDER] BUG : Booting from volume does not actually boot from volume

2015-05-11 Thread Fichter, Dane G.
Hey all,

When looking in more depth at the boot-from-volume procedure, I noticed that 
the instance is actually booted from image data streamed from Glance. Does 
anyone have any insight into how to fix this? It is described in more detail in 
the following bug report:

https://bugs.launchpad.net/nova/+bug/1449084

Thanks,

Dane Fichter
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker] [magnum] [nova] Returning nova-docker to Nova Tree

2015-05-11 Thread Russell Bryant
On 05/11/2015 03:51 PM, Adrian Otto wrote:
 Dan and John,
 
 On May 11, 2015, at 7:06 AM, Dan Smith d...@danplanet.com wrote:
 
 +1 Agreed nested containers are a thing. Its a great reason to keep
 our LXC driver.

 I don't think that's a reason we should keep our LXC driver, because you
 can still run containers in containers with other things. If anything,
 using a nova vm-like container to run application-like containers inside
 them is going to beg the need to tweak more detailed things on the
 vm-like container to avoid restricting the application one, I think.

 IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
 nova is because it's nearly free. It is the libvirt driver with a few
 conditionals to handle different things when necessary for LXC. The
 docker driver is a whole other nova driver to maintain, with even less
 applicability to being a system container (IMHO).

 I am keen we set the right expectations here. If you want to treat
 docker containers like VMs, thats OK.

 I guess a remaining concern is the driver dropping into disrepair
 if most folks end up using Magnum when they want to use docker.

 I think this is likely the case and I'd like to avoid getting into this
 situation again. IMHO, this is not our target audience, it's very much
 not free to just put it into the tree because “meh, some people might
 like it instead of the libvirt-lxc driver”.
 
 This is a valid point. I do expect that the combined use of Nova +
 (nova-docker | libvirt-lxc) + Magnum will be popular in situations
 where workload consolidation is a key goal, and security isolation is
 a non-goal. For this reason, I’m very interested in making sure that
 we have some choice for decent Nova virt drivers that produce Nova
 instances that are containers. This matters, because Magnum currently
 expects to get all its instances from Nova.
 
 I do recognize that nova-docker has stabilized to the point where it
 would be practical to maintain it within the Nova tree. As Eric
 Windisch mentioned, the reasons for having this as a separate code
 repo have vanished. It’s feature complete, has and passes the
 necessary tests, and has a low commit velocity now.
 
 Perhaps our Nova team would feel more comfortable about ongoing
 maintenance if the Magnum team were willing to bring nova-docker into
 its own scope of support so we don’t suffer from orphaned code. If we
 can agree to adopt this from a maintenance perspective, then we
 should be able to agree to have it in tree again, right?
 
 I have added this to the Containers Team IRC meeting agenda for
 tomorrow. Let’s see what the team thinks about this.
 
 https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-05-12_1600_UTC

  I invite Nova and nova-docker team members to join us to discuss
 this topic, and give us your input.

If the Magnum team is interested in helping to maintain it, why not just
keep it as a separate repo?  What's the real value in bringing it into
the Nova tree?

It could serve as a good example of how an optional nova component can
continue to be maintained in a separate repo.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Proposing Michael McCune as an API Working Group core

2015-05-11 Thread Everett Toews
I would like to propose Michael McCune (elmiko) as an API Working Group core.

Among Michael’s many fine qualities:

  * Active from the start
  * Highly available
  * Very knowledgable about APIs
  * Committed the guideline template 
  * Working on moving the API Guidelines wiki page
  * Lots of solid review work

Cheers,
Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core

2015-05-11 Thread Jay Pipes

+1 from me.

On 05/11/2015 04:18 PM, Everett Toews wrote:

I would like to propose Michael McCune (elmiko) as an API Working Group core.

Among Michael’s many fine qualities:

   * Active from the start
   * Highly available
   * Very knowledgable about APIs
   * Committed the guideline template
   * Working on moving the API Guidelines wiki page
   * Lots of solid review work

Cheers,
Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core

2015-05-11 Thread Sergey Lukjanov
Hi,

I'm not an API WG core or an active participant (unfortunately), but from
Sahara's in-project API discussions Michael is very active, and he's now
driving our next API design as well as being the Sahara liaison in the API WG.

Thanks.

On Mon, May 11, 2015 at 11:18 PM, Everett Toews everett.to...@rackspace.com
 wrote:

 I would like to propose Michael McCune (elmiko) as an API Working Group
 core.

 Among Michael’s many fine qualities:

   * Active from the start
   * Highly available
   * Very knowledgable about APIs
   * Committed the guideline template
   * Working on moving the API Guidelines wiki page
   * Lots of solid review work

 Cheers,
 Everett


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session (was: [all] who is the ptl of trove?) [imm]

2015-05-11 Thread Flavio Percoco

On 11/05/15 19:35 +, Amrith Kumar wrote:

One thing I'd definitely like to learn more about is how far along Zaqar is at 
this stage. I don't have a great understanding of it and would certainly have a 
number of newbie kinds of questions as well. But I'll do some reading in the 
next week and be sure to attend the session.


Sounds perfect, bring them all and let's talk about it.



-amrith

| -Original Message-
| From: Flavio Percoco [mailto:fla...@redhat.com]
| Sent: Monday, May 11, 2015 1:28 PM
| To: OpenStack Development Mailing List (not for usage questions)
| Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration.
| Summit working session (was: [all] who is the ptl of trove?)
|
| On 11/05/15 16:52 +, Doug Shelley wrote:
| Flavio,
| 
| This definitely sounds like a good idea. I know that many of us in the
| Trove community will be in Vancouver - what would make sense in terms
| of organizing a discussion?
|
| I think we can use one of Zaqar's work sessions for this. We don't have
| any currently allocated, so you guys are free to pick one that works OK
| for you. [0]
|
| I have some info from previous summits that I could put in the
| session description but I'd like to have an outline of points/doubts you
| would like to discuss from which we can reach agreements or at least
| generate action items.
|
| Do you have doubts/questions you'd like answered in this session?
| Flavio
|
| [0]
| http://libertydesignsummit.sched.org/overview/type/design+summit/Zaqar#.VV
| Dl_PYU9hF
|
| 
| Regards,
| Doug
| 
| 
| 
| On 2015-05-11, 5:49 AM, Flavio Percoco fla...@redhat.com wrote:
| 
| On 08/05/15 00:45 -0700, Nikhil Manchanda wrote:
| 3)  The trove-guest-agent is in vm. it is connected by taskmanager
| by rabbitmq. We designed it. But is there some prectise to do
| this?
|  how to make the vm be connected in vm-network and management
|  network?
| 
| Most deployments of Trove that I am familiar with set up a separate
| RabbitMQ server in cloud that is used by Trove. It is not recommended
| to use the same infrastructure RabbitMQ server for Trove for security
| reasons. Also most deployments of Trove set up a private (neutron)
| network that the RabbitMQ server and guests are connected to, and all
| RPC messages are sent over this network.
| 
| We've discussed trove+zaqar in the past and I believe some folks from
| the Trove team have been in contact with Fei Long lately about this.
| Since one of the projects goal's for this cycle is to provide support
| to other projects and contribute to the adoption, I'm wondering if any
| of the members of the trove team would be willing to participate in a
| Zaqar working session completely dedicated to this integration?
| 
| It'd be a great opportunity to figure out what's really needed, edge
| cases and get some work done on this specific case.
| 
| Thanks,
| Flavio
| 
| --
| @flaper87
| Flavio Percoco
| 
| 
|
| --
| @flaper87
| Flavio Percoco


--
@flaper87
Flavio Percoco


pgpdxKsqQo7I9.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core

2015-05-11 Thread Chris Dent

On Mon, 11 May 2015, Everett Toews wrote:


I would like to propose Michael McCune (elmiko) as an API Working Group core.


+1

a fine idea
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Server Error when attempting to sign ICLA at review.openstack.org

2015-05-11 Thread Earle Philhower
Thanks for your quick help, Jeremy.

For posterity, my problem was that I'd skimmed the instructions and
had only created a review.openstack.org account.

Users need both this and an OpenStack Foundation profile
(https://www.openstack.org/join/), with the same email, and then all
goes smoothly.

Much appreciated,
-EFP3

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Monday, May 11, 2015 11:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Server Error when attempting to sign ICLA at 
review.openstack.org

On 2015-05-11 18:43:48 + (+), Earle Philhower wrote:
[...]
   Code Review - Error
   Server Error
   Cannot store contact information
[...]

Make sure you follow the instructions completely and in sequence:

http://docs.openstack.org/infra/manual/developers.html#account-setup

Also https://ask.openstack.org/en/question/56720 has a couple of useful 
troubleshooting steps.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Jenkins CI failure for patch: https://review.openstack.org/180019

2015-05-11 Thread Madhusudhan Kandadai
Hello,

I am seeing odd behavior with Jenkins CI for the patch:
https://review.openstack.org/180019

A quick background:

1. I could see all the tests passing on my virtual box and have attached
the logs for your ready reference.
2. Whereas in Jenkins, I see different behavior that is not
reproducible on my virtual box. I have tried the following to make it
succeed:
  (a) a 'recheck' comment - Jenkins still reported a failure.
  (b) restacking my devstack instance on my vm and running 'tox -e tempest'; it
succeeds on the virtual box, but not in Jenkins.

Not sure of the exact problem though, but would like to have community help
on the same.

Thanks!
Madhu
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 git commit --amend
[review/madhusudhan_kandadai/pools_admin_ddt_tests acba386] Tempest tests using 
testscenarios for create/update pool
 Author: Ubuntu madhusudhan.openst...@gmail.com
 1 file changed, 169 insertions(+)
 create mode 100644 
neutron_lbaas/tests/tempest/v2/scenario/test_pools_admin_state.py
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 git review
remote: Resolving deltas: 100% (5/5)
remote: Processing changes: updated: 1, refs: 1, done
remote: 
remote: Updated Changes:
remote:   https://review.openstack.org/180019 Tempest tests using testscenarios 
for create/update pool
remote: 
To 
ssh://madhusudhan-kanda...@review.openstack.org:29418/openstack/neutron-lbaas.git
 * [new branch]  HEAD -> refs/publish/master/pools_admin_ddt_tests
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 
devstack@ubuntu:/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/scenario$
 tox -e tempest
tempest develop-inst-nodeps: /opt/stack/neutron-lbaas
tempest runtests: PYTHONHASHSEED='3706390691'
tempest runtests: commands[0] | sh tools/pretty_tox.sh 
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} 
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron_lbaas/tests/unit}  

{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor
 [11.086096s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_extra_attribute
 [0.013065s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_invalid_attribute
 [0.033716s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_create_health_monitor_missing_attribute
 [0.020612s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_delete_health_monitor
 [10.916318s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_get_health_monitor
 [11.037575s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_empty
 [0.062835s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_one
 [10.991776s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_list_health_monitors_two
 [44.915230s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_udpate_health_monitor_invalid_attribute
 [10.893886s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_update_health_monitor
 [21.516263s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_health_monitors.TestHealthMonitors.test_update_health_monitor_extra_attribute
 [11.019241s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener
 [11.068447s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_admin_state_up
 [0.123651s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_connection_limit
 [0.163494s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_description
 [11.030286s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_load_balancer_id
 [0.127011s] ... ok
{0} 
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_empty_name
 [11.307731s] ... ok
{0} 

Re: [openstack-dev] [nova-docker] [magnum] [nova] Returning nova-docker to Nova Tree

2015-05-11 Thread Adrian Otto
Andreas,

On May 11, 2015, at 2:04 PM, Andreas Jaeger a...@suse.com wrote:

 On 05/11/2015 09:58 PM, Russell Bryant wrote:
  [...]
 If the Magnum team is interested in helping to maintain it, why not just
 keep it as a separate repo?  What's the real value in bringing it into
 the Nova tree?
 
 It could serve as a good example of how an optional nova component can
 continue be maintained in a separate repo.
 
 Indeed.
 
 So, what do you (=original poster) want to achieve? Have it part of the nova 
 project - or part of the nova repository?
 
 You could have it as separate repository but part of nova - and then move it 
 from stackforge to openstack namespace.

Good point. This is probably a good balance. Ideally the driver would be 
available whenever nova is installed, so that regardless of how we develop the 
driver, it is available for OpenStack cloud operators to use without 
downloading something separate. I'd argue the same for all 
virt drivers, not just this one. There is probably no value in coupling 
nova-docker with Magnum from a software distribution perspective. We probably 
could share Gerrit groups. I'll let you know what input the Magnum team offers 
tomorrow.

Adrian

 
 Andreas
 -- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread Karthik Natarajan
Hi Eric,

Brocade is also interested in the VLAN-aware VMs BP. Let's discuss it during 
the design summit.

Thanks,
Karthik

From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: Monday, May 11, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

Hi Eric,

Cisco is also interested in the kind of VLAN trunking feature that your 
VLAN-aware VMs BP describes. If this could be achieved in Liberty it'd be great.
Perhaps your BP could be brought up during one of the Neutron sessions in 
Vancouver, e.g., the one on OVN since there seems to be some similarities?

Thanks
Bob


From: Erik Moe erik@ericsson.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday 8 May 2015 06:29
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?


Hi,

I have not been able to work with upstreaming of this for some time now. But 
now it looks like I may make another attempt. Who else is interested in this, 
as a user or to help contributing? If we get some traction we can have an IRC 
meeting sometime next week.

Thanks,
Erik


From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: 4 May 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see 
any work on VLAN-aware VMs for Liberty.  There is a blueprint[1] and specs[2] 
which was deferred from Kilo - is this something anyone is looking at as a 
Liberty candidate?  I looked but didn't find any recent work - is there 
somewhere else work on this is happening?  No-one has listed it on the liberty 
summit topics[3] etherpad, which could mean it's uncontroversial, but given 
history on this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Jenkins CI failure for patch: https://review.openstack.org/180019

2015-05-11 Thread Rochelle Grober
Madhu,

There have been problems with Gerrit all day.  And, trying the link to you 
review, the service is now down.  Consider letting the infra folks work the 
problem some more, look for a posting from them, or watch the topic messages on 
IRC and hold off on resubmitting until  you see an all clear from Infra 
somewhere.

--Rocky

From: Madhusudhan Kandadai [mailto:madhusudhan.openst...@gmail.com]
Sent: Monday, May 11, 2015 17:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Jenkins CI failure for patch: 
https://review.openstack.org/180019

Hello,
I am seeing odd behavior with Jenkins CI for the patch: 
https://review.openstack.org/180019
A quick background:
1. I could see all the tests passing on my virtual box and have attached the 
logs for your ready reference.
2. Whereas in Jenkins, I could see the different behavior which is not 
reproducible on my virtual box. I have tried the following to make it succeed.
  (a) 'recheck' comment - Jenkins reported unhappy.
  (b) restacked my devstack instance on my vm and run 'tox -e tempest', it 
succeeds on virtual box, but not for Jenkins.

Not sure of the exact problem though, but would like to have community help on 
the same.
Thanks!
Madhu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Eugene Nikanorov
 All production Openstack applications today are fully serialized to only
be able to emit a single query to the database at a time;
True. That's why any deployment configures tons (tens) of workers of any
significant service.

  When I talk about moving to threads, this is not a won't help or hurt
kind of issue, at the moment it's a change that will immediately allow
massive improvement to the performance of all Openstack applications
instantly.
Not sure if it will give much benefit over separate processes.
I guess we don't configure many workers for gate testing (at least, neutron
still doesn't do it), so there could be an improvement, but I guess to
enable multithreading we would need to fix the same issues that prevented
us from configuring multiple workers in the gate, plus possibly more.

 We need to change the DB library or dump eventlet.
I'm +1 for the 1st option.

The other option, multithreading, will most certainly bring concurrency
issues beyond the database.

Thanks,
Eugene.


On Mon, May 11, 2015 at 4:46 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Mike,

 Thank you for saying all that you said above.

 Best regards,
 Boris Pavlovic

 On Tue, May 12, 2015 at 2:35 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mike Bayer's message of 2015-05-11 15:44:30 -0700:
 
  On 5/11/15 5:25 PM, Robert Collins wrote:
  
   Details: Skip over this bit if you know it all already.
  
   The GIL plays a big factor here: if you want to scale the amount of
   CPU available to a Python service, you have two routes:
   A) move work to a different process through some RPC - be that DB's
   using SQL, other services using oslo.messaging or HTTP - whatever.
   B) use C extensions to perform work in threads - e.g. openssl context
   processing.
  
   To increase concurrency you can use threads, eventlet, asyncio,
   twisted etc - because within a single process *all* Python bytecode
   execution happens inside the GIL lock, so you get at most one CPU for
   a CPU bound workload. For an IO bound workload, you can fit more work
   in by context switching within that one CPU capacity. And - the GIL is
   a poor scheduler, so at the limit - an IO bound workload where the IO
   backend has more capacity than we have CPU to consume it within our
   process, you will run into priority inversion and other problems.
   [This varies by Python release too].
  
   request_duration = time_in_cpu + time_blocked
   request_cpu_utilisation = time_in_cpu/request_duration
   cpu_utilisation = concurrency * request_cpu_utilisation
  
   Assuming that we don't want any one process to spend a lot of time at
   100% - to avoid such at-the-limit issues, lets pick say 80%
   utilisation, or a safety factor of 0.2. If a single request consumes
   50% of its duration waiting on IO, and 50% of its duration executing
   bytecode, we can only run one such request concurrently without
   hitting 100% utilisations. (2*0.5 CPU == 1). For a request that spends
   75% of its duration waiting on IO and 25% on CPU, we can run 3 such
   requests concurrently without exceeding our target of 80% utilisation:
   (3*0.25=0.75).
  
   What we have today in our standard architecture for OpenStack is
   optimised for IO bound workloads: waiting on the
   network/subprocesses/disk/libvirt etc. Running high numbers of
   eventlet handlers in a single process only works when the majority of
   the work being done by a handler is IO.
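
Robert's utilisation formulas above are easy to sanity-check with a few lines of
Python. This is an illustrative sketch only (the function names are invented for
the example, not part of any OpenStack code):

```python
# Sanity check of the utilisation arithmetic quoted above:
#   request_cpu_utilisation = time_in_cpu / request_duration
#   cpu_utilisation         = concurrency * request_cpu_utilisation

def request_cpu_utilisation(time_in_cpu, time_blocked):
    """Fraction of a request's duration spent executing bytecode."""
    return time_in_cpu / (time_in_cpu + time_blocked)

def max_concurrency(cpu_fraction, target=0.8):
    """Largest whole number of concurrent requests that keeps the
    process at or below the target CPU utilisation."""
    return int(target / cpu_fraction)

# 50% CPU / 50% IO: only one such request fits under 80% utilisation.
print(max_concurrency(request_cpu_utilisation(0.5, 0.5)))   # 1

# 75% IO / 25% CPU: three fit, since 3 * 0.25 = 0.75 <= 0.8.
print(max_concurrency(request_cpu_utilisation(0.25, 0.75))) # 3
```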
 
  Everything stated here is great, however in our situation there is one
  unfortunate fact which renders it completely incorrect at the moment.
  I'm still puzzled why we are getting into deep think sessions about the
  vagaries of the GIL and async when there is essentially a full-on
  red-alert performance blocker rendering all of this discussion useless,
  so I must again remind us: what we have *today* in Openstack is *as
  completely un-optimized as you can possibly be*.
 
  The most GIL-heavy nightmare CPU bound task you can imagine running on
  25 threads on a ten year old Pentium will run better than the Openstack
  we have today, because we are running a C-based, non-eventlet patched DB
  library within a single OS thread that happens to use eventlet, but the
  use of eventlet is totally pointless because right now it blocks
  completely on all database IO.   All production Openstack applications
  today are fully serialized to only be able to emit a single query to the
  database at a time; for each message sent, the entire application blocks
  an order of magnitude more than it would under the GIL waiting for the
  database library to send a message to MySQL, waiting for MySQL to send a
  response including the full results, waiting for the database to unwrap
  the response into Python structures, and finally back to the Python
  space, where we can send another database message and block the entire
  application and all greenlets while this single message proceeds.
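
Mike's description of a blocking C driver inside a cooperative scheduler can be
reproduced in miniature with nothing but the standard library. The sketch below
uses asyncio rather than eventlet, but the failure mode is the same: a driver
call that never yields serializes every "concurrent" query, while one that
yields overlaps the waits (timings and names are illustrative only):

```python
import asyncio
import time

async def blocking_query(i):
    # Simulates a C database driver that does NOT yield to the event
    # loop: the whole process stalls while it "waits on MySQL".
    time.sleep(0.1)
    return i

async def cooperative_query(i):
    # Simulates a driver that yields back while waiting on the socket.
    await asyncio.sleep(0.1)
    return i

def timed_run(query, n=5):
    """Run n queries 'concurrently' and return the elapsed wall time."""
    async def main():
        start = time.perf_counter()
        await asyncio.gather(*(query(i) for i in range(n)))
        return time.perf_counter() - start
    return asyncio.run(main())

print(f"blocking driver:    {timed_run(blocking_query):.2f}s")    # ~0.5s, serialized
print(f"cooperative driver: {timed_run(cooperative_query):.2f}s") # ~0.1s, overlapped
```

The same contrast shows up with eventlet and a non-monkey-patched driver, which
is exactly the situation described above.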
 
  To share a link I've already shared 

Re: [openstack-dev] [Neutron] service chaining feature development meeting minutes

2015-05-11 Thread Cathy Zhang
Hi Isaku,

Sorry that I missed your email. I sent out the meeting info. Hope you received 
it. 
I will forward it to you in a separate email.

Thanks,
Cathy

-Original Message-
From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com] 
Sent: Monday, May 11, 2015 10:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: isaku.yamah...@gmail.com
Subject: Re: [openstack-dev] [Neutron] service chaining feature development 
meeting minutes

Hello Cathy. Thank you for arranging the meeting.

Will we have goto meeting/irc meeting this week(May 12) on this topic?
I haven't seen any announcement yet.

thanks in advance
Isaku Yamahata

On Tue, May 05, 2015 at 08:55:25PM +,
Cathy Zhang cathy.h.zh...@huawei.com wrote:

 Attendees (Sorry we did not catch all the names):
 Cathy Zhang (Huawei), Louis Fourie, Alex Barclay (HP), Vikram Choudhary, 
 Carlos Goncalves (NTT) m...@cgoncalves.pt, Adolfo Duarte (HP) adolfo.dua...@hp.com,
 German Eichberger (HP) german.eichber...@hp.com, Swami Vasudevan (HP) swaminathan.vasude...@hp.com,
 Uri Elzur (Intel) uri.el...@intel.com, Joe D'Andrea (ATT) jdand...@research.att.com,
 Isaku Yamahata (Intel) isaku.yamah...@gmail.com, Malini Bhandaru malini.k.bhand...@intel.com,
 Michael Johnson (HP) john...@hp.com, Lynn Li (HP) lynn...@hp.com,
 Mickey Spiegel (IBM) emspi...@us.ibm.com, Ryan Tidwell (HP) ryan.tidw...@hp.com,
 Ralf Trezeciak openst...@trezeciak.de, David Pinheiro, Hardik Italia
 
 Agenda:
 Cathy presented the service chain architecture and blueprints and Gerrit 
 review for:
 
 -   Neutron Extensions for Service chaining
 
 -  Common Neutron SFC Driver API
 
 
 
 There is a lot of interest on this feature.
 
 
 
 Questions
 
 Uri: Does this BP specify the implementation in the backend data path? Cathy: 
 This could be implemented by various data path chaining schemes, e.g. the IETF 
 service chain header or VLAN.
 
 Uri: What happens if the security group/iptables conflict with SGs for 
 port-chain? Cathy: it is expected that conflicts will be resolved by the 
 upper layer Intent Infra.
 
 Vikram: can other datapath transports be used. Cathy: Yes, what is defined in 
 the BP is the NBI for Neutron.
 
 Swami: add use case and example. Cathy: will add. There is a SFC use case BP, 
 will contact authors.
 
 Carlos: how does this relate to older traffic steering BP. Cathy: we can 
 discuss offline and work together to incorporate the traffic steering BP idea 
 into this BP.
 
 Uri: are these two BPs for NBI and SBI for Neutron? Cathy: yes
 
 Isaku: does Kyle know about BPs? Cathy: yes
 
 Uri: how do these BPs relate to rest of Openstack? Cathy: There will be a 
 presentation on the service chain framework at 2pm May 18 at the Vancouver 
 Summit. It will be presented by Cathy together with Kyle Mestery and Dave 
 Lenrow.
 
 Swami:  It helps to present a simple service chain use case at meeting next 
 week. Suggest to have IRC meeting too as it provides a record of what is 
 discussed. Cathy: I have already created the IRC meeting for service chain 
 project, will start the IRC meeting too.
 
 Malini: we should have a complete spec so redesign is avoided.
 
 Swami: We should use Neutron design session at Vancouver to discuss this BP 
 and flesh out details.
 Action Items
 
 
 1. We will have two meetings next week: one goto meeting with audio and 
 data sharing, the other IRC meeting with link to google doc for sharing 
 diagrams. Cathy will send out the meeting info to the community
 
 2. Swami will send an email to Kyle Mestery requesting a time slot for 
 service chaining topic in Neutron design session so that we can get all 
 interested parties in one room for a good face-to-face discussion.
 
 3. Cathy will discuss with Carlos offline about incorporating the traffic 
 steering idea into the service chain BPs.
 
 4. Service chaining is a complicated solution involving management plane, 
 control plane, and data plane as well as multiple components. The consensus 
 is to first design/implement the two Neutron related service chain BPs 
 https://review.openstack.org/#/c/177946 and get the feature approved by 
 Neutron Core/Driver team and implemented for the OpenStack L release. If we 
 can not get a slot in the design session, we will meet at the service chain 
 presentation on May 18 and find a room to discuss how to move forward with 
 this feature development in OpenStack.
 
 Feel free to chime in if I miss any points.
 
 Thanks,
 Cathy
 
 
 

 __
 OpenStack Development 

Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Dieterly, Deklan
Given Python’s inherent inability to scale (GIL) relative to other 
languages/platforms, have there been any serious discussions on allowing other 
more scalable languages into the OpenStack ecosystem when 
concurrency/scalability is paramount?

Regards.
--
Deklan Dieterly
Hewlett-Packard Company
Sr. Systems Software Engineer
HP Cloud


From: Eugene Nikanorov enikano...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, May 11, 2015 at 6:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

 All production Openstack applications today are fully serialized to only be 
 able to emit a single query to the database at a time;
True. That's why any deployment configures tons (tens) of workers of any 
significant service.

  When I talk about moving to threads, this is not a won't help or hurt kind 
 of issue, at the moment it's a change that will immediately allow massive 
 improvement to the performance of all Openstack applications instantly.
Not sure If it will give much benefit over separate processes.
I guess we don't configure many worker for gate testing (at least, neutron 
still doesn't do it), so there could be an improvement, but I guess to enable 
multithreading we would need to fix the same issues that prevented us from 
configuring multiple workers in the gate, plus possibly more.

 We need to change the DB library or dump eventlet.
I'm +1 for the 1st option.

Other option, which is multithreading will most certainly bring concurrency 
issues other than database.

Thanks,
Eugene.


On Mon, May 11, 2015 at 4:46 PM, Boris Pavlovic 
bo...@pavlovic.me wrote:
Mike,

Thank you for saying all that you said above.

Best regards,
Boris Pavlovic

On Tue, May 12, 2015 at 2:35 AM, Clint Byrum 
cl...@fewbar.com wrote:
Excerpts from Mike Bayer's message of 2015-05-11 15:44:30 -0700:

 On 5/11/15 5:25 PM, Robert Collins wrote:
 
  Details: Skip over this bit if you know it all already.
 
  The GIL plays a big factor here: if you want to scale the amount of
  CPU available to a Python service, you have two routes:
  A) move work to a different process through some RPC - be that DB's
  using SQL, other services using oslo.messaging or HTTP - whatever.
  B) use C extensions to perform work in threads - e.g. openssl context
  processing.
 
  To increase concurrency you can use threads, eventlet, asyncio,
  twisted etc - because within a single process *all* Python bytecode
  execution happens inside the GIL lock, so you get at most one CPU for
  a CPU bound workload. For an IO bound workload, you can fit more work
  in by context switching within that one CPU capacity. And - the GIL is
  a poor scheduler, so at the limit - an IO bound workload where the IO
  backend has more capacity than we have CPU to consume it within our
  process, you will run into priority inversion and other problems.
  [This varies by Python release too].
 
  request_duration = time_in_cpu + time_blocked
  request_cpu_utilisation = time_in_cpu/request_duration
  cpu_utilisation = concurrency * request_cpu_utilisation
 
  Assuming that we don't want any one process to spend a lot of time at
  100% - to avoid such at-the-limit issues, lets pick say 80%
  utilisation, or a safety factor of 0.2. If a single request consumes
  50% of its duration waiting on IO, and 50% of its duration executing
  bytecode, we can only run one such request concurrently without
  hitting 100% utilisations. (2*0.5 CPU == 1). For a request that spends
  75% of its duration waiting on IO and 25% on CPU, we can run 3 such
  requests concurrently without exceeding our target of 80% utilisation:
  (3*0.25=0.75).
 
  What we have today in our standard architecture for OpenStack is
  optimised for IO bound workloads: waiting on the
  network/subprocesses/disk/libvirt etc. Running high numbers of
  eventlet handlers in a single process only works when the majority of
  the work being done by a handler is IO.

 Everything stated here is great, however in our situation there is one
 unfortunate fact which renders it completely incorrect at the moment.
 I'm still puzzled why we are getting into deep think sessions about the
 vagaries of the GIL and async when there is essentially a full-on
 red-alert performance blocker rendering all of this discussion useless,
 so I must again remind us: what we have *today* in Openstack is *as
 completely un-optimized as you can possibly be*.

 The most GIL-heavy nightmare CPU bound task you can imagine running on
 25 threads on a ten year old Pentium will run better than the Openstack
 we have today, because we are running a C-based, non-eventlet patched DB
 library within a single 

Re: [openstack-dev] [trove] How we make service vm to connect to management network

2015-05-11 Thread Li Tianqing
1) Would you expect a deployer to create this tenant prior to configuring Trove 
in this manner?
Yes, I think we can use the 'services' tenant.


2) What impact would you expect this to have on quotas? For example, should 
this tenant just have “infinite” quota for CPU/storage etc or should the 
resources be proxied back to the tenant creating the Trove instance?
I think infinite would be preferable. We only count database instances and 
their flavors; we do not care which tenant the nova instance is in. 


3) Provide clear documentation as to how the separate messaging network is 
setup between the Trove control plane and the guest instances.
Is this the final way to set up trove? It limits how users can use their 
private networks. What happens if the trove network has the same cidr as the 
user's private network?
Or is there a better way for the trove-taskmanager to connect to the 
guest instances?


4) Can you specify which management operations are of most interest to you?
quotas.


5 ) I’m confused by this comment: The client can not fully use the api. It is 
.  I think may be trove developers think all trove user are nice person who 
will never curse.” Would you mind providing a little more explanation about 
what you are getting at?
   I found that the trove-client cannot use the mgmt api. That is why I said the 
client cannot fully use the api. Is that clear?


--

Best
Li Tianqing

At 2015-05-12 00:48:16, Doug Shelley d...@tesora.com wrote:

Li,


Thanks for your input – definitely useful to get your perspective on some of 
the challenges implementing Trove in production. I definitely have an interest 
in understanding how we can improve the project going forward to address some 
of these concerns. 


Given that Summit is coming up (now one week away!) it seems like it might be 
useful to collect some more info on the requirements and convene a session 
during design summit for the community to discuss. 


As I understand it, what you are looking for is:

1. Make it easily configurable for Trove to allocate Nova instances within a 
particular named tenant. Some questions for you:
   - Would you expect a deployer to create this tenant prior to configuring 
Trove in this manner?
   - What impact would you expect this to have on quotas? For example, should 
this tenant just have “infinite” quota for CPU/storage etc., or should the 
resources be proxied back to the tenant creating the Trove instance?
2. Provide clear documentation as to how the separate messaging network is set 
up between the Trove control plane and the guest instances.
3. Implement a CLI for Trove management. Some questions for you:
   - Can you specify which management operations are of most interest to you?
   - I’m confused by this comment: “The client can not fully use the api. It is 
.  I think may be trove developers think all trove user are nice person who 
will never curse.” Would you mind providing a little more explanation about 
what you are getting at?
Regards,
Doug


From: Li Tianqing jaze...@163.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, May 11, 2015 at 5:44 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] How we make service vm to connect to 
management network



Hello,
 Now:
  The VM created by Trove has the trove-guestagent installed. The agent 
should connect to the RabbitMQ on the management network for notifications and 
billing.
  Right now, the Trove VM can boot with two or more network interfaces: one is 
the user's, the other is Trove-defined (as set in the configuration).
 Problems:
 1) The VM is put into the user's tenant, so the user can log into this VM, 
which is quite insecure. We could overwrite remote.py to put the VM into the 
Trove tenant.
  But after that, the user can no longer start or stop the instance, and the 
network IP used to connect to MySQL can no longer be retrieved.
  Then we would have to overwrite the instance view to add that information.
  We should make the decision now whether putting the VM into Trove's tenant is 
better than putting it into the user's tenant. If so, we should add APIs and 
rewrite the views, not just give the choice to Trove's users.
 Because we are the developers of Trove, we should know which is better.
 2) If we deploy Trove like that, the user cannot fully use private networks, 
because there is a chance that the user-defined network has the same CIDR as 
the Trove-defined network,
   in which case packets cannot be sent out. We should also try other 
deployments that let Trove connect to the management RabbitMQ, for example 
making the VM able to
 pass through to the host that runs it, since that deployment does not 
limit the user's use of private networks. So I think we should discuss this 
problem at length.

3) We should add a mgmt CLI quickly. The client can not fully use the API. 
It is .  I think maybe the Trove developers think all Trove users are nice 
people who will never curse.
  
Maybe I am not right. But I am 

Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Robert Collins
On 12 May 2015 at 10:12, Attila Fazekas afaze...@redhat.com wrote:





 If you can illustrate a test script that demonstrates the actual failing
 of OS threads that does not occur greenlets here, that would make it
 immediately apparent what it is you're getting at here.


 http://www.fpaste.org/220824/raw/

 I just put together a hello world C example and a hello world threading example,
 and replaced the print with sleep(3).

 When I use the sleep(3) from python, the 5-thread program runs in ~3 seconds,
 when I use the sleep(3) from native code, it runs ~15 sec.

 So yes, it is very likely a GIL lock wait related issue,
 when the native code is not assisting.

Your test code isn't releasing the GIL here, and I'd expect C DB
drivers to be releasing the GIL: you've illustrated how a C extension
can hold the GIL, but not whether that's happening.

 Do you need a DB example, by using the mysql C driver,
 and waiting in an actual I/O primitive ?

waiting in an I/O primitive is fine as long as the GIL has been released.
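
The distinction is easy to demonstrate without writing any C: Python's own 
time.sleep releases the GIL while blocked, so OS threads sleeping in it overlap 
instead of serializing. A minimal sketch (timings are approximate; a C 
extension that held the GIL during the call would instead run the threads back 
to back, as in Attila's paste):

```python
import threading
import time

def io_bound():
    # time.sleep drops the GIL while blocked, like a well-behaved
    # C DB driver waiting in an I/O primitive
    time.sleep(0.5)

def run_threads(target, n=4):
    threads = [threading.Thread(target=target) for _ in range(n)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

# Four threads blocked in sleep overlap: total wall time is ~0.5s,
# not ~2.0s, because each thread releases the GIL while waiting.
elapsed = run_threads(io_bound)
print("%.1fs" % elapsed)
```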

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-11 Thread Steve Martinelli
The Keystone team actually uses ascii diagrams in some of our specs, [0] 
for instance.
I might be in the minority here, but I actually like them and find them 
easy to create and read.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/kilo/websso-portal.html


Thanks,

Steve Martinelli
OpenStack Keystone Core

Joe Gordon joe.gord...@gmail.com wrote on 05/11/2015 05:57:48 PM:

 From: Joe Gordon joe.gord...@gmail.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
 Date: 05/11/2015 05:59 PM
 Subject: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?
 
 When learning about how a project works one of the first things I 
 look for is a brief architecture description along with a diagram. 
 For most OpenStack projects, all I can find is a bunch of random 
 third party slides and diagrams.
 
 Most Individual OpenStack projects have either no architecture 
 diagram or ascii art. Searching for 'OpenStack X architecture' where
 X is any of the OpenStack projects turns up pretty sad results. For 
 example, Heat [0] and Keystone [1] have no diagram. Nova, on the other 
 hand, does have a diagram, but it's ascii art [2]. I don't think ascii
 art makes for great user-facing documentation (for any kind of user).
 
 So how can we do better than ascii-art architecture diagrams?
 
 [0] http://docs.openstack.org/developer/heat/architecture.html
 [1] http://docs.openstack.org/developer/keystone/architecture.html
 [2] http://docs.openstack.org/developer/nova/devref/architecture.html
 


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Angus Lees
On Tue, 12 May 2015 at 05:08 Mike Bayer mba...@redhat.com wrote:

 On 5/11/15 2:02 PM, Attila Fazekas wrote:
  The scary part of a blocking I/O call is when you have two
  python thread (or green thread) and one of them is holding a DB lock the
 other
  is waiting for the same lock in a native blocking I/O syscall.
 that's a database deadlock and whether you use eventlet, threads,
 asycnio or even just two transactions in a single-threaded script, that
 can happen regardless.  if your two eventlet non blocking greenlets
 are waiting forever for a deadlock,  you're just as deadlocked as if you
 have OS threads.


Not true (if I understand the situation Attila is referring to).

 If you do a read(2) in native code, the python itself might not be able
 to preempt it
  Your transaction might be finished with `DB Lock wait timeout`,
  with 30 sec of doing nothing, instead of scheduling to another
 python thread,
  which would be able to release the lock.


 Here's the you're losing me part because Python threads are OS
 threads, so Python isn't directly involved trying to preempt anything,
 unless you're referring to the effect of the GIL locking up the
 program.   However, it's pretty easy to make two threads in Python hit a
 database and do a deadlock against each other, and the rest of the
 program's threads continue to run just fine; in a DB deadlock situation
 you are blocked on IO and IO releases the GIL.

 If you can illustrate a test script that demonstrates the actual failing
 of OS threads that does not occur greenlets here, that would make it
 immediately apparent what it is you're getting at here.


1. Thread A does something that takes a lock on the DB side
2. Thread B does something that blocks waiting for that same DB lock
3. Depends on the threading model - see below

In a true preemptive threading system (eg: regular python threads), (3)
is:

3.  Eventually A finishes its transaction/whatever, commits and releases
the DB lock
4. B then takes the lock and proceeds
5. Profit

However, in a system where B's DB client can't be preempted (eg: eventlet
or asyncio calling into a C-based mysql library, and A and B are running on
the same underlying kernel thread), (3) is:

3. B will never be preempted, A will never be rescheduled, and thus A will
never complete whatever it was doing.
4. Deadlock (in mysql-python's case, until a deadlock timer raises an
exception and kills B 30s later)
5. Sadness.  More specifically, we add a @retry to paper over the
particular observed occurrence and then repeat this discussion on os-dev
when the topic comes up again 6 months later.

Note that this is not the usual database transaction deadlock caused by A
and B each taking a lock and then trying to take the other's lock - this is
a deadlock purely in the client-side code caused entirely by the lack of
preemption during an otherwise safe series of DB operations.

See my oslo.db unittest in Ib35c95defea8ace5b456af28801659f2ba67eb96 that
reproduces the above with eventlet and allows you to test the behaviour of
various DB drivers.
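
The preemptive branch of the scenario (steps 3-5 of the first list) can be 
sketched with plain OS threads and an ordinary lock standing in for the DB 
lock; under eventlet with an unpatched C driver and both greenlets on one 
kernel thread, B's blocking wait would never yield back to A and the same 
sequence deadlocks:

```python
import threading
import time

db_lock = threading.Lock()  # stands in for the server-side DB lock
order = []

def thread_a():
    with db_lock:           # step 1: A takes the lock
        time.sleep(0.2)     # ... and does some work while holding it
        order.append("A done")
    # step 3 (preemptive case): A commits and releases the lock

def thread_b():
    time.sleep(0.05)        # make sure A grabs the lock first
    with db_lock:           # step 2: B blocks waiting for the same lock
        order.append("B done")  # step 4: B proceeds once A releases

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start()
a.join(); b.join()
print(order)  # ['A done', 'B done'] -- no deadlock with preemptive threads
```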

(zzzeek: I know you've already seen all of the above in previous
discussions, so sorry for repeating).

 - Gus


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-11 Thread Zane Bitter

Hello!

This looks like a perfect soapbox from which to talk about my favourite 
issue ;)


You're right about the ssh idea, for the reasons discussed related to 
networking and a few more that weren't (e.g. users shouldn't have to and 
generally don't want to give their private SSH keys to cloud services). 
I didn't know, or had forgotten, about the message queue implementation 
in Murano and while I think that's the correct shape for the solution, 
as long as the service in question is not multi-tenant capable it's a 
non-starter for a public clouds at least and probably many private 
clouds as well (after all, if you don't need multi-tenancy then why are 
you using OpenStack?).


There's been a tendency within the application-facing OpenStack projects 
to hack together whatever local solutions to problems that we can in 
order to make progress without being held up by other projects. Let's 
take a moment to acknowledge that Heat is both the earliest and the 
biggest offender here, and that I am as culpable as anyone in the 
current state of affairs. There are multiple reasons for how things have 
gone - part of it is that it turned out we developed services in the 
wrong order, starting at too high a level. Part of it, frankly, is due 
to that element of the community that maintains a hostile position 
toward application-facing services and have used their influence in the 
community to maintain a disincentive against integrating projects 
together.[1] (If deployment of your project is discouraged that's one 
thing, but if it depends on another project whose deployment is also 
being discouraged then the hurdle you have to jump over is twice the 
height.)


That said, I think we're at the point where we are hurting ourselves 
more than anyone else is by failing to come up with coherent, 
cross-project solutions.


The problem articulated in this thread is not an isolated one. It's part 
of a more general pattern that affects a lot of projects: we need a way 
for the cloud to communicate to applications running in it. Angus 
started a recent discussion of this already on the list.[2] The 
requirements, IMHO, are roughly:


 * Reliability - we must be able to guarantee delivery to applications
 * Asynchrony - the cloud cannot block on user-controlled processes
 * Multitenancy - this is table stakes for OpenStack
 * Access control - even within tenants, we need to trust guest VMs 
minimally


IMNSHO Zaqar messages are the obvious choice for the transport here. (Or 
something very similar in shape to Zaqar - but it'd be much better to 
join forces with the Zaqar team to improve it where necessary than to 
start a new project.) I really believe that if we work together to come 
up with consistent solutions to these problems that keep popping up 
across OpenStack, we can prove wrong all the naysayers who think that 
application-facing services are only for proprietary clouds. I wrote up 
my vision for why that's important and what the first steps are here:


http://www.zerobanana.com/archive/2015/04/24#a-vision-for-openstack

Note that there are some subtleties that not everyone here will be able 
to contribute directly to fixing. For example, as I highlight in that 
post, Keystone is built around the concept that applications never talk 
to the cloud. But there are lots of other things people can work on now 
that would really make a big difference. For Mistral and Murano 
specifically, and in rough order of priority:


 * Add an action in Mistral for sending a message to a Zaqar queue. 
This is easy and there's no reason you couldn't do it right now.
 * Encourage any deployers and distributors you know (or, ahem, may 
work for ;) to make Zaqar available as an option.
 * Add a way to trigger a Mistral workflow with a Zaqar message. This 
is one piece in the puzzle to build user-configurable messaging flows 
between OpenStack services.[3]
 * Make Zaqar an alternative to Rabbit for communicating to the Murano 
agent.
 * Use your experience in implementing notifications over email and the 
like in Mistral to help the Zaqar team to add the notification features 
they've long been planning. These could take the form of microservices 
listening on a Zaqar queue. You get the reliable, asynchronous queuing 
semantics for free and *every* service and user can benefit from your work.
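
As a rough sketch of the first bullet, posting to a Zaqar queue is one 
authenticated HTTP call. The request below follows the Zaqar v2 wire format 
(message list wrapped in a "messages" key, a required Client-ID header); the 
endpoint, queue name, and token are illustrative placeholders, not a 
definitive client:

```python
import json
import uuid

def zaqar_post_request(queue, body, ttl=300,
                       base_url="http://zaqar.example.com:8888"):
    """Build the pieces of a 'post messages' call in the Zaqar v2 wire
    format.  The base_url, queue name, and token are placeholders."""
    return {
        "url": "%s/v2/queues/%s/messages" % (base_url, queue),
        "headers": {
            "Client-ID": str(uuid.uuid4()),      # Zaqar requires a client id
            "X-Auth-Token": "<keystone-token>",  # placeholder credential
            "Content-Type": "application/json",
        },
        "data": json.dumps({"messages": [{"ttl": ttl, "body": body}]}),
    }

# e.g. a Mistral action notifying an application that a workflow finished
req = zaqar_post_request("workflow-events", {"event": "deploy.finished"})
print(req["url"])
```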


Imagine if there were one place where we implemented reliable queuing 
semantics at cloud scale, and when we added e.g. long-polling or 
WebSockets everyone could benefit immediately.[4] Imagine if there were 
one place for notifications, at cloud scale, for operators to secure. 
(How many webhook implementations are there in OpenStack right now? How 
many of them are actually secure against malicious users?) One format 
for messages between services so that users can connect up their own 
custom pipelines. We're not that far away! All of this is within reach 
if we work together.


Thanks for reading. Please grab me at summit if 

Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Robert Collins
On 10 May 2015 at 03:26, John Garbutt j...@johngarbutt.com wrote:
 On 9 May 2015 at 15:02, Mike Bayer mba...@redhat.com wrote:
 On 5/9/15 6:45 AM, John Garbutt wrote:

 I am leaning towards us moving to making DB calls with a thread pool and
 some fast C based library, so we get the 'best' performance. Is that a crazy
 thing to be thinking? What am I missing here? Thanks, John

So 'best' performance, and the number of processes we have are all
tied together.

tl;dr: the number of Python processes required to handle a concurrency
of N requests for a service is given by
N * cpu_fraction / (1 - safety_factor), where cpu_fraction =
avg_request_cpu_use/(avg_request_cpu_use+avg_request_time_blocking)
When requests are CPU bound, you need one process per concurrent request.
When requests are IO bound, you can multiplex requests into a process,
until the sum of the CPU work per second exceeds your target utilisation
(which I like to keep down around 0.8 to leave leeway for bursts).

Threads don't help this at all. They don't hinder it either (broadly
speaking - Mike has very specific performance metrics that show the
overheads within the system of different multiplexing approaches).
Threads are useful for dealing with things that expect threads, like
most DB libraries. Using a thread pool is fine, but don't expect it to
alter the fundamentals around how many processes we need.

Details: Skip over this bit if you know it all already.

The GIL plays a big factor here: if you want to scale the amount of
CPU available to a Python service, you have two routes:
A) move work to a different process through some RPC - be that DB's
using SQL, other services using oslo.messaging or HTTP - whatever.
B) use C extensions to perform work in threads - e.g. openssl context
processing.

To increase concurrency you can use threads, eventlet, asyncio,
twisted etc - because within a single process *all* Python bytecode
execution happens inside the GIL lock, so you get at most one CPU for
a CPU bound workload. For an IO bound workload, you can fit more work
in by context switching within that one CPU capacity. And - the GIL is
a poor scheduler, so at the limit - an IO bound workload where the IO
backend has more capacity than we have CPU to consume it within our
process, you will run into priority inversion and other problems.
[This varies by Python release too].

request_duration = time_in_cpu + time_blocked
request_cpu_utilisation = time_in_cpu/request_duration
cpu_utilisation = concurrency * request_cpu_utilisation

Assuming that we don't want any one process to spend a lot of time at
100% - to avoid such at-the-limit issues, let's pick say 80%
utilisation, or a safety factor of 0.2. If a single request consumes
50% of its duration waiting on IO, and 50% of its duration executing
bytecode, we can only run one such request concurrently without
hitting 100% utilisation. (2*0.5 CPU == 1). For a request that spends
75% of its duration waiting on IO and 25% on CPU, we can run 3 such
requests concurrently without exceeding our target of 80% utilisation:
(3*0.25=0.75).
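
The arithmetic above can be sketched as a rough model (my helper names; it 
ignores per-process overhead, bursts beyond the safety margin, and scheduler 
effects):

```python
import math

def request_cpu_fraction(time_in_cpu, time_blocked):
    """Fraction of a request's wall-clock duration spent executing bytecode."""
    return time_in_cpu / (time_in_cpu + time_blocked)

def max_concurrency_per_process(cpu_fraction, target_utilisation=0.8):
    """Requests one process can multiplex without exceeding the target
    utilisation (the GIL caps a process at one CPU)."""
    return int(target_utilisation / cpu_fraction)

def processes_needed(n_concurrent, cpu_fraction, target_utilisation=0.8):
    """Processes required to serve n_concurrent requests at once."""
    return math.ceil(n_concurrent * cpu_fraction / target_utilisation)

# 50% CPU / 50% IO: only one request per process (2 * 0.5 == 1 CPU)
print(max_concurrency_per_process(request_cpu_fraction(0.5, 0.5)))    # 1
# 25% CPU / 75% IO: three requests fit (3 * 0.25 == 0.75 <= 0.8)
print(max_concurrency_per_process(request_cpu_fraction(0.25, 0.75)))  # 3
```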

What we have today in our standard architecture for OpenStack is
optimised for IO bound workloads: waiting on the
network/subprocesses/disk/libvirt etc. Running high numbers of
eventlet handlers in a single process only works when the majority of
the work being done by a handler is IO.

For some of our servers, e.g. Nova-compute, where we're spending a lot
of time waiting on the DB (via the conductor), or libvirt, or VMWare
callouts etc - this makes a lot of sense. In fact its nearly ideal:
we're going to spend stuff all time executing bytecode, and the
majority of time waiting.

For other servers, e.g. heat-engine or murano, were we are doing
complex processing of the state that was stored in the persistent
store backing the system, that ratio is going to change dramatically.

And for some, like nova-conductor, the better and faster we make the
DB layer, the less time we spend blocked, and the *less* concurrency
we can support in a single process. (But hopefully the less
concurrency that is needed, for a given workload).

So - a thread pool doesn't help with the number of

 I'd like to do that but I want the whole Openstack DB API layer in the
 thread pool, not just the low level DBAPI (Python driver) calls.   There's
 no need for eventlet-style concurrency or even less for async-style
 concurrency in transactionally-oriented code.

 Sorry, not sure I get which DB API is which.

 I was thinking we could dispatch all calls to this API into a thread pool:
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py

That would work I think.

 I guess an alternative is to add this in the objects layer, on top of
 the rpc dispatch:
 https://github.com/openstack/nova/blob/master/nova/objects/base.py#L188
 But that somehow feels like a layer violation, maybe its not.

No opinion here, sorry :)

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud


[openstack-dev] [Neutron] service chaining feature development meeting

2015-05-11 Thread Cathy Zhang
Hello everyone,


Our next service chaining feature development meeting will be 10am-11am May 12th 
Pacific time. Anyone interested in this feature development is welcome to 
join the meeting and contribute to the service chaining feature in 
OpenStack.



OpenStack BP discussion for service chaining
Please join the meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/199553557, meeting password: 199-553-557
You can also dial in using your phone.
United States +1 (224) 501-3212
Access Code: 199-553-557

-
Following are the links to the Neutron related service chain specs and the bug 
IDs. Feel free to sign up and add you comments/input to the BPs.
https://review.openstack.org/#/c/177946
https://bugs.launchpad.net/neutron/+bug/1450617
https://bugs.launchpad.net/neutron/+bug/1450625



Thanks,

Cathy





[openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-11 Thread Joe Gordon
When learning about how a project works one of the first things I look for
is a brief architecture description along with a diagram. For most
OpenStack projects, all I can find is a bunch of random third party slides
and diagrams.

Most Individual OpenStack projects have either no architecture diagram or
ascii art. Searching for 'OpenStack X architecture' where X is any of the
OpenStack projects turns up pretty sad results. For example, Heat [0] and
Keystone [1] have no diagram. Nova, on the other hand, does have a diagram,
but it's ascii art [2]. I don't think ascii art makes for great user-facing
documentation (for any kind of user).

So how can we do better than ascii-art architecture diagrams?

[0] http://docs.openstack.org/developer/heat/architecture.html
[1] http://docs.openstack.org/developer/keystone/architecture.html
[2] http://docs.openstack.org/developer/nova/devref/architecture.html


[openstack-dev] [nova-scheduler] Scheduler sub-group (gantt) meeting 5/12 agenda

2015-05-11 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (9:00AM MDT)

Try and have a short meeting, basically one topic:

1) Vancouver design summit - are we ready?

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread shihanzhang
Hi Eric,


Huawei is also interested in this BP, Hope it can be discussed during the 
design summit.


Thanks,
shihanzhang




At 2015-05-12 08:23:07, Karthik Natarajan natar...@brocade.com wrote:


Hi Eric,

 

Brocade is also interested in the VLAN aware VM’s BP. Let’s discuss it during 
the design summit.

 

Thanks,

Karthik

 

From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: Monday, May 11, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

 

Hi Eric,

 

Cisco is also interested in the kind of VLAN trunking feature that your 
VLAN-aware VM’s BP describe. If this could be achieved in Liberty it’d be great.

Perhaps your BP could be brought up during one of the Neutron sessions in 
Vancouver, e.g., the one on OVN since there seems to be some similarities?

 

Thanks

Bob

 

 

From: Erik Moe erik@ericsson.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: fredag 8 maj 2015 06:29
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

 

 

Hi,

 

I have not been able to work on upstreaming this for some time now. But 
now it looks like I may make another attempt. Who else is interested in this, 
as a user or to help contributing? If we get some traction we can have an IRC 
meeting sometime next week.

 

Thanks,

Erik

 

 

From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: den 4 maj 2015 18:42
To:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

 

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see 
any work on VLAN-aware VMs for Liberty.  There is a blueprint [1] and spec [2] 
which were deferred from Kilo - is this something anyone is looking at as a 
Liberty candidate?  I looked but didn't find any recent work - is there 
somewhere else work on this is happening?  No-one has listed it on the liberty 
summit topics[3] etherpad, which could mean it's uncontroversial, but given 
history on this, I think that's unlikely.

 

cheers,

Scott

 

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

[2]: https://review.openstack.org/#/c/94612

[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Chris Friesen

On 05/11/2015 08:22 PM, Jay Pipes wrote:

c) Many OpenStack services, including Nova, Cinder, and Neutron, when looked at
from a thousand-foot level, are little more than glue code that pipes out to a
shell to execute system commands (sorry, but it's true).


No apologies necessary. :)


So, bottom line for me: focus on the things that will have the biggest impact to
long-term cost reduction of our codebase.


+1


So, to me, the highest priority performance and scale fixes actually have to do
with the simplification of our subsystems and architecture, not with whether we
use mysql-python, PyMySQL, Python vs. Scala vs. Rust, or any other distractions.


Arguably if we're actually seeing performance issues then it's not a distraction 
but rather a real problem that needs fixing.


But I agree that we shouldn't be trying to optimize the performance of code that 
isn't causing problems.


Chris



[openstack-dev] [Glance] [all] glance_store release 0.5.0

2015-05-11 Thread Nikhil Komawar

The glance_store release management team is pleased to announce:

Release of glance_store version 0.5.0

Please find the details related to the release at:

https://launchpad.net/glance-store/+milestone/0.5.0

Please report the issues through launchpad:

https://bugs.launchpad.net/glance-store

Thanks,
-Nikhil



Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova

2015-05-11 Thread John Villalovos
On Sat, May 9, 2015 at 3:10 AM, John Garbutt j...@johngarbutt.com wrote:
 On 6 May 2015 at 19:04, John Villalovos openstack@sodarock.com wrote:
 JohnG,

 I work on Ironic and would be willing to be a cross project liaison for Nova
 and Ironic.  I would just need a little info on what to do from the Nova
 side.  Meetings to attend, web pages to monitor, etc...

 I assume I would start with this page:
 https://bugs.launchpad.net/nova/+bugs?field.tag=ironic

 And try to work with the Ironic and Nova teams on getting bugs resolved.

 I would appreciate any other info and suggestions to help improve the
 process.

 Thank you for stepping up to help us here.

 I have added your name on this list:
 https://wiki.openstack.org/wiki/Nova#People
 (if you can please add your IRC handle for me, that would be awesome)

 In terms of what's required, that's really up to what works for you.

 The top things that come to mind:
 * Raise ironic questions to nova in nova-meetings (and maybe v.v.)
 * For ironic features that need exposing in Nova, track those
 * Help triage Nova's ironic bugs into Nova bugs and Ironic bugs
 * Try to find folks to fix important ironic bugs

 But fundamentally, lets just see what works, and evolve the role to
 match what works for you.

 I hope that helps.

JohnG,

Thanks for the info.

I have added my IRC nick to:
https://wiki.openstack.org/wiki/Nova#People

I have added the Nova meetings to my calendar.  So I will start attending them.

Thanks,
John





 Thanks,
 John

 John

 On Wed, May 6, 2015 at 2:55 AM, John Garbutt j...@johngarbutt.com wrote:

 On 6 May 2015 at 09:39, Lucas Alvares Gomes lucasago...@gmail.com wrote:
  Hi
 
  I noticed last night that there are 23 bugs currently filed in nova
  tagged as ironic related. Whilst some of those are scheduler issues, a
  lot of them seem like things in the ironic driver itself.
 
  Does the ironic team have someone assigned to work on these bugs and
  generally keep an eye on their driver in nova? How do we get these
  bugs resolved?
 
 
  Thanks for this call out. I don't think we have anyone specifically
  assigned to keep an eye on the Ironic
  Nova driver, we would look at it from time to time or when someone ask
  us to in the Ironic channel/ML/etc...
  But that said, I think we need to pay more attention to the bugs in
  Nova.
 
  I've added one item about it to be discussed in the next Ironic
  meeting[1]. And in the meantime, I will take a
  look at some of the bugs myself.
 
  [1]
  https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

 Thanks to you both for raising this and pushing on this.

 Maybe we can get a named cross project liaison to bridge the Ironic
 and Nova meetings. We are working on building a similar pattern for
 Neutron. It doesn't necessarily mean attending every nova-meeting,
 just someone to act as an explicit bridge between our two projects?

 I am open to whatever works though, just hoping we can be more
 proactive about issues and dependencies that pop up.

 Thanks,
 John









Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2015-05-11 15:44:30 -0700:
 
 On 5/11/15 5:25 PM, Robert Collins wrote:
 
  Details: Skip over this bit if you know it all already.
 
  The GIL plays a big factor here: if you want to scale the amount of
  CPU available to a Python service, you have two routes:
  A) move work to a different process through some RPC - be that DB's
  using SQL, other services using oslo.messaging or HTTP - whatever.
  B) use C extensions to perform work in threads - e.g. openssl context
  processing.
 
  To increase concurrency you can use threads, eventlet, asyncio,
  twisted etc - because within a single process *all* Python bytecode
  execution happens inside the GIL lock, so you get at most one CPU for
  a CPU bound workload. For an IO bound workload, you can fit more work
  in by context switching within that one CPU capacity. And - the GIL is
  a poor scheduler, so at the limit - an IO bound workload where the IO
  backend has more capacity than we have CPU to consume it within our
  process, you will run into priority inversion and other problems.
  [This varies by Python release too].
 
  request_duration = time_in_cpu + time_blocked
  request_cpu_utilisation = time_in_cpu/request_duration
  cpu_utilisation = concurrency * request_cpu_utilisation
 
  Assuming that we don't want any one process to spend a lot of time at
  100% - to avoid such at-the-limit issues, lets pick say 80%
  utilisation, or a safety factor of 0.2. If a single request consumes
  50% of its duration waiting on IO, and 50% of its duration executing
  bytecode, we can only run one such request concurrently without
  hitting 100% utilisations. (2*0.5 CPU == 1). For a request that spends
  75% of its duration waiting on IO and 25% on CPU, we can run 3 such
  requests concurrently without exceeding our target of 80% utilisation:
  (3*0.25=0.75).
 
  What we have today in our standard architecture for OpenStack is
  optimised for IO bound workloads: waiting on the
  network/subprocesses/disk/libvirt etc. Running high numbers of
  eventlet handlers in a single process only works when the majority of
  the work being done by a handler is IO.
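The utilisation arithmetic quoted above can be checked in a few lines of Python; the figures (the 80% target, the 50/50 and 75/25 splits) are the ones from Robert's worked example, not measurements of any real service:

```python
def request_cpu_utilisation(time_in_cpu, time_blocked):
    """Fraction of a request's wall-clock duration spent executing bytecode."""
    return time_in_cpu / (time_in_cpu + time_blocked)

def max_concurrency(time_in_cpu, time_blocked, target=0.8):
    """How many such requests fit in one process without exceeding the
    target utilisation -- the GIL limits us to at most one core."""
    return int(target // request_cpu_utilisation(time_in_cpu, time_blocked))

# 50% CPU / 50% IO: only one request fits under the 80% target (1 * 0.5 = 0.5).
print(max_concurrency(0.5, 0.5))    # -> 1
# 25% CPU / 75% IO: three requests fit (3 * 0.25 = 0.75 <= 0.8).
print(max_concurrency(0.25, 0.75))  # -> 3
```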
 
 Everything stated here is great, however in our situation there is one 
 unfortunate fact which renders it completely incorrect at the moment.   
 I'm still puzzled why we are getting into deep think sessions about the 
 vagaries of the GIL and async when there is essentially a full-on 
 red-alert performance blocker rendering all of this discussion useless, 
 so I must again remind us: what we have *today* in Openstack is *as 
 completely un-optimized as you can possibly be*.
 
 The most GIL-heavy nightmare CPU bound task you can imagine running on 
 25 threads on a ten year old Pentium will run better than the Openstack 
 we have today, because we are running a C-based, non-eventlet patched DB 
 library within a single OS thread that happens to use eventlet, but the 
 use of eventlet is totally pointless because right now it blocks 
 completely on all database IO.   All production Openstack applications 
 today are fully serialized to only be able to emit a single query to the 
 database at a time; for each message sent, the entire application blocks 
 an order of magnitude more than it would under the GIL waiting for the 
 database library to send a message to MySQL, waiting for MySQL to send a 
 response including the full results, waiting for the database to unwrap 
 the response into Python structures, and finally back to the Python 
 space, where we can send another database message and block the entire 
 application and all greenlets while this single message proceeds.
 
 To share a link I've already shared about a dozen times here, here's 
 some tests under similar conditions which illustrate what that 
 concurrency looks like: 
 http://www.diamondtin.com/2014/sqlalchemy-gevent-mysql-python-drivers-comparison/.
  
 MySQLdb takes *20 times longer* to handle the work of 100 sessions than 
 PyMySQL when it's inappropriately run under gevent, when there is 
 modestly high concurrency happening.   When I talk about moving to 
 threads, this is not a "won't help or hurt" kind of issue, at the moment
 it's a change that will immediately allow massive improvement to the 
 performance of all Openstack applications instantly.  We need to change 
 the DB library or dump eventlet.
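The serialization Mike describes can be illustrated without MySQL at all. Below, a 50 ms sleep stands in for one database round trip: run back-to-back (which is effectively what greenlets get behind a blocking C driver) the waits add up linearly, while OS threads -- or a driver whose socket IO actually cooperates with the scheduler -- let them overlap. This is a toy model of the effect, not a benchmark of either driver:

```python
import threading
import time

QUERY_TIME = 0.05  # stand-in for one 50 ms database round trip
N = 10

def fake_query():
    time.sleep(QUERY_TIME)  # releases the GIL, like real socket IO would

# Serialized: every "query" blocks the whole process, so total time
# grows linearly with the number of in-flight requests.
start = time.monotonic()
for _ in range(N):
    fake_query()
serial = time.monotonic() - start

# Overlapping: OS threads let the waits run concurrently.
start = time.monotonic()
threads = [threading.Thread(target=fake_query) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
concurrent = time.monotonic() - start

print(f"serialized: {serial:.2f}s, overlapped: {concurrent:.2f}s")
assert concurrent < serial / 2
```

On a typical machine the serialized loop takes roughly 0.5 s and the overlapped one close to 0.05 s -- an order of magnitude from concurrency alone, which is the shape of the gap the linked gevent/MySQLdb comparison shows.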
 
 As far as if we should dump eventlet or use a pure-Python DB library, my 
 contention is that a thread based + C database library will outperform 
 an eventlet + Python-based database library. Additionally, if we make 
 either change, when we do so we may very well see all kinds of new 
 database-concurrency related bugs in our apps too, because we will be 
 talking to the database much more intensively all of a sudden; it is my 
 opinion that a traditional threading model will be an easier environment 
 to handle working out the approach to these issues; we have to assume 
 concurrency at any time 

Re: [openstack-dev] [Ironic] how about those alternating meeting times?

2015-05-11 Thread Ruby Loo
On 11 May 2015 at 18:39, Devananda van der Veen devananda@gmail.com
wrote:

 Lucas inspired me to take a look at the raw numbers ... so I hacked up a
 little python to digest all our meeting logs since we made the switch to
 alternating times.

 in particular, I'd like to point out the number of meetings with less than
 half of our core review team present, ie, where we didn't have quorum to
 make any decisions. There were 6 on Tuesdays (two thirds of all Tuesday
 meetings), but only 1 on Monday (it was the openstack vacation week).

 A few stats below, hackish code here:
 http://paste.openstack.org/show/220234/

 # of meetings by day:
   Monday: 11
   Tuesday: 9

 Total lines in IRC during the meetings by day:
   Monday: 3793
   Tuesday: 2475

 Unique attendees per day:
   Monday: total: 54 - cores: 9
   Tuesday: total: 32 - cores: 5
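(The real script is at the paste.openstack.org link above; for readers without it handy, the core idea -- counting unique nicks per meeting from meetbot-style logs -- fits in a few lines. The log lines and nicks below are made up for illustration:)

```python
import re
from collections import defaultdict

# Hypothetical meetbot-style log excerpt; the real script is at the
# paste link above -- this only sketches the idea.
LOG = """\
14:00:03 <devananda> #startmeeting ironic
14:00:10 <rloo> o/
14:01:22 <lucasagomes> hi
14:02:05 <rloo> question about the spec
"""

def attendees(log_text):
    """Return each nick and how many lines they said during the meeting."""
    counts = defaultdict(int)
    for line in log_text.splitlines():
        m = re.match(r"\S+ <([^>]+)>", line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

print(attendees(LOG))
# -> {'devananda': 1, 'rloo': 2, 'lucasagomes': 1}
```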


Thanks for bringing this up and getting those numbers. The reason for the
alternating times was to (among other things?) accommodate our
contributors in EMEA better [1]. Maybe the question is better posed to
those folks -- was it useful or not? And if not, why? Because the date/time
still didn't work, or because not enough (or the right persons) weren't
there so their issues of interest weren't discussed, or they wouldn't have
attended anyway, or ? And if it was useful, for how many was it useful?
(Devananda's poll will capture some of that info.)

I don't attend the Tuesday meetings but I also don't recall any
decisions/discussions at those meetings that made me think that I disagreed
strongly or couldn't provide feedback after the meeting so I am happy to
have them continue if it is useful to others/the project :-) (Or maybe
that's cuz there wasn't quorum to make decisions.)

--ruby

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050838.html


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-11 Thread Josh Durgin

On 05/08/2015 12:41 AM, Arne Wiebalck wrote:

Hi Josh,

In our case adding the monitor hostnames (alias) would have made only a
slight difference:
as we moved the servers to another cluster, the client received an
authorisation failure rather
than a connection failure and did not try to fail over to the next IP in
the list. So, adding the
alias to list would have improved the chances to hit a good monitor, but
it would not have
eliminated the problem.


Could you provide more details on the procedure you followed to move
between clusters? I missed the separate clusters part initially, and
thought you were simply replacing the monitor nodes.


I’m not sure storing IPs in the nova database is a good idea in general.
Replacing (not adding)
these by the hostnames is probably better. Another approach may be to
generate this part of
connection_info (and hence the XML) dynamically from the local ceph.conf
when the connection
is created. I think a mechanism like this is for instance used to select
a free port for the vnc
console when the instance is started.


Yes, with different clusters only using the hostnames is definitely
the way to go. I agree that keeping the information in nova's db may
not be the best idea. It is handy to allow nova to use different
clusters from cinder, so I'd prefer not generating the connection info
locally. The qos_specs are also part of connection_info, and if changed
they would have a similar problem of not applying the new value to
existing instances, even after reboot. Maybe nova should simply refresh
the connection info each time it uses a volume.

Josh
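To make the hostnames-over-IPs point concrete: if the monitor addresses that flow into connection_info come from something like the ceph.conf fragment below, replacing or moving a monitor only needs a DNS update rather than a database fix-up. The host names are hypothetical:

```ini
# /etc/ceph/ceph.conf -- illustrative fragment, not a recommended layout
[global]
# DNS names (ideally stable aliases) instead of raw IPs, so that
# monitor replacements don't invalidate connection info stored elsewhere.
mon_host = mon1.example.com,mon2.example.com,mon3.example.com
```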




Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Boris Pavlovic
Mike,

Thank you for saying all that you said above.

Best regards,
Boris Pavlovic


[openstack-dev] [release] heat-cfntools 1.3.0

2015-05-11 Thread Steve Baker

We are chuffed to announce the release of:

heat-cfntools 1.3.0: Tools required to be installed on Heat
provisioned cloud instances

For more details, please see the git log history below and:

http://launchpad.net/heat-cfntools/+milestone/1.3.0

Please report issues through launchpad:

http://bugs.launchpad.net/heat-cfntools

Changes in heat-cfntools 1.2.8..1.3.0
--------------------------------------

02acffb README changes to make release_notes.py happy
57f8ae8 Ported tests from mox3 to mock to support Python >= 3.3
f879612 Python 3 compatibility
a7ffb71 Support dnf when specified or yum is missing
9862bd7 Fix RST syntax errors/warnings in README.rst
d96f73c Fixes cfn-hup hooks functionality
16a9a83 Workflow documentation is now in infra-manual

Diffstat (except docs and test files)
-------------------------------------

CONTRIBUTING.rst   |   7 +-
README.rst |  14 +-
heat_cfntools/cfntools/cfn_helper.py   | 151 +-
requirements.txt   |   1 +
test-requirements.txt  |   2 +-
tox.ini|   2 +-
8 files changed, 610 insertions(+), 528 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3e6b445..531eb32 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,0 +5 @@ psutil>=1.1.1,<2.0.0
+six>=1.9.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 3890c0a..5d3b372 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4 +4 @@ hacking>=0.8.0,<0.9
-mox3>=0.7.0
+mock>=1.0




Re: [openstack-dev] [Ironic] how about those alternating meeting times?

2015-05-11 Thread Michael Davies
On Tue, May 12, 2015 at 9:08 AM, Ruby Loo rlooya...@gmail.com wrote:

 Maybe the question is better posed to those folks -- was it useful or not?
 And if not, why? Because the date/time still didn't work, or because not
 enough (or the right persons) weren't there so their issues of interest
 weren't discussed, or they wouldn't have attended anyway, or ? And if it
 was useful, for how many was it useful? (Devananda's poll will capture some
 of that info.)


I found it useful - it's nice to be awake at meeting time :)

There's certainly a subset of the team that I never overlap with now, which
is a shame, but timezones present challenges for a geographically dispersed
team.

Previously the meeting was at 4:30am (or 5:30am depending upon daylight
savings), which was quite hard, but I did make it most weeks.  The new
timeslot of 2:30am/pm (3:30am/pm) is certainly only achievable for me every
other week (no surprises for guessing which one :)

I think it's great that we try and accommodate contributors from all around
the globe!

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-11 Thread Robert Collins
On 12 May 2015 at 11:35, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Mike Bayer's message of 2015-05-11 15:44:30 -0700:

 Anyway, there is additional thought that might change the decision
 a bit. There is one pro to changing to use pymsql vs. changing to
 use threads, and that is that it isolates the change to only database
 access. Switching to threading means introducing threads to every piece
 of code we might touch while multiple threads are active.

I agree.

 It really seems worth it to see if I/O bound portions of OpenStack
 become more responsive with pymysql before embarking on a change to the
 concurrency model. If it doesn't, not much harm done, and if it does,
 but makes us CPU bound, well then we have even more of a reason to set
 out on such a large task.

And yes.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
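For reference, the driver swap discussed in this thread is, at the configuration level, a one-line change to the SQLAlchemy connection URL consumed by each service. The credentials and host below are placeholders:

```ini
[database]
# Before: a plain mysql:// URL selects the C driver (MySQLdb),
# which blocks the whole process under eventlet.
#connection = mysql://nova:secret@db.example.org/nova
# After: the pure-Python PyMySQL driver, whose socket IO eventlet
# monkey-patching can actually make cooperative.
connection = mysql+pymysql://nova:secret@db.example.org/nova
```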



  1   2   >