Re: [openstack-dev] [swift] Go! Swift!

2015-05-08 Thread Clint Byrum
Excerpts from Clay Gerrard's message of 2015-05-07 18:35:23 -0700:
 On Thu, May 7, 2015 at 3:48 PM, Clint Byrum cl...@fewbar.com wrote:
 
  I'm still very curious to hear if anybody has been willing to try to
  make Swift work on pypy.
 
 
 yeah, Alex Gaynor was helping out with it for awhile.  It worked.  And it
 helped.  A little bit.
 
 Probably still worth looking at if you're curious, but I'm not aware of
 anyone who's currently working aggressively to productionize swift running
 on pypy.

So if I take your phrase "A little bit" to mean "Not enough to matter",
then I can imagine there isn't much more that can be done.

It sounds like there are really deep architectural issues in Swift that
need addressing: not just "make code run faster", but "get closer to
the metal" type efficiencies that are being sought.

This should be interesting indeed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-05-08 Thread Kevin Benton
I'm not sure I understand the behavior you are seeing. When your mechanism
driver gets initialized and kicks off processing, all of that should be
happening in the parent PID. I don't know why your child processes start
executing code that wasn't invoked. Can you provide a pointer to the code
or give a sample that reproduces the issue?

I modified the linuxbridge mech driver to try to reproduce it:
http://paste.openstack.org/show/216859/

In the output, I never received any of the init code output I added more
than once, including the function spawned using eventlet.

The only time I ever saw anything executed by a child process was actual
API requests (e.g. the create_port method).
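
[Editorial illustration, not Kevin's actual paste: a minimal sketch of how a
mechanism driver can be instrumented to show which PID runs the init code
versus the API calls. The neutron import path is from the Kilo-era tree and
may differ by release.]

    import os

    import eventlet
    from neutron.plugins.ml2 import driver_api as api


    class PidLoggingMechanismDriver(api.MechanismDriver):
        def initialize(self):
            # Runs once, in the parent process, before the api_workers fork.
            print("initialize() in PID %d" % os.getpid())
            eventlet.spawn_n(self._background_loop)

        def _background_loop(self):
            # Stays in the parent PID: forked workers do not re-run it.
            while True:
                print("background loop in PID %d" % os.getpid())
                eventlet.sleep(10)

        def create_port_postcommit(self, context):
            # API requests are served by the forked workers, so this PID varies.
            print("create_port_postcommit in PID %d" % os.getpid())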


On Thu, May 7, 2015 at 6:08 AM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Is there a design for how ML2 mechanism drivers are supposed to cope with
 the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and immediately
 kicks off some processing that involves communicating over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue, and
 interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off that
 processing, and do that.

 But how can a mechanism driver know when the Neutron server forking has
 happened?

 Thanks,
 Neil

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Liberty Summit Topics etherpad

2015-05-08 Thread Giulio Fidente

On 04/28/2015 09:43 PM, James Slagle wrote:

On Mon, Apr 6, 2015 at 3:11 PM, James Slagle james.sla...@gmail.com wrote:

I've created an etherpad for TripleO to track the topics we'd like
to discuss at the Liberty Summit:
https://etherpad.openstack.org/p/tripleo-liberty-proposed-sessions

It's also linked from the main Design Summit Planning wiki page:
https://wiki.openstack.org/wiki/Design_Summit/Planning

If you have something you'd like to propose to discuss, please add it
to the etherpad.


TripleO has 2 fishbowl sessions and 2 working sessions at the Summit,
as well as an all day contributor's meetup on Friday.  I'd like to
finalize the topics for the sessions over the next day or 2. We can
continue to refine as needed, but I'd like to get the summaries out
there so that folks can start planning what sessions they want to
attend.

My thinking right now is that we devote one fishbowl session to a
discussion around tripleo-heat-templates. Particularly around refining
the interfaces and what we can further do to enable Docker integration
for a containerized Overcloud. We could also discuss making the
template implementations more composable at the individual service
level, and plans around deprecating the elements based templates.


I won't be at the summit, but the idea of making the params 'more
composable' by distributing them on a per-service basis seems very
interesting to me.


I am not sure to what extent these would need to be matched by a
per-service manifest; it sounds like something that could be done in steps.



For the second fishbowl session, we could make it testing/CI focused.
We could devote some time to talking about diskimage-builder testing,
and TripleO CI as it relates to quintupleo, the puppet modules, and
possibly using the infra clouds. Depending on time and interest, we
could also discuss if and how we might move forward with a devtest
alternative that was more production oriented.


For the CI topic, I would also think about making a Fedora job use the
puppet-stack-config element to configure the seed!



For the working sessions, I don't think we need as much of a defined
summary. But I suspect we could pick a few things to focus on at each
session: tripleo-heat-templates, HA, network architecture,
diskimage-builder testing.

Let me know any feedback/opinions, and I'll get the schedule updated
on sched.org this week. Thanks.


I added a line to the etherpad; it's not something I can show with code,
only a thought:


How about 'changing' our approach for the Undercloud, turning it from a
separate entity into just a 'peculiar' configuration of the Overcloud
(maybe with a different base image)? Sure, there is some complexity, but
we wouldn't have to rethink HA for the Undercloud, for example, nor
duplicate the templates/manifests for it.


--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-08 Thread Nikhil Manchanda

Comments and answers inline.

Li Tianqing writes:

 [...]

1) Why do we put the Trove VM into the user's tenant, not Trove's
tenant? The user can log in on that VM, and that VM must connect to
RabbitMQ. It is quite insecure.
What about putting the VM into the Trove tenant?

While the default configuration of Trove in devstack puts Trove guest
VMs into the users' respective tenants, it's possible to configure Trove
to create VMs in a single Trove tenant. You would do this by
overriding the default novaclient class in Trove's remote.py with one
that creates all Trove VMs in a particular tenant whose user credentials
you will need to supply. In fact, most production instances of Trove do
something like this.
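
[Editorial sketch of the override described above, not the actual Trove
code; the hook name (create_nova_client), the credentials and the novaclient
call are illustrative assumptions.]

    from novaclient import client as nova_client


    def create_nova_client(context):
        # Ignore the end user's credentials in `context`: every guest VM ends
        # up owned by a dedicated (hypothetical) Trove service tenant.  In a
        # real deployment these values would come from trove.conf.
        return nova_client.Client(
            '2',
            'trove-service-user',        # hypothetical service user
            'trove-service-password',    # hypothetical password
            'trove-service-tenant',      # all Trove VMs land in this tenant
            auth_url='http://keystone.example.com:5000/v2.0')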

2) Why is there no Trove mgmt CLI, while the mgmt API is in the code?
Did it disappear forever?

The reason for this is because the old legacy Trove client was rewritten
to be in line with the rest of the openstack clients. The new client
has bindings for the management API, but we didn't complete the work on
writing the shell pieces for it. There is currently an effort to
support Trove calls in the openstackclient, and we're looking to
support the management client calls as part of this as well. If this is
something that you're passionate about, we sure could use help landing
this in Liberty.

3) The trove-guestagent is in the VM; it is connected to the taskmanager
via RabbitMQ. We designed it that way. But is there some best practice
for doing this? How do we make the VM reachable on both the VM network
and the management network?

Most deployments of Trove that I am familiar with set up a separate
RabbitMQ server in cloud that is used by Trove. It is not recommended to
use the same infrastructure RabbitMQ server for Trove for security
reasons. Also most deployments of Trove set up a private (neutron)
network that the RabbitMQ server and guests are connected to, and all
RPC messages are sent over this network.

Hope this helps,

Thanks,
Nikhil

 [...]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Andreas Jaeger

On 05/08/2015 10:02 AM, Robert Collins wrote:

I don't know if they are *intended* to be, but right now there is no
set of versions that can be co-installed, of everything listed in
global requirements.

I don't have a full set of the conflicts (because I don't have a good
automatic trace for 'why X is unresolvable' - its nontrivial).

However right now:
openstack-doc-tools>=0.23
and
oslo.config>=1.11.0


We haven't yet imported the new requirements for oslo.config and
released a new version of openstack-doc-tools. I'll take care of this,


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-05-08 Thread Salvatore Orlando
Just like the Neutron plugin manager, the ML2 driver manager also ensures
drivers are loaded only once, regardless of the number of workers.
What Kevin did proves that drivers are correctly loaded before forking (I
reckon).

However, forking is something to be careful about, especially when using
eventlet. For the plugin my team maintains, we were creating a periodic task
during plugin initialisation.
This led to an interesting condition where API workers were hanging [1].
This situation was fixed with a rather pedestrian fix - by adding a delay.

Generally speaking, I would find it useful to have a way to identify an API
worker, in order to designate a specific one for processing that should not
be made redundant.
On the other hand, I object to my own statement above by saying that API
workers are not supposed to do this kind of processing, which should be
deferred to some other helper process.

Salvatore

[1] https://bugs.launchpad.net/vmware-nsx/+bug/1420278
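
[Editorial sketch of the kind of 'pedestrian fix' mentioned above, not the
actual vmware-nsx patch: deferring the periodic task with eventlet so it
starts only after the API workers have forked.]

    import eventlet


    class ExamplePlugin(object):
        def __init__(self):
            # spawn_after defers the green thread; by the time it runs, the
            # worker processes have already been forked.
            eventlet.spawn_after(30, self._periodic_loop)

        def _periodic_loop(self):
            while True:
                self._sync_state()
                eventlet.sleep(60)

        def _sync_state(self):
            # Placeholder for the plugin's real periodic work.
            pass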

On 8 May 2015 at 09:43, Kevin Benton blak...@gmail.com wrote:

 I'm not sure I understand the behavior you are seeing. When your mechanism
 driver gets initialized and kicks off processing, all of that should be
 happening in the parent PID. I don't know why your child processes start
 executing code that wasn't invoked. Can you provide a pointer to the code
 or give a sample that reproduces the issue?

 I modified the linuxbridge mech driver to try to reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init code output I added more
 than once, including the function spawned using eventlet.

 The only time I ever saw anything executed by a child process was actual
 API requests (e.g. the create_port method).


 On Thu, May 7, 2015 at 6:08 AM, Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Is there a design for how ML2 mechanism drivers are supposed to cope with
 the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and immediately
 kicks off some processing that involves communicating over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue, and
 interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off that
 processing, and do that.

 But how can a mechanism driver know when the Neutron server forking has
 happened?

 Thanks,
 Neil

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][nova][oslo][messaging]

2015-05-08 Thread yatin kumbhare
Hi,

Nova boot with latest devstack results into following error on nova-compute

I'm trying to understand, what has gone wrong?

has anyone seen such error on compute logs?

http://paste.openstack.org/show/216844/

Got this error on repeated setups on ubuntu trusty.

#For this error: libguestfs installed but not usable
(/usr/bin/supermin-helper exited with error status 1
I tried following
$ sudo apt-get install libguestfs-tools
$ sudo update-guestfs-appliance

But, didn't help and hit at MessagingTimeout error.

Regards,
Yatin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-08 Thread Arne Wiebalck
Hi Josh,

In our case adding the monitor hostnames (alias) would have made only a
slight difference: as we moved the servers to another cluster, the client
received an authorisation failure rather than a connection failure and did
not try to fail over to the next IP in the list. So, adding the alias to
the list would have improved the chances of hitting a good monitor, but it
would not have eliminated the problem.

I'm not sure storing IPs in the nova database is a good idea in general.
Replacing (not adding) these with the hostnames is probably better. Another
approach may be to generate this part of connection_info (and hence the XML)
dynamically from the local ceph.conf when the connection is created. I think
a mechanism like this is, for instance, used to select a free port for the
VNC console when the instance is started.
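
[Editorial sketch of that idea, not code from this thread; the config key
spelling and section name are assumptions about a typical ceph.conf.]

    try:
        import configparser                   # Python 3
    except ImportError:
        import ConfigParser as configparser   # Python 2


    def get_mon_hosts(conf_path='/etc/ceph/ceph.conf'):
        parser = configparser.ConfigParser()
        parser.read(conf_path)
        # The key may be spelled either way; values can be hostnames, aliases
        # or IPs, comma separated -- return them verbatim so DNS-based
        # failover keeps working.
        for key in ('mon host', 'mon_host'):
            if parser.has_option('global', key):
                return [h.strip()
                        for h in parser.get('global', key).split(',')
                        if h.strip()]
        return []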

Cheers,
 Arne

—
Arne Wiebalck
CERN IT


On 08 May 2015, at 05:37, David Medberry openst...@medberry.net wrote:

Josh,

Certainly in our case the monitor hosts (in addition to IPs) would have
made a difference.

On Thu, May 7, 2015 at 3:21 PM, Josh Durgin jdur...@redhat.com wrote:
Hey folks, thanks for filing a bug for this:

https://bugs.launchpad.net/cinder/+bug/1452641

Nova stores the volume connection info in its db, so updating that
would be a workaround to allow restart/migration of vms to work.
Otherwise running vms shouldn't be affected, since they'll notice any
new or deleted monitors through their existing connection to the
monitor cluster.

Perhaps the most general way to fix this would be for cinder to return
any monitor hosts listed in ceph.conf (as they are listed, so they may
be hostnames or ips) in addition to the ips from the current monmap
(the current behavior).

That way an out of date ceph.conf is less likely to cause problems,
and multiple clusters could still be used with the same nova node.

Josh

On 05/06/2015 12:46 PM, David Medberry wrote:
Hi Arne,

We've had this EXACT same issue.

I don't know of a way to force an update as you are basically pulling
the rug out from under a running instance. I don't know if it is
possible/feasible to update the virsh xml in place and then migrate to
get it to actually use that data. (I think we tried that to no avail.)
dumpxml=>massage cephmons=>import xml

If you find a way, let me know, and that's part of the reason I'm
replying so that I stay on this thread. NOTE: We did this on icehouse.
Haven't tried since upgrading to Juno but I don't note any change
therein that would mitigate this. So I'm guessing Liberty/post-Liberty
for a real fix.



On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck arne.wieba...@cern.ch wrote:

Hi,

As we swapped a fraction of our Ceph mon servers between the
pre-production and production cluster
— something we considered to be transparent as the Ceph config
points to the mon alias—, we ended
up in a situation where VMs with volumes attached were not able to
boot (with a probability that matched
the fraction of the servers moved between the Ceph instances).

We found that the reason for this is the connection_info in
block_device_mapping which contains the
IP adresses of the mon servers as extracted by the rbd driver in
initialize_connection() at the moment
when the connection is established. From what we see, however, this
information is not updated as long
as the connection exists, and will hence be re-applied without
checking even when the XML is recreated.

The idea to extract the mon servers by IP from the mon map was
probably to get all mon servers (rather
than only one from a load-balancer or an alias), but while our
current scenario may be special, we will face
a similar problem the day the Ceph mons need to be replaced. And
that makes it a more general issue.

For our current problem:
Is there a user-transparent way to force an update of that
connection information? (Apart from fiddling
with the database entries, of course.)

For the general issue:
Would it be possible to simply use the information from the
ceph.conf file directly (an alias in our case)
throughout the whole stack to avoid hard-coding IPs that will be
obsolete one day?

Thanks!
  Arne

—
Arne Wiebalck
CERN IT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [mistral] Mistral 2015.1 released!

2015-05-08 Thread Renat Akhmerov
Hi,

I’d like to announce that Mistral 2015.1 and Mistral Client 0.2 have been 
released!

To see all the details on implemented blueprints and fixed bugs and to download 
tarballs please visit corresponding release pages:
Mistral - https://launchpad.net/mistral/kilo/2015.1
Mistral Client - https://launchpad.net/python-mistralclient/kilo/0.2.0

All documentation can be found at https://wiki.openstack.org/wiki/Mistral

Many thanks to all contributors from all companies for their active and 
insightful participation!

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-08 Thread Chris Jones
Hi

On 5 May 2015 at 12:57, James Slagle james.sla...@gmail.com wrote:

 Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO
 Core.


+1


 I'd also like to give a heads-up to the following folks whose review
 activity is very low for the last 90 days:
 | cmsj **  |   60   2   0   4   266.7% |0 (
 0.0%)  |


I want to pick up my review rate, mostly with a focus on DIB, but I suspect
that will not keep me on track to remain core, which is fine :)

-- 
Cheers,

Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Mistral] SSH workflow action

2015-05-08 Thread Renat Akhmerov
Generally yes, the std.ssh action works as long as the network infrastructure
allows access to a host using the specified IP; it doesn't provide anything
on top of that.


 On 06 May 2015, at 22:26, Fox, Kevin M kevin@pnnl.gov wrote:
 
 This would also probably be a good use case for Zaqar I think. Have a generic 
 run shell commands from Zaqar queue agent, that pulls commands from a Zaqar 
 queue, and executes it.
 The vm's don't have to be directly reachable from the network then. You just 
 have to push messages into Zaqar.

Yes, in Mistral it would be another action that puts a command into a Zaqar
queue. This type of action doesn't exist yet, but it can be plugged in easily.

 Should Mistral abstract away how to execute the action, leaving it up to 
 Mistral how to get the action to the vm?

As I mentioned previously, it should just be a different type of action:
"zaqar.something" instead of "std.ssh". The Mistral engine itself works with
all actions equally; they are basically just functions that we can plug in
and use in the Mistral workflow language. From this standpoint Mistral is
already abstract enough.

 If that's the case, then ssh vs queue/agent is just a Mistral implementation 
 detail?

More precisely: an implementation detail of a Mistral action, which may not
even be a hardcoded part of Mistral; rather, we can plug actions in (using
stevedore underneath).
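
[Editorial sketch of such a pluggable action; the Mistral base class path,
entry-point mechanism and zaqarclient calls are assumptions from memory, not
verified code.]

    from mistral.actions import base
    from zaqarclient.queues import client as zaqar_client


    class ZaqarPostCommandAction(base.Action):
        """Push a shell command into a Zaqar queue instead of SSH-ing."""

        def __init__(self, queue_name, command):
            self.queue_name = queue_name
            self.command = command

        def run(self):
            # An agent inside the VM pulls and executes the command, so the
            # VM does not need to be directly reachable over the network.
            cli = zaqar_client.Client('http://zaqar.example.com:8888',
                                      version=1.1)
            queue = cli.queue(self.queue_name)
            queue.post({'body': {'command': self.command}, 'ttl': 3600})

Registered through a stevedore entry point, an action like this would be
callable from workflows just like std.ssh.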


Renat Akhmerov
@ Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Is Live-migration not supported in CONF.libvirt.images_type=lvm case?

2015-05-08 Thread Rui Chen
Hi all:

I find that bug [1], "block/live migration doesn't work with LVM as
libvirt storage", is marked as 'Fix released', but I don't think this issue
is really solved; I checked the live-migration code and don't find any logic
for handling LVM disks. Please correct me if I'm wrong.

In the bug [1] comments, the only related merged patch is
https://review.openstack.org/#/c/73387/ ; it covers the 'resize/migrate'
code path, not live-migration. And I don't think bug [1] is a duplicate of
bug [2]; they are different use cases, live-migration and migration.

So should we reopen this bug and add some documentation describing that
live-migration is not supported in the current code?

[1]: https://bugs.launchpad.net/nova/+bug/1282643
[2]: https://bugs.launchpad.net/nova/+bug/1270305
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Andreas Jaeger

On 05/08/2015 10:02 AM, Robert Collins wrote:

I don't know if they are*intended*  to be, but right now there is no
set of versions that can be co-installed, of everything listed in
global requirements.

I don't have a full set of the conflicts (because I don't have a good
automatic trace for 'why X is unresolvable' - its nontrivial).

However right now:
openstack-doc-tools>=0.23
and
oslo.config>=1.11.0


These are not the only culprits; grepping requirements in repositories
in the openstack namespace, I see:


castellan/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
ceilometermiddleware/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0

congress/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
glance/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
gnocchi/requirements-py3.txt:oslo.config<1.10.0,>=1.9.3
gnocchi/requirements.txt:oslo.config<1.10.0,>=1.9.3
ironic-lib/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
ironic-python-agent/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0

magnum/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
murano-agent/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
tempest/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
zaqar/requirements-py3.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0
zaqar/requirements.txt:oslo.config>=1.9.3,<1.10.0  # Apache-2.0

Note that some are not in requirements/projects.txt.


Still, openstack-doc-tools should be fixed later today,
Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pecan migration status

2015-05-08 Thread Przemyslaw Kaminski
Ping, with [1] as an additional argument for migrating.

[1]
https://openstack.nimeyo.com/43700/openstack-keystone-rehashing-pecan-falcon-other-wsgi-debate?qa_q=rehashing+pecan

P.

On 03/24/2015 09:09 AM, Przemyslaw Kaminski wrote:
 BTW, the old URLs do not yet exactly match the new ones. There is a need
 to write a test that will go through the whole urls.py list and compare it
 with the new handlers' URLs to make sure nothing was missed.
 
 P.
 
 On 03/24/2015 08:46 AM, Przemyslaw Kaminski wrote:
 Hello,

 I want to summarize the work I've done in my spare time on migrating our API
 to Pecan [1]. This is partially based on Nicolay's previous work [2]. One
 test is still failing there, but it's some DB lock and I'm not 100% sure
 whether this is caused by something not yet done on the Pecan side or by
 some bug that popped up (I was getting a different DB lock before, but
 it disappeared after rebasing a fix for [5]).

 My main contribution here is the 'reverse' method [3], which is not
 provided by default in Pecan. I have kept compatibility with the original
 reverse method in our code. I have additionally added a 'qs' keyword
 argument that is used for adding a query string to the URL (see
 test_node_nic_handler.py::test_change_mac_of_assigned_nics::get_nodes,
 test_node_nic_handler.py::test_remove_assigned_interfaces::get_nodes).
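
[Editorial illustration of the 'reverse' + 'qs' idea; the registry and
handler names are hypothetical, not the actual Nailgun code.]

    try:
        from urllib import urlencode           # Python 2
    except ImportError:
        from urllib.parse import urlencode     # Python 3

    # name -> URL template, normally derived from the Pecan controller tree
    _URLS = {
        'NodeCollectionHandler': '/api/nodes/',
        'NodeNICsHandler': '/api/nodes/{node_id}/interfaces/',
    }


    def reverse(name, kwargs=None, qs=None):
        """Build a URL for a named handler, optionally adding a query string."""
        url = _URLS[name].format(**(kwargs or {}))
        if qs:
            url += '?' + urlencode(qs)
        return url

    # reverse('NodeNICsHandler', {'node_id': 1}, qs={'type': 'ether'})
    # -> '/api/nodes/1/interfaces/?type=ether'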

 I decided to keep Nicolay's original idea of copying all handlers to the v2
 directory rather than just modifying the original v1 handlers, and
 concentrated instead on removing hacks around Pecan as in [6] (with post_all,
 post_one, put_all, put_one, etc.). Merging the current v2 into v1 should
 drastically decrease the number of changed lines in this patchset.

 I have so far found one fault in the URLs in our API that isn't easily
 handled by Pecan (some custom _route function would help) and IMHO
 should be fixed by rearranging URLs instead of adding _route hacks:
 /nodes/interfaces and /nodes/1/interfaces require the same get_all
 method in the interfaces controller. And the only usage of /nodes/interfaces
 is for doing a batch node interface update via PUT. The current v2 can be
 merged into v1 with some effort.

 We sometimes use a PUT request without specifying an object's ID -- this
 is unsupported in Pecan but can be easily hacked by giving a dummy
 keyword argument to the function's definition:

 def put(self, dummy=None)

 Some bugs in tests were found and fixed (for example, wrong content-type
 in headers in [4]).

 I haven't put enough thought into error handling there yet; some stuff
 is implemented in hooks/error.py but I'm not fully satisfied with it.
 I have marked most of the unfinished stuff with a TODO(pkaminski).

 P.

 [1] https://review.openstack.org/158661
 [2] https://review.openstack.org/#/c/99069/6
 [3]
 https://review.openstack.org/#/c/158661/35/nailgun/nailgun/api/__init__.py
 [4]
 https://review.openstack.org/#/c/158661/35/nailgun/nailgun/test/unit/test_handlers.py
 [5] https://bugs.launchpad.net/fuel/+bug/1433528
 [6]
 https://review.openstack.org/#/c/99069/6/nailgun/nailgun/api/v2/controllers/base.py


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-08 Thread Jamie Lennox


- Original Message -
 From: Jay Reslock jresl...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, May 8, 2015 7:42:50 AM
 Subject: Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient 
 works with keystone sessions?
 
 Thanks very much to both of you for your help!
 
 I was able to get to another error now about EndpointNotFound. I will
 troubleshoot more and review the bugs mentioned by Sergey.
 
 -Jason

It's nice to see people using sessions for this sort of script. Just as a
pointer, EndpointNotFound generally means that it couldn't find a URL for the
service you wanted in the service catalog. Have a look at the catalog you're
getting and make sure the heat entry matches what it should; you may have to
change the service_type or interface to match.
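
[Editorial sketch of the session-based flow being discussed, assuming a
python-heatclient recent enough to include the session patches linked below;
the credentials and endpoint values are placeholders.]

    from keystoneclient.auth.identity import v2
    from keystoneclient import session
    from heatclient import client as heat_client

    auth = v2.Password(auth_url='http://keystone.example.com:5000/v2.0',
                       username='demo',
                       password='secret',
                       tenant_name='demo')
    sess = session.Session(auth=auth)

    # If EndpointNotFound is raised, inspect the service catalog and adjust
    # service_type (and endpoint_type/interface) to match the heat entry.
    heat = heat_client.Client('1', session=sess,
                              service_type='orchestration')
    for stack in heat.stacks.list():
        print(stack.stack_name)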

 On Thu, May 7, 2015 at 5:34 PM Sergey Kraynev  skray...@mirantis.com 
 wrote:
 
 
 
 Hi Jay.
 
 AFAIK, it works, but we may have some minor issues. There are several
 patches on review to improve it:
 
 https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:improve-sessionclient,n,z
 
 Also, as I remember, we really did have the bug you mentioned, but the fix
 was merged. Please look:
 https://review.openstack.org/#/c/160431/1
 https://bugs.launchpad.net/python-heatclient/+bug/1427310
 
 Which version of the client do you use? Try to use the code from master; it
 should work.
 Also one note: the best place for such questions is
 openst...@lists.openstack.org or http://ask.openstack.org/ . And of course
 channel #heat in IRC.
 
 Regards,
 Sergey.
 
 On 7 May 2015 at 23:43, Jay Reslock  jresl...@gmail.com  wrote:
 
 
 
 Hi,
 This is my first mail to the group. I hope I set the subject correctly and
 that this hasn't been asked already. I searched archives and did not see
 this question asked or answered previously.
 
 I am working on a client thing that uses the python-keystoneclient and
 python-heatclient api bindings to set up an authenticated session and then
 use that session to talk to the heat service. This doesn't work for heat but
 does work for other services such as nova and sahara. Is this because
 sessions aren't supported in the heatclient api yet?
 
 sample code:
 
 https://gist.github.com/jreslock/a525abdcce53ca0492a7
 
 I'm using fabric to define tasks so I can call them via another tool. When I
 run the task I get:
 
 TypeError: Client() takes at least 1 argument (0 given)
 
 The documentation does not say anything about being able to pass session to
 the heatclient but the others seem to work. I just want to know if this is
 intended/expected behavior or not.
 
 -Jason
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
I don't know if they are *intended* to be, but right now there is no
set of versions that can be co-installed, of everything listed in
global requirements.

I don't have a full set of the conflicts (because I don't have a good
automatic trace for 'why X is unresolvable' - its nontrivial).

However right now:
openstack-doc-tools>=0.23
and
oslo.config>=1.11.0

conflict because openstack-doc-tools has no releases >=0.23
compatible with oslo.config>=1.11.0:
e.g.
Cached requires for ('openstack-doc-tools', (), Version('0.26.0'))
(('argparse', (), SpecifierSet('')), ('oslo.config', (),
SpecifierSet('<1.10.0,>=1.9.3')), ('pbr', (),
SpecifierSet('!=0.7,<1.0,>=0.6')), ('iso8601', (),
SpecifierSet('>=0.1.9')), ('lxml', (), SpecifierSet('>=2.3')),
('demjson', (), SpecifierSet('')), ('babel', (),
SpecifierSet('>=1.3')), ('sphinx', (),
SpecifierSet('!=1.2.0,!=1.3b1,<1.3,>=1.1.2')))

(this is output from the debug trace in my pip resolver branch).
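
[Editorial illustration of the conflict, not part of the original message:
checking the two oslo.config specifier sets with the 'packaging' library
shows there is no version that satisfies both.]

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    global_req = SpecifierSet('>=1.11.0')            # oslo.config in g-r
    doc_tools_req = SpecifierSet('<1.10.0,>=1.9.3')  # openstack-doc-tools 0.26.0

    combined = global_req & doc_tools_req
    for candidate in ('1.9.3', '1.10.0', '1.11.0'):
        print("%s satisfies both: %s"
              % (candidate, Version(candidate) in combined))
    # Every candidate prints False, hence no co-installable solution.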

Now, this is probably never showing up in the gate today, but
something as simple as:
pip install oslo.config
pip install 'openstack-doc-tools>=0.23'
will show the issue:
$ pip install oslo.config
Downloading/unpacking oslo.config
  Downloading oslo.config-1.11.0-py2.py3-none-any.whl (67kB): 67kB downloaded
Requirement already satisfied (use --upgrade to upgrade): six>=1.9.0
in /home/robertc/.virtualenvs/pip-test/lib/python2.7/site-packages
(from oslo.config)
Downloading/unpacking pbr>=0.6,!=0.7,<1.0 (from oslo.config)
  Downloading pbr-0.11.0-py2.py3-none-any.whl (78kB): 78kB downloaded
Downloading/unpacking stevedore>=1.3.0 (from oslo.config)
  Downloading stevedore-1.4.0-py2.py3-none-any.whl
Downloading/unpacking netaddr>=0.7.12 (from oslo.config)
  Downloading netaddr-0.7.14-py2.py3-none-any.whl (1.5MB): 1.5MB downloaded
Requirement already satisfied (use --upgrade to upgrade): argparse in
/usr/lib/python2.7 (from oslo.config)
Requirement already satisfied (use --upgrade to upgrade): pip in
/home/robertc/.virtualenvs/pip-test/lib/python2.7/site-packages (from
pbr>=0.6,!=0.7,<1.0->oslo.config)
Installing collected packages: oslo.config, pbr, stevedore, netaddr
Successfully installed oslo.config pbr stevedore netaddr
Cleaning up...
$ pip install 'openstack-doc-tools>=0.23'
Downloading/unpacking openstack-doc-tools>=0.23
  Downloading openstack_doc_tools-0.26.0-py2.py3-none-any.whl (180kB):
180kB downloaded
Downloading/unpacking iso8601>=0.1.9 (from openstack-doc-tools>=0.23)
  Downloading iso8601-0.1.10.tar.gz
  Running setup.py
(path:/home/robertc/.virtualenvs/pip-test/build/iso8601/setup.py)
egg_info for package iso8601
   Downloading/unpacking oslo.config>=1.9.3,<1.10.0 (from
openstack-doc-tools>=0.23)
  Downloading oslo.config-1.9.3-py2.py3-none-any.whl (67kB): 67kB downloaded
 ^^^


So this will downgrade oslo.config to meet the requirements - and that
could of course be very very surprising to the gate ;)

I don't have a fully formed view yet on complete co-installability in
our global requirements. But without that it may be hard to calculate
a working set that we can pin (if we go all-pinned), and its certainly
harder to write tooling to do any sort of analysis because we have to
start partitioning it into sets that work together, and those that
don't.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
On 8 May 2015 at 20:39, Andreas Jaeger a...@suse.com wrote:
 On 05/08/2015 10:02 AM, Robert Collins wrote:

 I don't know if they are*intended*  to be, but right now there is no
 set of versions that can be co-installed, of everything listed in
 global requirements.

 I don't have a full set of the conflicts (because I don't have a good
 automatic trace for 'why X is unresolvable' - its nontrivial).

 However right now:
 openstack-doc-tools>=0.23
 and
 oslo.config>=1.11.0


 These are not the only culprits, grepping requirements in repositories in
 the openstack namespace, I see:


The key issue will be the intersection with entries in global-requirements.txt:
 ceilometermiddleware/requirements.txt:oslo.config>=1.9.3,<1.10.0  #

is the only one - the rest aren't in global-requirements at all.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Juno Multinode installation error

2015-05-08 Thread yatin kumbhare
Hi,

Recently I faced this issue.

Upgrade the amqp lib on all nodes and it should work.

Regards,
Yatin

On Tue, May 5, 2015 at 5:37 PM, Ajaya Agrawal ajku@gmail.com wrote:

 Hi,

 A better place to ask this question would be ask.openstack.org .

 Cheers,
 Ajaya

 On Tue, May 5, 2015 at 4:56 PM, Abhishek Talwar abhishek.tal...@tcs.com
 wrote:

 Hi Folks,

 I am trying to set up a multinode OpenStack. When I boot an instance, it is
 successfully created but then goes into ERROR state. I have checked the
 logs in /var/log/nova/nova-scheduler.log and it gives an OperationalError,
 "database is locked". Moreover, when I check the database there are no
 tables getting created in the Nova database, while Neutron and the others
 have their tables.

 The logs are following :

 2015-05-05 14:35:13.158 18551 TRACE nova.openstack.common.periodic_task
 OperationalError: (OperationalError) database is locked u'UPDATE
 reservations SET deleted_at=?, deleted=id, updated_at=updated_at WHERE
 reservations.deleted = ? AND reservations.expire < ?' ('2015-05-05
 09:05:13.150105', 0, '2015-05-05 09:05:13.138314')
 2015-05-05 14:35:13.158 18551 TRACE nova.openstack.common.periodic_task
 2015-05-05 14:37:47.972 18551 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connecting to AMQP server on controller:5672
 2015-05-05 14:37:47.991 18551 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connected to AMQP server on controller:5672
 2015-05-05 15:10:59.535 18551 INFO nova.openstack.common.service [-]
 Caught SIGTERM, exiting
 2015-05-05 15:11:01.506 19260 AUDIT nova.service [-] Starting scheduler
 node (version 2014.2.2)
 2015-05-05 15:11:03.691 19260 INFO oslo.messaging._drivers.impl_rabbit
 [req-7b53c22a-2161-4f9f-9942-c39ad5b35ca0 ] Connecting to AMQP server on
 controller:5672
 2015-05-05 15:11:03.747 19260 INFO oslo.messaging._drivers.impl_rabbit
 [req-7b53c22a-2161-4f9f-9942-c39ad5b35ca0 ] Connected to AMQP server on
 controller:5672
 2015-05-05 15:21:55.601 19260 INFO nova.openstack.common.service [-]
 Caught SIGTERM, exiting
 2015-05-05 15:21:56.568 19542 AUDIT nova.service [-] Starting scheduler
 node (version 2014.2.2)
 2015-05-05 15:21:57.504 19542 INFO oslo.messaging._drivers.impl_rabbit
 [req-ee9a2d39-678d-48c2-b490-2894cb6370b5 ] Connecting to AMQP server on
 controller:5672
 2015-05-05 15:21:57.514 19542 INFO oslo.messaging._drivers.impl_rabbit
 [req-ee9a2d39-678d-48c2-b490-2894cb6370b5 ] Connected to AMQP server on
 controller:5672
 2015-05-05 15:32:39.316 19542 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connecting to AMQP server on controller:5672
 2015-05-05 15:32:39.343 19542 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connected to AMQP server on controller:5672
 2015-05-05 15:38:21.280 19542 INFO nova.openstack.common.service [-]
 Caught SIGTERM, exiting
 2015-05-05 15:38:22.434 19954 AUDIT nova.service [-] Starting scheduler
 node (version 2014.2.2)
 2015-05-05 15:38:23.173 19954 INFO oslo.messaging._drivers.impl_rabbit
 [req-b65ec7c6-6bbd-4e13-9694-da927c9cf337 ] Connecting to AMQP server on
 controller:5672
 2015-05-05 15:38:23.248 19954 INFO oslo.messaging._drivers.impl_rabbit
 [req-b65ec7c6-6bbd-4e13-9694-da927c9cf337 ] Connected to AMQP server on
 controller:5672
 2015-05-05 15:39:46.468 19954 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connecting to AMQP server on controller:5672
 2015-05-05 15:39:46.484 19954 INFO oslo.messaging._drivers.impl_rabbit
 [-] Connected to AMQP server on controller:5672







 The configuration for nova.conf is :

 [DEFAULT]
 dhcpbridge_flagfile=/etc/nova/nova.conf
 dhcpbridge=/usr/bin/nova-dhcpbridge
 logdir=/var/log/nova
 state_path=/var/lib/nova
 lock_path=/var/lock/nova
 force_dhcp_release=True
 libvirt_use_virtio_for_bridges=True
 verbose=True
 ec2_private_dns_show_ip=True
 api_paste_config=/etc/nova/api-paste.ini
 enabled_apis=ec2,osapi_compute,metadata
 scheduler_default_filters=AllHostsFilter
 verbose = True
 connection = mysql://nova:NOVA_DBPASS@controller/nova
 rpc_backend = rabbit
 rabbit_host = controller
 rabbit_password = RABBIT_PASS

 auth_strategy = keystone

 my_ip = 10.10.10.10

 vncserver_listen = 10.10.10.10
 vncserver_proxyclient_address = 10.10.10.10

 network_api_class = nova.network.neutronv2.api.API
 security_group_api = neutron
 linuxnet_interface_driver =
 nova.network.linux_net.LinuxOVSInterfaceDriver
 firewall_driver = nova.virt.firewall.NoopFirewallDriver

 [keystone_authtoken]
 auth_uri = http://controller:5000/v2.0
 identity_uri = http://controller:35357
 admin_tenant_name = service
 admin_user = nova
 admin_password = NOVA_PASS

 [glance]
 host = controller

 [neutron]
 url = http://controller:9696
 auth_strategy = keystone
 admin_auth_url = http://controller:35357/v2.0
 admin_tenant_name = service
 admin_username = neutron
 admin_password = NEUTRON_PASS


 The configuration for neutron is:

 [DEFAULT]
 verbose = True
 lock_path = $state_path/lock
 core_plugin = ml2
 service_plugins = router
 allow_overlapping_ips = True

 auth_strategy = 

Re: [openstack-dev] [Neutron][QoS] Neutron QoS (Quality Of Service) update

2015-05-08 Thread Salvatore Orlando
On 7 May 2015 at 10:32, Miguel Ángel Ajo majop...@redhat.com wrote:

 Gal, thank you very much for the update to the list, I believe it’s very
 helpful,
 I’ll add some inline notes.

 On Thursday, 7 de May de 2015 at 8:51, Gal Sagie wrote:

 Hello All,

 I think that the Neutron QoS effort is progressing to a critical point, and
 I asked Miguel if I could post an update on our progress.

 First, I would like to thank Sean and Miguel for running this effort and
 everyone else that is involved; I personally think it's on the right track.
 However, I would like to see more people involved, especially more
 Neutron-experienced members, because I believe we want to make the right
 decisions and learn from past mistakes when making the API design.

 Feel free to update in the meeting wiki [1], and the spec review [2]

 *Topics*

 *API microversioning spec implications [3]*
 QoS can benefit from this design; however, some concerns were raised that
 this will only be available at mid-L cycle.
 I think a better view is needed of how this aligns with the new QoS design,
 and any feedback/recommendation is useful.

 I guess a strategy here could be: go on with an extension, and translate
 that into an experimental API once microversioning is ready; then after one
 cycle we could "graduate" it to get versioned.


Indeed. I think the guy who wrote the spec mentioned how to handle
extensions which are in the pipeline already, and has a kind word for QoS
in particular.


 *Changes to the QoS API spec: scoping into bandwidth limiting*
 At this point the concentration is on the API and implementation
 of bandwidth limiting.

 However it is important to keep the design easily extensible for some next
 steps
 like traffic classification and marking
 *.*

 This is important for architecting your data model, RPC interfaces, and to
some extent even the control plane.
From a user perspective (and hence API design) the question to ask would be
whether a generic QoS API (for instance based on generic QoS policies which
might have a different nature) is better than explicit ones - where you
would have distinct URIs for rate limiting, traffic shaping, marking, etc.

I am not sure of what could be the right answer here. I tend to think
distinct URIs are more immediate to use. On the other hand users will have
to learn more APIs, but even with a generic framework users will have to
learn how to create policies for different types of QoS policies.


 *Changes to the QoS API spec: modeling of rules (class hierarchy)
 (Guarantee split out)*
 There is a QoSPolicy which is composed of different QoSRules, there is
 a discussion of splitting the rules into different types like
 QoSTypeRule.
 (This is in order to support easy extension of this model by adding new types
 of rules which extend the possible parameters)

 Plugins can then separate optional aspects into separate rules.
 Any feedback on this approach is appreciated.
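
[Editorial illustration of the rule-type-per-class modelling under
discussion; plain Python objects, not the actual Neutron data model or API.]

    class QoSRule(object):
        """Common base: every rule has a type name and belongs to a policy."""
        type = None


    class QoSBandwidthLimitRule(QoSRule):
        type = 'bandwidth_limit'

        def __init__(self, max_kbps, max_burst_kbps=0):
            self.max_kbps = max_kbps
            self.max_burst_kbps = max_burst_kbps


    class QoSDscpMarkingRule(QoSRule):
        type = 'dscp_marking'

        def __init__(self, dscp_mark):
            self.dscp_mark = dscp_mark


    class QoSPolicy(object):
        """A policy is just a named collection of typed rules."""

        def __init__(self, name, rules=None):
            self.name = name
            self.rules = list(rules or [])


    # New rule types extend the hierarchy without touching QoSPolicy, which is
    # what makes per-type URIs (e.g. /qos/<type>-rules/...) straightforward.
    policy = QoSPolicy('gold', [QoSBandwidthLimitRule(max_kbps=10000)])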

 *Discuss multiple API end points (per rule type) vs single*


 Here the topic name was incorrect: where I said API endpoints, we meant
 URLs or REST resources. (Thanks Irena for the correction.)


So probably my previous comment applies here as well.



 In summary this means that in the above model, do we want to support
 /v1/qosrule/.. or /v1/qostyperule/ APIs?
 I think the consensus right now is that the latter is more flexible.

 Miguel is also checking the possibility of using something like:
 /v1/qosrule/type/... kind of parsing
 Feedback is desired here too :)

 *Traffic Classification considerations*
 The idea right now is to extract the TC classification into another data
 model and attach it to a rule; that way there is no need to repeat the same
 filters for the same kind of traffic.


Didn't you say you were going to focus on rate limiting? ;)


 Of course we need to consider here what it means to update a classifier
 and not to introduce too many dependencies

 About this, the intention is not to fully model this, or to include it in
 the data model now,
 but to try to see how we could do it in future iterations and see if it fits
 the current data model
 and APIs we’re proposing.


Can classifier management be considered an admin mgmt feature like instance
flavour?




 *The ingress vs egress differences and issues*
 Egress bandwidth limiting is much more useful and better supported.
 There is still doubt about support for ingress bandwidth limiting in OVS;
 if anyone knows whether ingress QoS is supported in OVS, we want your
 feedback :)

  I do not think so, but don't take my word for it.
You can ping somebody in #openvswitch or post to ovs-disc...@openvswitch.org


 (For example implementing OF1.3 Metering spec)

 Thanks all (Miguel, Sean or anyone else, please update this if i forgot
 anything)

 [1] https://wiki.openstack.org/wiki/Meetings/QoS
 [2] https://review.openstack.org/#/c/88599/
 [3] https://review.openstack.org/#/c/136760/

 __
 

Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Sean Dague
I'm slightly confused how we got there, because we do try to install
everything all at once in the test jobs -
http://logs.openstack.org/83/181083/1/check/check-requirements-integration-dsvm/4effcf7/console.html#_2015-05-07_17_49_26_699

And it seemed to work, you can find similar lines in previous changes as
well. That was specifically added as a check for these kinds of issues.
Is this a race in the resolution?

On 05/08/2015 04:02 AM, Robert Collins wrote:
 I don't know if they are *intended* to be, but right now there is no
 set of versions that can be co-installed, of everything listed in
 global requirements.
 
 I don't have a full set of the conflicts (because I don't have a good
 automatic trace for 'why X is unresolvable' - its nontrivial).
 
 However right now:
 openstack-doc-tools>=0.23
 and
 oslo.config>=1.11.0
 
 conflict because openstack-doc-tools has no releases >=0.23
 compatible with oslo.config>=1.11.0:
 e.g.
 Cached requires for ('openstack-doc-tools', (), Version('0.26.0'))
 (('argparse', (), SpecifierSet('')), ('oslo.config', (),
 SpecifierSet('<1.10.0,>=1.9.3')), ('pbr', (),
 SpecifierSet('!=0.7,<1.0,>=0.6')), ('iso8601', (),
 SpecifierSet('>=0.1.9')), ('lxml', (), SpecifierSet('>=2.3')),
 ('demjson', (), SpecifierSet('')), ('babel', (),
 SpecifierSet('>=1.3')), ('sphinx', (),
 SpecifierSet('!=1.2.0,!=1.3b1,<1.3,>=1.1.2')))
 
 (this is output from the debug trace in my pip resolver branch).
 
 Now, this is probably never showing up in the gate today, but
 something as simple as:
 pip install oslo.config
 pip install 'openstack-doc-tools>=0.23'
 will show the issue:
 $ pip install oslo.config
 Downloading/unpacking oslo.config
   Downloading oslo.config-1.11.0-py2.py3-none-any.whl (67kB): 67kB downloaded
 Requirement already satisfied (use --upgrade to upgrade): six>=1.9.0
 in /home/robertc/.virtualenvs/pip-test/lib/python2.7/site-packages
 (from oslo.config)
 Downloading/unpacking pbr>=0.6,!=0.7,<1.0 (from oslo.config)
   Downloading pbr-0.11.0-py2.py3-none-any.whl (78kB): 78kB downloaded
 Downloading/unpacking stevedore>=1.3.0 (from oslo.config)
   Downloading stevedore-1.4.0-py2.py3-none-any.whl
 Downloading/unpacking netaddr>=0.7.12 (from oslo.config)
   Downloading netaddr-0.7.14-py2.py3-none-any.whl (1.5MB): 1.5MB downloaded
 Requirement already satisfied (use --upgrade to upgrade): argparse in
 /usr/lib/python2.7 (from oslo.config)
 Requirement already satisfied (use --upgrade to upgrade): pip in
 /home/robertc/.virtualenvs/pip-test/lib/python2.7/site-packages (from
 pbr>=0.6,!=0.7,<1.0->oslo.config)
 Installing collected packages: oslo.config, pbr, stevedore, netaddr
 Successfully installed oslo.config pbr stevedore netaddr
 Cleaning up...
 $ pip install 'openstack-doc-tools>=0.23'
 Downloading/unpacking openstack-doc-tools>=0.23
   Downloading openstack_doc_tools-0.26.0-py2.py3-none-any.whl (180kB):
 180kB downloaded
 Downloading/unpacking iso8601>=0.1.9 (from openstack-doc-tools>=0.23)
   Downloading iso8601-0.1.10.tar.gz
   Running setup.py
 (path:/home/robertc/.virtualenvs/pip-test/build/iso8601/setup.py)
 egg_info for package iso8601
   Downloading/unpacking oslo.config>=1.9.3,<1.10.0 (from
 openstack-doc-tools>=0.23)
   Downloading oslo.config-1.9.3-py2.py3-none-any.whl (67kB): 67kB downloaded
  ^^^
 
 
 So this will downgrade oslo.config to meet the requirements - and that
 could of course be very very surprising to the gate ;)
 
 I don't have a fully formed view yet on complete co-installability in
 our global requirements. But without that it may be hard to calculate
 a working set that we can pin (if we go all-pinned), and its certainly
 harder to write tooling to do any sort of analysis because we have to
 start partitioning it into sets that work together, and those that
 don't.
 
 -Rob
 
 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Andreas Jaeger

On 05/08/2015 10:12 AM, Andreas Jaeger wrote:

On 05/08/2015 10:02 AM, Robert Collins wrote:

I don't know if they are *intended* to be, but right now there is no
set of versions that can be co-installed, of everything listed in
global requirements.

I don't have a full set of the conflicts (because I don't have a good
automatic trace for 'why X is unresolvable' - its nontrivial).

However right now:
openstack-doc-tools>=0.23
and
oslo.config>=1.11.0


We haven't yet imported the new requirements for oslo.config and
released a new version of openstack-doc-tools. I'll take care of this,


Fixed now - openstack-doc-tools 0.27 got released,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-08 Thread Li Tianqing






--

Best
Li Tianqing



At 2015-05-08 15:45:27, Nikhil Manchanda nik...@manchanda.me wrote:

Comments and answers inline.

Li Tianqing writes:

 [...]

1) Why do we put the Trove VM into the user's tenant, not Trove's
tenant? The user can log in on that VM, and that VM must connect to
RabbitMQ. It is quite insecure.
What about putting the VM into the Trove tenant?

While the default configuration of Trove in devstack puts Trove guest
VMs into the users' respective tenants, it's possible to configure Trove
to create VMs in a single Trove tenant. You would do this by
overriding the default novaclient class in Trove's remote.py with one
that creates all Trove VMs in a particular tenant whose user credentials
you will need to supply. In fact, most production instances of Trove do

something like this.


My point is: why do we not do this upstream, given that most production
deployments do this? And if you do this, you will find that there is a lot
of work that needs to be done. The community applies the laziest
implementation.



2) Why is there no Trove mgmt CLI, while the mgmt API is in the code?
Did it disappear forever?

The reason for this is because the old legacy Trove client was rewritten
to be in line with the rest of the openstack clients. The new client
has bindings for the management API, but we didn't complete the work on
writing the shell pieces for it. There is currently an effort to
support Trove calls in the openstackclient, and we're looking to
support the management client calls as part of this as well. If this is
something that you're passionate about, we sure could use help landing

this in Liberty.


I do not see any BP about this.



3) The trove-guestagent is in the VM; it is connected to the taskmanager
via RabbitMQ. We designed it that way. But is there some best practice
for doing this? How do we make the VM reachable on both the VM network
and the management network?

Most deployments of Trove that I am familiar with set up a separate
RabbitMQ server in cloud that is used by Trove. It is not recommended to
use the same infrastructure RabbitMQ server for Trove for security
reasons. Also most deployments of Trove set up a private (neutron)
network that the RabbitMQ server and guests are connected to, and all

RPC messages are sent over this network.


But how are the billing notifications of Trove sent to the billing server?
The billing server is definitely in the management network.
The root of this problem is that you should make one service VM that can
serve the user and can be connected to your management network.
This deployment cannot be used in production.
This deployment is not proper; it is just a lazy implementation too.



Hope this helps,

Thanks,
Nikhil

 [...]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
On 8 May 2015 at 22:54, Sean Dague s...@dague.net wrote:
 I'm slightly confused how we got there, because we do try to install
 everything all at once in the test jobs -
 http://logs.openstack.org/83/181083/1/check/check-requirements-integration-dsvm/4effcf7/console.html#_2015-05-07_17_49_26_699

 And it seemed to work, you can find similar lines in previous changes as
 well. That was specifically added as a check for these kinds of issues.
 Is this a race in the resolution?

What resolution :).

So what happens with pip install -r
/opt/stack/new/requirements/global-requirements.txt is that the
constraints in that file are all immediately put into pip's state,
including oslo.config >= 1.11.0, and then all other constraints that
refer to oslo.config are simply ignored. This is 1b (and 2a) on
https://github.com/pypa/pip/issues/988.
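
As a toy illustration of that difference (this uses the packaging library with a
hypothetical conflicting transitive pin, not pip's internals): keeping only the
first constraint seen is very different from honouring the intersection of all
of them.

    from packaging.requirements import Requirement

    seen = [Requirement('oslo.config>=1.11.0'),        # global-requirements pin
            Requirement('oslo.config>=1.9.3,<1.10')]   # hypothetical transitive pin

    first_wins = seen[0].specifier                      # roughly today's pip ("1b")
    combined = seen[0].specifier & seen[1].specifier    # what a resolver must honour

    for candidate in ('1.9.3', '1.11.0'):
        print(candidate,
              'first-wins:', first_wins.contains(candidate),
              'resolver:', combined.contains(candidate))
    # No version can satisfy the combined set, so a resolver reports a conflict
    # where today's pip happily installs 1.11.0.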

IOW we haven't been testing what we thought we were testing.
What we've been testing is that 'python setup.py install X' for X in
global-requirements.txt works, which sadly doesn't tell us a lot at
all.

So, as I have a working (but unpolished) resolver, when I try to do
the same thing, it chews away at the problem and concludes that no, it
can't do it - because it's no longer ignoring the additional
constraints.

To get out of the hole, we might consider using pip-compile now as a
warning job - if it can succeed we'll be able to be reasonably
confident that pip itself will succeed once the resolver is merged.

The resolver I have doesn't preserve the '1b' feature at all at this
point, and we're going to need to find a way to separate out 'I want
X' from 'I want X and I know better than you', which will let folk get
into tasty tasty trouble (like we're in now).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-08 Thread Erik Moe

Hi,

I have not been able to work on upstreaming this for some time now. But 
now it looks like I may make another attempt. Who else is interested in this, 
as a user or to help contribute? If we get some traction we can have an IRC 
meeting sometime next week.

Thanks,
Erik


From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: den 4 maj 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see 
any work on VLAN-aware VMs for Liberty.  There is a blueprint[1] and specs[2] 
that were deferred from Kilo - is this something anyone is looking at as a 
Liberty candidate?  I looked but didn't find any recent work - is there 
somewhere else this work is happening?  No-one has listed it on the liberty 
summit topics[3] etherpad, which could mean it's uncontroversial, but given 
history on this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Rich Megginson

On 05/08/2015 07:17 AM, Doug Hellmann wrote:

Excerpts from Ben Nemec's message of 2015-05-07 15:57:48 -0500:

I don't know much about the puppet project organization so I won't
comment on whether 1 or 2 is better, but a big +1 to having a common
way to configure Oslo opts.  Consistency of those options across all
services is one of the big reasons we pushed so hard for the libraries
to own their option definitions, so this would align well with the way
the projects are consumed.

- -Ben

Well said, Ben.

Doug


On 05/07/2015 03:19 PM, Emilien Macchi wrote:

Hi,

I think one of the biggest challenges working on Puppet OpenStack
modules is to keep code consistency across all our modules (~20).
If you've read the code, you'll see there is some differences
between RabbitMQ configuration/parameters in some modules and this
is because we did not have the right tools to make it properly. A
lot of the duplicated code we have comes from Oslo libraries
configuration.

Now, I come up with an idea and two proposals.

Idea 

We could have some defined types to configure oslo sections in
OpenStack configuration files.

Something like: define oslo::messaging::rabbitmq( $user, $password
) { ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
{'value' => $user}) ... }

Usage in puppet-nova: ::oslo::messaging::rabbitmq{'nova_config':
user => 'nova', password => 'secrete', }

And patch all our modules to consume these defines and finally
have consistency at the way we configure Oslo projects (messaging,
logging, etc).

Proposals =

#1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
used only to configure actual Oslo libraries when we deploy
OpenStack. To me, this solution is really consistent with how
OpenStack works today and is scalable as soon we contribute Oslo
configuration changes in this module.


+1 - For the Keystone authentication options, I think it is important to 
encapsulate this and hide the implementation from the other services as 
much as possible, to make it easier to use all of the different types of 
authentication supported by Keystone now and in the future.  I would 
think that something similar applies to the configuration of other 
OpenStack services.




#2 Using puppet-openstacklib ... and having
openstacklib::oslo::messaging::(...) A good thing is our modules
already use openstacklib. But openstacklib does not configure
OpenStack now, it creates some common defines  classes that are
consumed in other modules.


I personally prefer #1 because: * it's consistent with OpenStack. *
I don't want openstacklib the repo where we put everything common.
We have to differentiate *common-in-OpenStack* and
*common-in-our-modules*. I think openstacklib should continue to be
used for common things in our modules, like providers, wrappers,
database management, etc. But to configure common OpenStack bits
(aka Oslo©), we might want to create puppet-oslo.

As usual, any thoughts are welcome,

Best,



__





OpenStack Development Mailing List (not for usage questions)

Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-08 Thread Monty Taylor
On 05/08/2015 03:45 AM, Nikhil Manchanda wrote:
 
 Comments and answers inline.
 
 Li Tianqing writes:
 
 [...]
 
1) Why do we put the Trove VM into the user's tenant, not the Trove tenant?
The user can log in to that VM, and that VM must connect to RabbitMQ, which is
quite insecure. What about putting the VM into the Trove tenant?
 
 While the default configuration of Trove in devstack puts Trove guest
 VMs into the users' respective tenants, it's possible to configure Trove
 to create VMs in a single Trove tenant. You would do this by
 overriding the default novaclient class in Trove's remote.py with one
 that creates all Trove VMs in a particular tenant whose user credentials
 you will need to supply. In fact, most production instances of Trove do
 something like this.

Might I suggest that if this is how people regularly deploy, that such a
class be included in trove proper, and that a config option be provided
like use_tenant='name_of_tenant_to_use' that would trigger the use of
the overridden novaclient class?

I think asking an operator as a standard practice to override code in
remote.py is a bad pattern.
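
A rough sketch of what such an in-tree factory could look like (the option
names below are illustrative, not Trove's actual configuration):

    from novaclient import client as nova_client
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('use_tenant'),                    # the switch Monty suggests
        cfg.StrOpt('single_tenant_user'),
        cfg.StrOpt('single_tenant_pass', secret=True),
        cfg.StrOpt('single_tenant_auth_url'),
    ])


    def single_tenant_nova_client(context):
        # 'context' is deliberately unused: every guest VM is built in the
        # CONF.use_tenant tenant, using service credentials from the config file.
        return nova_client.Client('2',
                                  CONF.single_tenant_user,
                                  CONF.single_tenant_pass,
                                  CONF.use_tenant,
                                  auth_url=CONF.single_tenant_auth_url)

With something like that in-tree, a non-empty use_tenant could simply select
this factory instead of the per-tenant default, and operators would no longer
need to carry a local remote.py patch.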

2) Why is there no Trove mgmt CLI, even though the mgmt API is in the code?
Has it disappeared forever?
 
 The reason for this is because the old legacy Trove client was rewritten
 to be in line with the rest of the openstack clients. The new client
 has bindings for the management API, but we didn't complete the work on
 writing the shell pieces for it. There is currently an effort to
 support Trove calls in the openstackclient, and we're looking to
 support the management client calls as part of this as well. If this is
 something that you're passionate about, we sure could use help landing
 this in Liberty.
 
3)  The trove-guest-agent runs inside the VM and is connected to the taskmanager
by RabbitMQ. That is how we designed it. But is there an established practice to do this?
 How should the VM be connected to both the VM network and the management
 network?
 
 Most deployments of Trove that I am familiar with set up a separate
 RabbitMQ server in cloud that is used by Trove. It is not recommended to
 use the same infrastructure RabbitMQ server for Trove for security
 reasons. Also most deployments of Trove set up a private (neutron)
 network that the RabbitMQ server and guests are connected to, and all
 RPC messages are sent over this network.

This sounds like a great chunk of information to potentially go into
deployer docs.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-08 Thread Dan Prince
On Thu, 2015-05-07 at 09:10 -0400, Jay Dobies wrote:
 
 On 05/07/2015 06:01 AM, Giulio Fidente wrote:
  On 05/07/2015 11:15 AM, marios wrote:
  On 07/05/15 05:32, Dan Prince wrote:
 
  [..]
 
  Something like this:
 
  https://review.openstack.org/#/c/180833/
 
  +1 I like this as an idea. Given we've already got quite a few reviews
  in flight making changes to overcloud_controller.pp (we're still working
  out how to, and enabling services) I'd be happier to let those land and
  have the tidy up once it settles (early next week at the latest) -
  especially since there's some working out+refactoring to do still,
 
  +1 on not block ongoing work
 
  as of today a split would cause the two .pp to have a lot of duplicated
  data, making them not better than one with the ifs IMHO
 
 I'm with Giulio here. I'm not as strong on my puppet as everyone else, 
 but I don't see the current approach as duplication, it's just passing 
 in different configurations.

What about this isn't duplication?


if $enable_pacemaker {

  class { 'neutron::agents::metering':
    manage_service => false,
    enabled        => false,
  }

  pacemaker::resource::service { $::neutron::params::metering_agent_service:
    clone   => true,
    require => Class['::neutron::agents::metering'],
  }

} else {

  include ::neutron::agents::metering

}

---

We'll have this for basically all the pacemaker enabled services. It is
already messy. It will get worse.

Again, One of the goals of our current TripleO puppet manifests
architecture was that our role scripts would be just 'include'
statements for the related puppet classes. Using Hiera as much as
possible, etc. Having two controller templates which have the same
include statements isn't ideal, but I think it is better than the mess
we've got now.

Actually, seeing the pacemaker stuff implemented, we have a lot of boilerplate
now going into the templates. All the conditional
manage_service => false, enabled => false settings could simply be stashed into a
new Hiera data file that only gets included w/ that implementation. This
makes the pacemaker version cleaner... something you just can't do with
the current implementation.

Unfortunately I don't know of a way to do any of this without the
resource registry. If that is a concern, it would for example be quite
easy for someone developing a product to simply make this resource
registry the default (thus leaving pacemaker on, always). No tuskar or
template changes would be required for this.

Dan

 
  we should probably move out of the existing .pp the duplicated parts
  first (see my other email on the matter)
 
 My bigger concern is Tuskar. It has the ability to set parameters. It's 
 hasn't moved to a model where you're configuring the overcloud through 
 selecting entries in the resource registry. I can see that making sense 
 in the future, but that's going to require API changes.
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-08 Thread Ronald Bradford
Has anybody considered the native python connector for MySQL that supports
Python 3.

Here are the Ubuntu Packages.


$ apt-get show python-mysql.connector
E: Invalid operation show
rbradfor@rubble:~$ apt-cache show python-mysql.connector
Package: python-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 386
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Replaces: mysql-utilities (<< 1.3.5-2)
Depends: python:any (>= 2.7.5-5~), python:any (<< 2.8)
Breaks: mysql-utilities (<< 1.3.5-2)
Filename:
pool/universe/m/mysql-connector-python/python-mysql.connector_1.1.6-1_all.deb
Size: 67196
MD5sum: 22b2cb35cf8b14ac0bf4493b0d676adb
SHA1: de626403e1b14f617e9acb0a6934f044fae061c7
SHA256: 99e34f67d085c28b49eb8145c281deaa6d2b2a48d741e6831e149510087aab94
Description-en: pure Python implementation of MySQL Client/Server protocol
 MySQL driver written in Python which does not depend on MySQL C client
 libraries and implements the DB API v2.0 specification (PEP-249).
 .
 MySQL Connector/Python is implementing the MySQL Client/Server protocol
 completely in Python. This means you don't have to compile anything or
MySQL
 (client library) doesn't even have to be installed on the machine.
Description-md5: bb7e2eba7769d706d44e0ef91171b4ed
Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu

$ apt-cache show python3-mysql.connector
Package: python3-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 385
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Depends: python3:any (>= 3.3.2-2~)
Filename:
pool/universe/m/mysql-connector-python/python3-mysql.connector_1.1.6-1_all.deb
Size: 64870
MD5sum: 461208ed1b89d516d6f6ce43c003a173
SHA1: bd439c4057824178490b402ad6c84067e1e2884e
SHA256: 487af52b98bc5f048faf4dc73420eff20b75a150e1f92c82de2ecdd4671659ae
Description-en: pure Python implementation of MySQL Client/Server protocol
(Python3)
 MySQL driver written in Python which does not depend on MySQL C client
 libraries and implements the DB API v2.0 specification (PEP-249).
 .
 MySQL Connector/Python is implementing the MySQL Client/Server protocol
 completely in Python. This means you don't have to compile anything or
MySQL
 (client library) doesn't even have to be installed on the machine.
 .
 This package contains the Python 3 version of mysql.connector.
Description-md5: 4bca3815f5856ddf4a629b418ec76c8f
Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
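
For what it's worth, a rough sketch (placeholder URLs and credentials) of how
the candidate drivers mostly differ by SQLAlchemy dialect name, i.e. by the
connection URL an oslo.db-based service would use; mysqlclient keeps the
MySQLdb module name, so it stays on the mysql+mysqldb dialect:

    from sqlalchemy.engine import url as sa_url

    urls = [
        'mysql+mysqldb://user:secret@127.0.0.1/cinder',         # mysql-python / mysqlclient
        'mysql+pymysql://user:secret@127.0.0.1/cinder',         # PyMySQL
        'mysql+mysqlconnector://user:secret@127.0.0.1/cinder',  # MySQL Connector/Python
    ]
    for u in urls:
        parsed = sa_url.make_url(u)  # parses only; nothing is imported or connected
        print(parsed.drivername, '->', parsed.host, parsed.database)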


Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn: http://www.linkedin.com/in/ronaldbradford
Twitter: @RonaldBradford http://twitter.com/ronaldbradford
Skype: RonaldBradford
GTalk:  Ronald.Bradford



On Thu, May 7, 2015 at 9:39 PM, Mike Bayer mba...@redhat.com wrote:



 On 5/7/15 5:32 PM, Thomas Goirand wrote:

 If there are really fixes and features we

 need in Py2K then of course we have to either convince MySQLdb to merge
 them or switch to mysqlclient.


 Given the no reply in 6 months I think that's enough to say it:
 mysql-python is a dangerous package with a non-responsive upstream. That's
 always bad, and IMO, enough to try to get rid of it. If you think switching
 to PyMYSQL is effortless, and the best way forward, then let's do that ASAP!


 haha - id rather have drop eventlet + mysqlclient :)

 as far as this thread, where this has been heading is that django has
 already been recommending mysqlclient and it's become apparent just what a
 barrage of emails and messages have been sent Andy Dustman's way, with no
 response.I agree this is troubling behavior, and I've alerted people at
 RH internal that we need to start thinking about this package switch.My
 original issue was that for Fedora etc., changing it in this way is
 challenging, and from my discussions with packaging people, this is
 actually correct - this isn't an easy way to do it for them and there have
 been many emails as a result.  My other issue is the SQLAlchemy testing
 issue - I'd essentially have to just stop testing mysql-python and switch
 to mysqlclient entirely, which means I need to revise all my docs and get
 all my users to switch also when the SQLAlchemy MySQLdb dialect eventually
 diverges from mysql-python 1.2.5, hence the whole thing is in a
 not-minor-enough way my problem as well.A simple module name change for
 mysqlclient, then there's no problem.   But there you go - assuming
 continued crickets from AD, and seeing that people continue find it
 important to appease projects like Trac that IMO quite amateurishly
 hardcode import MySQLdb, I don't see much other option.


 

Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-08 Thread Joshua Harlow

Will release that when I get in to work this morning,

Needed to sleep ;)

-Josh

Kerr, Andrew wrote:

The problem is in the version of taskflow that is downloaded from pypi
by devstack. You will need to wait until a version newer than 0.10.0 is
available [1]

[1] https://pypi.python.org/pypi/taskflow/

Andrew Kerr
OpenStack QA
Cloud Solutions Group
NetApp

From: Bharat Kumar bharat.kobag...@redhat.com
mailto:bharat.kobag...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Date: Friday, May 8, 2015 at 7:37 AM
To: openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
mailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with
NetApp NFS drivers

GlusterFS CI job is still failing with the same issue.

I gave a couple of rechecks on [1] after the
https://review.openstack.org/#/c/181288/ patch got merged.

But the GlusterFS CI job is still failing with the error below [2]:
ObjectDereferencedError: Can't emit change event for attribute
'Volume.provider_location' - parent object of type Volume has been
garbage collected.

I also found the same behaviour with the NetApp CI.


[1] https://review.openstack.org/#/c/165424/
[2]
http://logs.openstack.org/24/165424/6/check/check-tempest-dsvm-full-glusterfs-nv/f386477/logs/screen-c-vol.txt.gz


On 05/08/2015 10:21 AM, Joshua Harlow wrote:

Alright, it was what I had a hunch it was: a small bug in the new
algorithm that makes the storage layer
copy-original, mutate-copy, save-copy, update-original (vs
update-original, save-original) more reliable.
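
In plain Python terms (generic names, not taskflow's internals), the two update
strategies being contrasted look roughly like this:

    import copy

    def save(record):
        # stand-in for persisting to the storage backend; this call may fail
        print('saved', record)

    def update_original_then_save(original, changes):
        original.update(changes)   # local object mutated first...
        save(original)             # ...so a failed save leaves it inconsistent

    def copy_mutate_save_then_update(original, changes):
        candidate = copy.copy(original)
        candidate.update(changes)
        save(candidate)            # only if the save succeeds...
        original.update(changes)   # ...is the local object updated to match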

https://bugs.launchpad.net/taskflow/+bug/1452978 opened and a one line
fix made @ https://review.openstack.org/#/c/181288/ to stop trying to
copy task results (which was activating logic that must have caused the
reference to drop out of existence and therefore the issue noted below).

Will get that released in 0.10.1 once it flushes through the pipeline.

Thanks Alex for helping double check; if others want to check too,
that'd be nice, so we can make sure that's the root cause (overzealous usage
of copy.copy, ha).

Overall I'd still *highly* recommend that the following still happen:

 One way to get around whatever the issue is would be to change the
 drivers to not update the object directly as it is not needed. But
 this should not fail. Perhaps a more proper fix is for the volume
 manager to not pass around sqlalchemy objects.

But that can be a later tweak that cinder does; using any taskflow
engine that isn't the greenthreaded/threaded/serial engine will
require results to be serializable, and therefore copyable, so that
those results can go across IPC or MQ/other boundaries. Sqlalchemy
objects won't fit either of these cases (obviously).
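
A minimal sketch of that idea (illustrative task names, not the cinder code):
tasks hand back plain, serializable data such as a dict, and anything that
needs the ORM row re-fetches it itself.

    from taskflow import engines, task
    from taskflow.patterns import linear_flow

    class CreateVolume(task.Task):
        default_provides = 'volume_ref'

        def execute(self, volume_id):
            # imagine a DB call here; return plain data, not a live ORM object,
            # so the result can be copied/serialized by any engine type
            return {'id': volume_id, 'provider_location': 'nfs-host:/export1'}

    class ExportVolume(task.Task):
        def execute(self, volume_ref):
            print('exporting %(id)s at %(provider_location)s' % volume_ref)

    flow = linear_flow.Flow('create_export').add(CreateVolume(), ExportVolume())
    engines.run(flow, store={'volume_id': 'vol-123'})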

-Josh

Joshua Harlow wrote:

Are we sure this is taskflow? I'm wondering since those errors are more
from task code (which is in cinder) and the following seems to be a
general garbage collection issue (not connected to taskflow?):

'Exception during message handling: Can't emit change event for
attribute 'Volume.provider_location' - parent object of type Volume
has been garbage collected.'''

Or:

'''2015-05-07 22:42:51.142 17040 TRACE oslo_messaging.rpc.dispatcher
ObjectDereferencedError: Can't emit change event for attribute
'Volume.provider_location' - parent object of type Volume has been
garbage collected.'''

Alex Meade wrote:

So it seems that this will break a number of drivers, I see that
glusterfs does the same thing.

On Thu, May 7, 2015 at 10:29 PM, Alex Meade mr.alex.me...@gmail.com
mailto:mr.alex.me...@gmail.com wrote:

It appears that the release of taskflow 0.10.0 exposed an issue in
the NetApp NFS drivers. Something changed that caused the sqlalchemy
Volume object to be garbage collected even though it is passed into
create_volume()

An example error can be found in the c-vol logs here:

http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/


One way to get around whatever the issue is would be to change the
drivers to not update the object directly as it is not needed. But
this should not fail. Perhaps a more proper fix is for the volume
manager to not pass around sqlalchemy objects.


+1



Something changed in taskflow, however, and we should just
understand if that has other impact.


I'd like to understand that also: the only one commit that touched this
stuff is https://github.com/openstack/taskflow/commit/227cf52 (which
basically ensured that a storage object copy is modified, then saved,
then the local object is updated vs updating the local object, and then
saving, which has problems/inconsistencies if the save fails).



-Alex


__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Doug Hellmann
Excerpts from Ben Nemec's message of 2015-05-07 15:57:48 -0500:
 I don't know much about the puppet project organization so I won't
 comment on whether 1 or 2 is better, but a big +1 to having a common
 way to configure Oslo opts.  Consistency of those options across all
 services is one of the big reasons we pushed so hard for the libraries
 to own their option definitions, so this would align well with the way
 the projects are consumed.
 
 - -Ben

Well said, Ben.

Doug

 
 On 05/07/2015 03:19 PM, Emilien Macchi wrote:
  Hi,
  
  I think one of the biggest challenges working on Puppet OpenStack 
  modules is to keep code consistency across all our modules (~20). 
  If you've read the code, you'll see there is some differences
  between RabbitMQ configuration/parameters in some modules and this
  is because we did not have the right tools to make it properly. A
  lot of the duplicated code we have comes from Oslo libraries 
  configuration.
  
  Now, I come up with an idea and two proposals.
  
  Idea 
  
  We could have some defined types to configure oslo sections in
  OpenStack configuration files.
  
  Something like: define oslo::messaging::rabbitmq( $user, $password 
  ) { ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
  {'value' => $user}) ... }
  
  Usage in puppet-nova: ::oslo::messaging::rabbitmq{'nova_config': 
  user => 'nova', password => 'secrete', }
  
  And patch all our modules to consume these defines and finally
  have consistency at the way we configure Oslo projects (messaging,
  logging, etc).
  
  Proposals =
  
  #1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
  oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
  used only to configure actual Oslo libraries when we deploy
  OpenStack. To me, this solution is really consistent with how 
  OpenStack works today and is scalable as soon we contribute Oslo 
  configuration changes in this module.
  
  #2 Using puppet-openstacklib ... and having
  openstacklib::oslo::messaging::(...) A good thing is our modules
  already use openstacklib. But openstacklib does not configure
  OpenStack now, it creates some common defines  classes that are
  consumed in other modules.
  
  
  I personally prefer #1 because: * it's consistent with OpenStack. *
  I don't want openstacklib the repo where we put everything common.
  We have to differentiate *common-in-OpenStack* and
  *common-in-our-modules*. I think openstacklib should continue to be
  used for common things in our modules, like providers, wrappers,
  database management, etc. But to configure common OpenStack bits
  (aka Oslo©), we might want to create puppet-oslo.
  
  As usual, any thoughts are welcome,
  
  Best,
  
  
  
  __
 
 
  
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][clients] - Should we implement project to endpoint group?

2015-05-08 Thread Marcos Fermin Lobo
Hi all,

I would like to know if any of you would be interested in implementing the project-to- 
endpoint-group actions 
(http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-ep-filter-ext.html#project-to-endpoint-group-relationship)
 for the keystone client. Is anyone already working on this?

Cheers,
Marcos.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] proposal to add Chris Dent to Ceilometer core

2015-05-08 Thread gordon chung
hi,

i'd like to nominate Chris Dent to the Ceilometer core team. he has been one of 
the leading reviewers in Ceilometer and gives solid comments. he also has led 
the api effort in Ceilometer and provides insight in specs.

as we did last time, please vote here: https://review.openstack.org/#/c/181394/ 
. if for whatever reason you cannot vote there, please respond to this.

reviews:
https://review.openstack.org/#/q/project:openstack/ceilometer+reviewer:%22Chris+Dent%22,n,z

patches:
https://review.openstack.org/#/q/project:openstack/ceilometer+owner:%22Chris+Dent%22,n,z

cheers,
gord
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is Live-migration not supported in CONF.libvirt.images_type=lvm case?

2015-05-08 Thread Coffman, Joel M.
I think you’re correct: it looks like the change you identified covers only the 
migrate code path but doesn’t address live migration. As identified in the 
comments on the bug report [1], it would be beneficial at least to raise a 
reasonable error message.

I also found an abandoned change for the bug: 
https://review.openstack.org/#/c/80029/

Joel


From: Rui Chen [mailto:chenrui.m...@gmail.com]
Sent: Friday, May 08, 2015 4:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] Is Live-migration not supported in 
CONF.libvirt.images_type=lvm case?

Hi all:

I find that the bug [1] block/live migration doesn't work with LVM as libvirt 
storage is marked as 'Fix released', but I don't think this issue is really 
solved: I checked the live-migration code and don't find any logic for handling 
LVM disks. Please correct me if I'm wrong.

In the bug [1] comments, the only related merged patch is 
https://review.openstack.org/#/c/73387/ ; it covers the 'resize/migrate' code 
path, not live-migration. And I don't think bug [1] is a duplicate of bug [2]; 
they are different use cases, live-migration and migration.

So should we reopen this bug and add some documentation describing that 
live-migration is not supported in the current code?

[1]: https://bugs.launchpad.net/nova/+bug/1282643
[2]: https://bugs.launchpad.net/nova/+bug/1270305
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][Nagios] Configure Nagios to monitor neutron agents

2015-05-08 Thread Leo Y
Hello,

Can anyone direct me to instructions or an example of how to configure Nagios to
monitor the neutron L2 and L3 agents?
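
One possible approach (a sketch only, not an official plugin; credentials,
endpoints and the agent types to watch are placeholders) is a small Nagios
check script that asks the Neutron API which agents are alive:

    #!/usr/bin/env python
    # Nagios exit codes: 0 = OK, 2 = CRITICAL
    import sys

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # 'Open vSwitch agent' is the L2 agent type for ML2/OVS; adjust to e.g.
    # 'Linux bridge agent' depending on your deployment.
    watched = ('Open vSwitch agent', 'L3 agent')
    dead = [a for a in neutron.list_agents()['agents']
            if a['agent_type'] in watched and not a['alive']]

    if dead:
        print('CRITICAL: %d neutron agent(s) down: %s'
              % (len(dead), ', '.join('%(binary)s@%(host)s' % a for a in dead)))
        sys.exit(2)
    print('OK: all monitored neutron agents are alive')
    sys.exit(0)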

Thank you
-- 
Regards,
Leo
-
I enjoy the massacre of ads. This sentence will slaughter ads without a
messy bloodbath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-08 Thread Ronald Bradford
I guess I may have spoken too soon.
https://wiki.openstack.org/wiki/PyMySQL_evaluation states "Oracle refuses
to publish MySQL-connector-Python on Pypi, which is critical to the
Openstack infrastructure."

I am unclear when this statement was made and who is involved in this
discussion.  As I have contacts in the MySQL engineering and Oracle
Corporation product development teams I will endeavor to seek a more
current and definitive response and statement.

Regards

Ronald



On Fri, May 8, 2015 at 10:33 AM, Ronald Bradford m...@ronaldbradford.com
wrote:

 Has anybody considered the native python connector for MySQL that supports
 Python 3.

 Here are the Ubuntu Packages.


 $ apt-get show python-mysql.connector
 E: Invalid operation show
 rbradfor@rubble:~$ apt-cache show python-mysql.connector
 Package: python-mysql.connector
 Priority: optional
 Section: universe/python
 Installed-Size: 386
 Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
 Original-Maintainer: Sandro Tosi mo...@debian.org
 Architecture: all
 Source: mysql-connector-python
 Version: 1.1.6-1
 Replaces: mysql-utilities (<< 1.3.5-2)
 Depends: python:any (>= 2.7.5-5~), python:any (<< 2.8)
 Breaks: mysql-utilities (<< 1.3.5-2)
 Filename:
 pool/universe/m/mysql-connector-python/python-mysql.connector_1.1.6-1_all.deb
 Size: 67196
 MD5sum: 22b2cb35cf8b14ac0bf4493b0d676adb
 SHA1: de626403e1b14f617e9acb0a6934f044fae061c7
 SHA256: 99e34f67d085c28b49eb8145c281deaa6d2b2a48d741e6831e149510087aab94
 Description-en: pure Python implementation of MySQL Client/Server protocol
  MySQL driver written in Python which does not depend on MySQL C client
  libraries and implements the DB API v2.0 specification (PEP-249).
  .
  MySQL Connector/Python is implementing the MySQL Client/Server protocol
  completely in Python. This means you don't have to compile anything or
 MySQL
  (client library) doesn't even have to be installed on the machine.
 Description-md5: bb7e2eba7769d706d44e0ef91171b4ed
 Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
 Bugs: https://bugs.launchpad.net/ubuntu/+filebug
 Origin: Ubuntu

 $ apt-cache show python3-mysql.connector
 Package: python3-mysql.connector
 Priority: optional
 Section: universe/python
 Installed-Size: 385
 Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
 Original-Maintainer: Sandro Tosi mo...@debian.org
 Architecture: all
 Source: mysql-connector-python
 Version: 1.1.6-1
 Depends: python3:any (>= 3.3.2-2~)
 Filename:
 pool/universe/m/mysql-connector-python/python3-mysql.connector_1.1.6-1_all.deb
 Size: 64870
 MD5sum: 461208ed1b89d516d6f6ce43c003a173
 SHA1: bd439c4057824178490b402ad6c84067e1e2884e
 SHA256: 487af52b98bc5f048faf4dc73420eff20b75a150e1f92c82de2ecdd4671659ae
 Description-en: pure Python implementation of MySQL Client/Server protocol
 (Python3)
  MySQL driver written in Python which does not depend on MySQL C client
  libraries and implements the DB API v2.0 specification (PEP-249).
  .
  MySQL Connector/Python is implementing the MySQL Client/Server protocol
  completely in Python. This means you don't have to compile anything or
 MySQL
  (client library) doesn't even have to be installed on the machine.
  .
  This package contains the Python 3 version of mysql.connector.
 Description-md5: 4bca3815f5856ddf4a629b418ec76c8f
 Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
 Bugs: https://bugs.launchpad.net/ubuntu/+filebug
 Origin: Ubuntu


 Ronald Bradford

 Web Site: http://ronaldbradford.com
 LinkedIn: http://www.linkedin.com/in/ronaldbradford
 Twitter: @RonaldBradford http://twitter.com/ronaldbradford
 Skype: RonaldBradford
 GTalk:  Ronald.Bradford



 On Thu, May 7, 2015 at 9:39 PM, Mike Bayer mba...@redhat.com wrote:



 On 5/7/15 5:32 PM, Thomas Goirand wrote:

 If there are really fixes and features we

 need in Py2K then of course we have to either convince MySQLdb to merge
 them or switch to mysqlclient.


 Given the no reply in 6 months I think that's enough to say it:
 mysql-python is a dangerous package with a non-responsive upstream. That's
 always bad, and IMO, enough to try to get rid of it. If you think switching
 to PyMYSQL is effortless, and the best way forward, then let's do that ASAP!


 haha - id rather have drop eventlet + mysqlclient :)

 as far as this thread, where this has been heading is that django has
 already been recommending mysqlclient and it's become apparent just what a
 barrage of emails and messages have been sent Andy Dustman's way, with no
 response.I agree this is troubling behavior, and I've alerted people at
 RH internal that we need to start thinking about this package switch.My
 original issue was that for Fedora etc., changing it in this way is
 challenging, and from my discussions with packaging people, this is
 actually correct - this isn't an easy way to do it for them and there have
 been many emails as a result.  My other issue is the SQLAlchemy testing
 issue - I'd 

Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc. - Role Assignment

2015-05-08 Thread Tim Hinrichs
Hi David,

See below.

On 5/7/15, 1:01 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

Hi Tim

On 06/05/2015 21:53, Tim Hinrichs wrote:
 I wondered if we could properly protect the API call for adding a new
 Role using the current mechanism.  So I came up with a simple example.
 
 Suppose we want to write policy about the API call: addRole(user,
 role-name).  If we're hosting both Pepsi and Coke, we want to write a
 policy that says that only someone in the Pepsi admin role can change
 roles for Pepsi users (likewise for Coke).  We'd want to write something
 like...
 
 addRole(user, role) is permitted for caller if
 caller belongs to the Pepsi-admin role and
 user belongs to the Pepsi role
 
 The policy engine knows if "caller belongs to the Pepsi-admin role"
 because that's part of the token.  But the policy engine doesn't know if
 "user belongs to the Pepsi role" because user is just an argument to
 the API call, so we don't have role info about user.  This helps me
 understand *why* we can't handle the multi-customer use case right now:
 the policy engine doesn't have all the info it needs.
 
 But otherwise, it seems, we could handle the multi-customer use-case
 using mechanism that already exists.  Are there other examples where
 they can't write policy because the engine doesn't have enough info?
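
As a rough oslo.policy-flavoured sketch of that gap (the rule name and data
are made up): the creds built from the caller's token carry the caller's
roles, but the target only carries whatever the API handler puts into it, so
there is nothing to match the target user's roles against.

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault('add_role', 'role:Pepsi-admin'))

    creds = {'roles': ['Pepsi-admin'], 'project_id': 'pepsi'}  # from the caller's token
    target = {'user_id': 'some-user'}                          # from the request body

    # This check only looks at the caller's token, and passes...
    print(enforcer.enforce('add_role', target, creds))
    # ...but "and user belongs to the Pepsi role" cannot be expressed, because
    # the target user's role assignments are simply not present in the target.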
 

Your simple example does not work in the federated case. This is because
role and attribute assignments are not done by Keystone, or by any part
of Openstack, but by a remote IDP. It is assumed that the administrator
of this remote IDP knows who his users are, and will assign the correct
attributes to them. However, these are not necessarily OpenStack roles
(they most certainly wont be).

Therefore, we have built a perfectly good mechanism into Keystone, to
ensure that the users from any IDP (Coke, Pepsi or Virgin Cola etc.) get
the right Keystone/Openstack role(s), and this is via attibute mapping.
When the mapping takes place, the user is in the process of logging in,
therefore Keystone knows the attributes of the user (assigned by the
IDP) and can therefore know which Openstack role to assign to him/her.

I understand the idea of mapping attributes from a remote IDP to
OpenStack/Keystone roles.  But I don't understand the impact on my
example.  In my example, the policy statement fails to work for one of 2
reasons:

1. there's no such thing as a Pepsi-admin role
2. The policy engine can't check if "user belongs to Pepsi"

The policy statement fails to work because of (2) for sure.  But are you
saying it also fails to work because of (1) in the federated case?  I
would have thought that the Keystone roles used to represent the Pepsi IDP
attributes would be separate from the Keystone roles used to represent
Coke IDP attributes, and therefore there'd be some role corresponding to
Pepsi-admin and Coke-admin.

Sorry if this is obvious.

Tim


I hope this helps.

regards

David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-08 Thread Giulio Fidente

On 05/08/2015 05:41 PM, James Slagle wrote:

On Thu, May 7, 2015 at 5:46 PM, Giulio Fidente gfide...@redhat.com wrote:

On 05/07/2015 07:35 PM, Dan Prince wrote:


On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:


On 05/07/2015 03:31 PM, Dan Prince wrote:


On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:



EnablePacemaker is set to 'false' by default. IMO it should be opt-in:


http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d



sure that param is false by default, but one can enable it and deploy with
pacemaker on single node, and in fact many people do this for dev purposes

before that change, we were even running CI on single node with pacemaker so
as a matter of fact, one could get rid of the conditionals in the manifest
today by just assuming there will be pacemaker


This is the direction I thought we were moving. When you deploy a
single controller, it is an HA cluster of 1. As opposed to just not
using pacemaker entirely. This is the model we did previously for HA
and I thought it worked well in that it got everyone testing and using
the same code path.


indeed this holds true

today if EnablePacemaker is true and ControllerCount is 1 you do get a 
working overcloud, with Pacemaker and 1 controller


the very same overcloud config applies to ControllerCount = 3


I thought the EnablePacemaker parameter was more or less a temporary
thing to get us over the initial disruption of moving things over to
pacemaker.


not really, purpose of that boolean is to support the deployment of an 
overcloud without using Pacemaker


this is possible today as well, but only with ControllerCount 1

probably, in the future, we will work on the non-pacemaker scenario with 
ControllerCount = 3 as well, in which case a split of the manifests 
would really be useful


my only concerns on the topics are:

1. where and when to move the parts which are shared amongst the manifests
2. if it is urgent or not to do the split today given the shared parts 
would be duplicated

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Jay Pipes

On 05/08/2015 08:06 AM, Dulko, Michal wrote:

Hi,

I wonder why nova-api or cinder-api aren't present in the service group API of each 
project:


Technically, this is because the API workers do not inherit from 
nova.service.Service [1], which is for RPC-based workers. They inherit 
from nova.service.WSGIService [2], which is for REST-based workers.


Only the RPC-based workers ever get a service record created in the 
services table in the database, and thus only those records appear in 
the service list output.


Frankly, the entire services table, DB-based servicegroup API, and the 
services API extensions should die in a fire. They don't belong in Nova 
or Cinder at all. This kind of thing belongs in ZooKeeper or some other 
group monitoring solution, not in the projects themselves.
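
A small sketch of that style of liveness tracking, using the tooz library with
a ZooKeeper backend (the URL and group/member names are placeholders):

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'cinder-volume@host1')
    coordinator.start()

    group = b'cinder-services'
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    # Liveness comes from the coordination backend's sessions/heartbeats,
    # not from timestamp rows in a services table.
    print(coordinator.get_members(group).get())
    coordinator.heartbeat()
    coordinator.stop()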


See also: https://review.openstack.org/#/c/138607/

Best,
-jay

[1] http://git.openstack.org/cgit/openstack/nova/tree/nova/service.py#n123

[2] http://git.openstack.org/cgit/openstack/nova/tree/nova/service.py#n308


mdulko:devstack/ (master) $ cinder service-list
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|      Binary      |              Host             | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
| cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:49.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+

Are there any technical limitations to include API services there? Use case is 
that when service dies during request processing - it leaves some garbage in 
the DB and quotas. This could be cleaned up by another instance of a service. 
For that aforementioned instance would need to know if service that was 
processing the request is down.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-08 Thread Jay Pipes

On 05/08/2015 09:29 AM, Erik Moe wrote:

Hi,

I have not been able to work with upstreaming of this for some time now.
But now it looks like I may make another attempt. Who else is interested
in this, as a user or to help contributing? If we get some traction we
can have an IRC meeting sometime next week.


Hi Erik,

Mirantis has interest in this functionality, and depending on the amount 
of work involved, we could pitch in...


Please cc me or add me to relevant reviews and I'll make sure the right 
folks are paying attention.


All the best,
-jay


*From:*Scott Drennan [mailto:sco...@nuagenetworks.net]
*Sent:* den 4 maj 2015 18:42
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [neutron]Anyone looking at support for
VLAN-aware VMs in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I
don't see any work on VLAN-aware VMs for Liberty.  There is a
blueprint[1] and specs[2] which was deferred from Kilo - is this
something anyone is looking at as a Liberty candidate?  I looked but
didn't find any recent work - is there somewhere else work on this is
happening?  No-one has listed it on the liberty summit topics[3]
etherpad, which could mean it's uncontroversial, but given history on
this, I think that's unlikely.

cheers,

Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

[2]: https://review.openstack.org/#/c/94612

[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Joshua Harlow

See also:

http://lists.openstack.org/pipermail/openstack-dev/2015-May/063602.html

:-/

-Josh

Jay Pipes wrote:

On 05/08/2015 08:06 AM, Dulko, Michal wrote:

Hi,

I wonder why nova-api or cinder-api aren't present service group API
of each project:


Technically, this is because the API workers do not inherit from
nova.service.Service [1], which is for RPC-based workers. They inherit
from nova.service.WSGIService [2], which is for REST-based workers.

Only the RPC-based workers ever get a service record created in the
services table in the database, and thus only those records appear in
the service list output.

Frankly, the entire services table, DB-based servicegroup API, and the
services API extensions should die in a fire. They don't belong in Nova
or Cinder at all. This kind of thing belongs in ZooKeeper or some other
group monitoring solution, not in the projects themselves.

See also: https://review.openstack.org/#/c/138607/

Best,
-jay

[1] http://git.openstack.org/cgit/openstack/nova/tree/nova/service.py#n123

[2] http://git.openstack.org/cgit/openstack/nova/tree/nova/service.py#n308


mdulko:devstack/ (master) $ cinder service-list
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|      Binary      |              Host             | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
| cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:49.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+


Are there any technical limitations to include API services there? Use
case is that when service dies during request processing - it leaves
some garbage in the DB and quotas. This could be cleaned up by another
instance of a service. For that aforementioned instance would need to
know if service that was processing the request is down.

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Exception in rpc_dispatcher

2015-05-08 Thread Ben Nemec
On 05/07/2015 03:34 AM, Vikash Kumar wrote:
 I did following in my agent code:
 
 import eventlet
 
 eventlet.monkey_patch()
 
 but still I see same issue.

Unfortunately, monkey patching in an agent is probably too late.  The
monkey patching has to happen at application startup to be done before
everything else.  See the section starting at line 30 or so in
https://review.openstack.org/#/c/154642/2/specs/eventlet-best-practices.rst
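
The usual pattern (a sketch modeled on the nova/ceilometer cmd/__init__.py
examples linked later in this thread; the package name is illustrative) is to
monkey patch in the very first module the console script imports, before
anything else is imported:

    # myagent/cmd/__init__.py  (illustrative package name)
    # Imported before any other project module by the console script's entry
    # point, so the standard library is patched before oslo.messaging or
    # anything else loads it.
    import eventlet

    eventlet.monkey_patch()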

 
 On Thu, May 7, 2015 at 1:22 PM, Mehdi Abaakouk sil...@sileht.net wrote:
 

 Hi,
 
 This is a well known issue when eventlet monkey patching is not done
 correctly.
 The application must do the monkey patching before anything else even
 loading another module that eventlet.
 
 You can find more information here:
 https://bugs.launchpad.net/oslo.messaging/+bug/1288878
 
 Or some examples of how nova and ceilometer ensure that:
 
  https://github.com/openstack/nova/blob/master/nova/cmd/__init__.py
 
 https://github.com/openstack/ceilometer/blob/master/ceilometer/cmd/__init__.py
 
 
 More recent version of oslo.messaging already outputs a better error
 message in this case.
 
 Cheers,
 
 ---
 Mehdi Abaakouk
 mail: sil...@sileht.net
 irc: sileht
 
 
 
  On 2015-05-07 08:11, Vikash Kumar wrote:
 
 Hi,

I am getting this error on the agent side. I am getting the same message
 twice, one after the other.

 2015-05-07 11:39:28.189 11363 ERROR oslo.messaging.rpc.dispatcher
 [req-43875dc3-99a9-4803-aba2-5cff22943c2c ] Exception during message
 handling: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 Traceback
 (most recent call last):
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 134, in _dispatch_and_reply
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 incoming.message))
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 179, in _dispatch
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 localcontext.clear_local_context()
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   File
 /usr/lib/python2.7/dist-packages/oslo/messaging/localcontext.py, line
 55,
 in clear_local_context
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 delattr(_STORE, _KEY)
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
 AttributeError:
 _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
 2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding Mehdi Abaakouk (sileht) to oslo-core

2015-05-08 Thread Ben Nemec
+1!

On 05/07/2015 09:36 AM, Davanum Srinivas wrote:
 Dear Oslo folks,
 
 I'd like to propose adding Mehdi Abaakouk to oslo-core. He is already
 leading the oslo.messaging team and helping with Tooz, and futurist
 efforts.
 
 I am hoping to get Mehdi more involved across the board in Oslo.
 
 Thanks,
 Dims
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

2015-05-08 Thread John Garbutt
On 30 April 2015 at 12:30, John Garbutt j...@johngarbutt.com wrote:
 Hi,

 I propose we add Melanie to nova-core.

 She has been consistently doing great quality code reviews[1],
 alongside a wide array of other really valuable contributions to the
 Nova project.

 Please respond with comments, +1s, or objections within one week.

Thank you all for your positive comments.

Melanie, welcome to nova-core  :)

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-08 Thread James Slagle
On Thu, May 7, 2015 at 5:46 PM, Giulio Fidente gfide...@redhat.com wrote:
 On 05/07/2015 07:35 PM, Dan Prince wrote:

 On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:

 On 05/07/2015 03:31 PM, Dan Prince wrote:

 On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:


 [...]

 on the other hand, we can very well get rid of the ifs today by
 deploying *with* pacemaker in single node scenario as well! we already
 have EnablePacemaker always set to true for dev purposes, even on single
 node


 EnablePacemaker is set to 'false' by default. IMO it should be opt-in:


 http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d


 sure that param is false by default, but one can enable it and deploy with
 pacemaker on single node, and in fact many people do this for dev purposes

 before that change, we were even running CI on single node with pacemaker so
 as a matter of fact, one could get rid of the conditionals in the manifest
 today by just assuming there will be pacemaker

This is the direction I thought we were moving. When you deploy a
single controller, it is an HA cluster of 1. As opposed to just not
using pacemaker entirely. This is the model we did previously for HA
and I thought it worked well in that it got everyone testing and using
the same code path.

I thought the EnablePacemaker parameter was more or less a temporary
thing to get us over the initial disruption of moving things over to
pacemaker.


 this said, I prefer myself to leave some air for a (future?) non-pacemaker
 scenario, but I still wanted to point out the reason why the conditionals
 are there in the first place

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Mike Dorman
+1 I agree we should do this, etc., etc.

I don’t have a strong preference for #1 or #2, either.  But I do think #1 
is slightly more complicated from a deployer/operator perspective.  It’s 
another module I have to manage, pull in, etc.  Granted this is a trivial 
amount of incremental work.

I confess I am not super familiar with openstacklib, but I don’t 
understand why “We have to differentiate *common-in-OpenStack* and 
*common-in-our-modules*.”  To me, openstacklib is for _anything_ that’s 
common.  Maybe you could expand upon your thinking on this a little more, 
just so it’s a little more explicit?

Since others are not chomping at the bit to chime in here, I guess there 
is probably not many major preferences on this.  I would be happy with 
getting this done, regardless of how it’s implemented.

Thanks,
Mike






On 5/8/15, 7:50 AM, Rich Megginson rmegg...@redhat.com wrote:

On 05/08/2015 07:17 AM, Doug Hellmann wrote:
 Excerpts from Ben Nemec's message of 2015-05-07 15:57:48 -0500:
 I don't know much about the puppet project organization so I won't
 comment on whether 1 or 2 is better, but a big +1 to having a common
 way to configure Oslo opts.  Consistency of those options across all
 services is one of the big reasons we pushed so hard for the libraries
 to own their option definitions, so this would align well with the way
 the projects are consumed.

 - -Ben
 Well said, Ben.

 Doug

 On 05/07/2015 03:19 PM, Emilien Macchi wrote:
 Hi,

 I think one of the biggest challenges working on Puppet OpenStack
 modules is to keep code consistency across all our modules (~20).
 If you've read the code, you'll see there is some differences
 between RabbitMQ configuration/parameters in some modules and this
 is because we did not have the right tools to make it properly. A
 lot of the duplicated code we have comes from Oslo libraries
 configuration.

 Now, I come up with an idea and two proposals.

 Idea 

 We could have some defined types to configure oslo sections in
 OpenStack configuration files.

 Something like:

   define oslo::messaging::rabbitmq($user, $password) {
     ensure_resource($name, 'oslo_messaging_rabbit/rabbit_userid',
       {'value' => $user})
     ...
   }

 Usage in puppet-nova:

   ::oslo::messaging::rabbitmq { 'nova_config':
     user     => 'nova',
     password => 'secrete',
   }
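
 In practice a define like this just ends up managing ini entries such as
 the following in nova.conf (illustrative values only):

   [oslo_messaging_rabbit]
   rabbit_userid = nova
   rabbit_password = secrete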

 And patch all our modules to consume these defines and finally
  have consistency in the way we configure Oslo projects (messaging,
 logging, etc).

 Proposals =

 #1 Creating puppet-oslo ... and having oslo::messaging::rabbitmq,
 oslo::messaging::qpid, ..., oslo::logging, etc. This module will be
 used only to configure actual Oslo libraries when we deploy
 OpenStack. To me, this solution is really consistent with how
  OpenStack works today and is scalable as soon as we contribute Oslo
 configuration changes in this module.

+1 - For the Keystone authentication options, I think it is important to 
encapsulate this and hide the implementation from the other services as 
much as possible, to make it easier to use all of the different types of 
authentication supported by Keystone now and in the future.  I would 
think that something similar applies to the configuration of other 
OpenStack services.


 #2 Using puppet-openstacklib ... and having
 openstacklib::oslo::messaging::(...) A good thing is our modules
 already use openstacklib. But openstacklib does not configure
 OpenStack now, it creates some common defines  classes that are
 consumed in other modules.


  I personally prefer #1 because:
  * it's consistent with OpenStack.
  * I don't want openstacklib to be the repo where we put everything
  common. We have to differentiate *common-in-OpenStack* and
  *common-in-our-modules*. I think openstacklib should continue to be
  used for common things in our modules, like providers, wrappers,
  database management, etc. But to configure common OpenStack bits
  (aka Oslo©), we might want to create puppet-oslo.

 As usual, any thoughts are welcome,

 Best,



 __
 

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [release] taskflow release 0.10.1 (liberty)

2015-05-08 Thread Joshua Harlow

We are eager to announce the release (with needed bug fixes) of:

taskflow 0.10.1: Taskflow structured state management library.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.10.1

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Changes in taskflow 0.10.0..0.10.1
--

e6a0419 Avoid trying to copy tasks results when cloning/copying

Diffstat (except docs and test files)
-

taskflow/persistence/logbook.py | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][Nagios] Configure Nagios to monitor neutron agents

2015-05-08 Thread Richard Raseley

Leo Y wrote:


Can anyone direct me to instructions or example of how to configure
Nagios to monitor neutron L2 and L3 agents?


Leo,

Though I don't think the content directly addresses the agents you 
called out, please take a look at the following links:


* https://wiki.openstack.org/wiki/Operations/Monitoring

* https://github.com/osops/tools-monitoring
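
As a rough sketch of the kind of check those links describe, something like
the following can be wrapped as a Nagios plugin that goes CRITICAL when an
L2 or L3 agent stops reporting. The credentials, account names and the exact
agent types to watch are hypothetical; adjust them for your deployment:

    #!/usr/bin/env python
    # check_neutron_agents.py -- minimal Nagios-style check (sketch only)
    import sys

    from neutronclient.v2_0 import client

    neutron = client.Client(username='nagios',        # hypothetical account
                            password='secret',
                            tenant_name='services',
                            auth_url='http://controller:5000/v2.0')

    # list_agents() needs admin rights; each agent reports an 'alive' flag.
    dead = [a for a in neutron.list_agents()['agents']
            if a['agent_type'] in ('Open vSwitch agent', 'L3 agent')
            and not a['alive']]

    if dead:
        print('CRITICAL: %s' % ', '.join(
            '%s on %s' % (a['agent_type'], a['host']) for a in dead))
        sys.exit(2)
    print('OK: all monitored neutron agents alive')
    sys.exit(0)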

Regards,

Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Colleen Murphy
On Fri, May 8, 2015 at 9:47 AM, Mike Dorman mdor...@godaddy.com wrote:

 +1 I agree we should do this, etc., etc.

 I don’t have a strong preference for #1 or #2, either.  But I do think #1
 is slightly more complicated from a deployer/operator perspective.  It’s
 another module I have to manage, pull in, etc.  Granted this is a trivial
 amount of incremental work.

 I confess I am not super familiar with openstacklib, but I don’t
 understand why We have to differentiate *common-in-OpenStack* and
 *common-in-our-modules*.”  To me, openstacklib is for _anything_ that’s
 common.  Maybe you could expand upon your thinking on this a little more,
 just so it’s a little more explicit?

 Since others are not chomping at the bit to chime in here, I guess there
 are probably not many strong preferences on this.  I would be happy with
 getting this done, regardless of how it’s implemented.

 Thanks,
 Mike

I am strongly for #2. Adding another dependent module adds complexity for
both the operators who have to deploy it and the developers who have to
release it. puppet-openstacklib is already our dumping ground for shared
code, and I don't see why we should be shy of adding new things to it -
common-in-OpenStack IS common-in-our-modules and that's why
puppet-openstacklib was created.

Colleen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-08 Thread Doug Hellmann
Excerpts from Ronald Bradford's message of 2015-05-08 10:41:30 -0400:
 I guess I may have spoken too soon.
 https://wiki.openstack.org/wiki/PyMySQL_evaluation states   Oracle refuses
 to publish MySQL-connector-Python on Pypi, which is critical to the
 Openstack infrastructure.
 
 I am unclear when this statement was made and who is involved in this
 discussion.  As I have contacts in the MySQL engineering and Oracle
 Corporation product development teams I will endeavor to seek a more
 current and definitive response and statement.

We install all of our library dependencies via pip (for unit,
functional, and integration tests). New versions of pip require special
handling to install packages not hosted on PyPI, and that special
handling must be performed in every place where we have a dependency on
the package, which places an extra burden on us that we would prefer to
avoid.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
On 9 May 2015 at 05:51, Joe Gordon joe.gord...@gmail.com wrote:



  Once we are actually testing that all of global requirements is
  co-installable, will we end up with even more cases like this? Or is this
  just an artifact of capping for kilo?
 https://review.openstack.org/#/c/166377/

As I read it, we've got some tooling that isn't PEP-440 compatible
(https://www.python.org/dev/peps/pep-0440/#compatible-release defines
~=) and as such we had to rollback the intended use of that. As long
as we identify and fix those tools, we should be fine. Did anyone
involved with that situation create a bug we can use to track this? I
don't think it has anything to do with the choice of cap-or-not.
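
For reference, PEP 440's compatible-release operator expands roughly like
this in a requirements file (package and version are illustrative only):

    oslo.config~=1.11.0    # equivalent to: oslo.config>=1.11.0, ==1.11.*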

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
On 8 May 2015 at 23:23, Sean Dague s...@dague.net wrote:
 On 05/08/2015 07:13 AM, Robert Collins wrote:

 The resolver I have doesn't preserve the '1b' feature at all at this
 point, and we're going to need to find a way to separate out 'I want
 X' from 'I want X and I know better than you', which will let folk get
 into tasty tasty trouble (like we're in now).

 Gotcha, so, yes, so the subtleties of pip were lost here.

 Instead of using another tool, could we make a version of this job pull
 and use the prerelease version of your pip code. Then we can run the
 same tests and fix them in a non voting job against this code that has
 not yet released.

That's certainly possible too. Upside: if it works we know it works in
pip. Downside: we'll be tracking something that is in active
development and late-prototype/early-alpha stage. This is likely to
be in pip 8.0 (7.0 should be out inside of a week).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Mathieu Gagné
On 2015-05-07 4:19 PM, Emilien Macchi wrote:
 
 Proposals
 =
 
 #1 Creating puppet-oslo
 ... and having oslo::messaging::rabbitmq, oslo::messaging::qpid, ...,
 oslo::logging, etc.
 This module will be used only to configure actual Oslo libraries when we
 deploy OpenStack. To me, this solution is really consistent with how
  OpenStack works today and is scalable as soon as we contribute Oslo
 configuration changes in this module.
 
 #2 Using puppet-openstacklib
 ... and having openstacklib::oslo::messaging::(...)
 A good thing is our modules already use openstacklib.
 But openstacklib does not configure OpenStack now, it creates some
 common defines  classes that are consumed in other modules.
 

I prefer #1 due to oslo configs being specific to OpenStack versions.

The goal of openstacklib is to (hopefully) be OpenStack version agnostic
and be used only for code common across all *our* modules.

That's why I suggest going with solution #1, unless someone comes up with a
solution to support multiple OpenStack versions in openstacklib without
the use of stable branches.

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-08 Thread Dan Prince
On Fri, 2015-05-08 at 11:41 -0400, James Slagle wrote:
 On Thu, May 7, 2015 at 5:46 PM, Giulio Fidente gfide...@redhat.com wrote:
  On 05/07/2015 07:35 PM, Dan Prince wrote:
 
  On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:
 
  On 05/07/2015 03:31 PM, Dan Prince wrote:
 
  On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:
 
 
  [...]
 
  on the other hand, we can very well get rid of the ifs today by
  deploying *with* pacemaker in single node scenario as well! we already
  have EnablePacemaker always set to true for dev purposes, even on single
  node
 
 
  EnablePacemaker is set to 'false' by default. IMO it should be opt-in:
 
 
  http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d
 
 
  sure that param is false by default, but one can enable it and deploy with
  pacemaker on single node, and in fact many people do this for dev purposes
 
  before that change, we were even running CI on single node with pacemaker so
  as a matter of fact, one could get rid of the conditionals in the manifest
  today by just assuming there will be pacemaker
 
 This is the direction I thought we were moving. When you deploy a
 single controller, it is an HA cluster of 1. As opposed to just not
 using pacemaker entirely. This is the model we did previously for HA
 and I thought it worked well in that it got everyone testing and using
 the same code path.
 
 I thought the EnablePacemaker parameter was more or less a temporary
 thing to get us over the initial disruption of moving things over to
 pacemaker.

I personally think there may be value in having an option to deploy
without Pacemaker. I would very much like to see TripleO maintain an
option that supports that. The initial goal of EnablePacemaker (as I
understood it) was just this. To support deploying with and without
Pacemaker.

The talk in this thread is largely about the stylistic concerns of using
the $enable_pacemaker boolean within the same manifest and the
maintenance burden this is going to cause. Simply stated, it seems
cleaner to just split things out into separate templates based upon the
pacemaker and non-pacemaker versions. Given that our goal is to strive
towards minimal manifests (with just includes), this should be a nice
middle ground.

With regards to TripleO upstream defaults I suppose we need more
discussion about this. My understanding was that traditionally TripleO
has been in the keepalived/HAProxy camp for VIP management. Use of
EnablePacemaker would (currently) disable this.

From a product perspective, a company could take either approach and run
with it as its default implementation. I think it is also fine for one
approach to evolve ahead of the other.

Dan

 
 
  this said, I prefer myself to leave some air for a (future?) non-pacemaker
  scenario, but I still wanted to point out the reason why the conditionals
  are there in the first place
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-08 Thread Chris Friesen

On 05/08/2015 12:13 AM, Clint Byrum wrote:

Excerpts from Clay Gerrard's message of 2015-05-07 18:35:23 -0700:

On Thu, May 7, 2015 at 3:48 PM, Clint Byrum cl...@fewbar.com wrote:


I'm still very curious to hear if anybody has been willing to try to
make Swift work on pypy.



yeah, Alex Gaynor was helping out with it for awhile.  It worked.  And it
helped.  A little bit.

Probably still worth looking at if you're curious, but I'm not aware of
anyone who's currently working aggressively to productionize swift running
on pypy.


So if I take your phrase A little bit to mean Not enough to matter
then I can imagine there isn't much more that can be done.

It sounds like there are really deep architectural issues in Swift that
need addressing, not just make code run faster, but get closer to
the metal type efficiencies that are being sought.


I understood it to be not so much "get closer to the metal" as "deal 
efficiently with parallel storage I/O".


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Liberty Glance Summit schedule.

2015-05-08 Thread Nikhil Komawar
Hi all,

The summit schedule is online. Please check [1]; ping me on IRC ( nikhil_k ) if 
you have any last-minute concerns. The schedule should be considered 
final, barring any such exceptions.

The relevant discussions can be found at [2] and [3].
 
[1] https://libertydesignsummit.sched.org/overview/type/design+summit/Glance
[2] https://etherpad.openstack.org/p/liberty-glance-virtual-mini-summit
[3] https://etherpad.openstack.org/p/liberty-glance-summit-topics

Thanks,
 -Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Joe Gordon
On Fri, May 8, 2015 at 4:23 AM, Sean Dague s...@dague.net wrote:

 On 05/08/2015 07:13 AM, Robert Collins wrote:
  On 8 May 2015 at 22:54, Sean Dague s...@dague.net wrote:
  I'm slightly confused how we got there, because we do try to install
  everything all at once in the test jobs -
 
 http://logs.openstack.org/83/181083/1/check/check-requirements-integration-dsvm/4effcf7/console.html#_2015-05-07_17_49_26_699
 
  And it seemed to work, you can find similar lines in previous changes as
  well. That was specifically added as a check for these kinds of issues.
  Is this a race in the resolution?
 
  What resolution :).
 
  So what happens with pip install -r
  /opt/stack/new/requirements/global-requirements.txt is that the
  constraints in that file are all immediately put into pip's state,
  including oslo.config = 1.11.0, and then all other constraints that
  reference to oslo.config are simply ignored. this is 1b (and 2a) on
  https://github.com/pypa/pip/issues/988.
 
  IOW we haven't been testing what we've thought we've been testing.
  What we've been testing is that 'python setup.py install X' for X in
  global-requirements.txt works, which sadly doesn't tell us a lot at
  all.
 
  So, as I have a working (but unpolished) resolver, when I try to do
  the same thing, it chews away at the problem and concludes that no, it
  can't do it - because its no longer ignoring the additional
  constraints.
 
  To get out of the hole, we might consider using pip-compile now as a
  warning job - if it can succeed we'll be able to be reasonably
  confident that pip itself will succeed once the resolver is merged.
 
  The resolver I have doesn't preserve the '1b' feature at all at this
  point, and we're going to need to find a way to separate out 'I want
  X' from 'I want X and I know better than you', which will let folk get
  into tasty tasty trouble (like we're in now).

 Gotcha, so, yes, so the subtleties of pip were lost here.

 Instead of using another tool, could we make a version of this job pull
 and use the prerelease version of your pip code. Then we can run the
 same tests and fix them in a non voting job against this code that has
 not yet released.


Once we are actually testing that all of global requirements is
co-installable, will we end up with even more cases like this? Or is this
just an artifact of capping for kilo?
https://review.openstack.org/#/c/166377/


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-08 Thread Joshua Harlow

If that could get published, please do make it happen!

As for who tried to contact Oracle and never got a response, I am 
not sure about that question (or its answer). But if we can get that to 
happen it would be great for the whole Python community (IMHO).


-Josh

Ronald Bradford wrote:

I guess I may have spoken too soon.
https://wiki.openstack.org/wiki/PyMySQL_evaluation states  Oracle
refuses to publish MySQL-connector-Python on Pypi, which is critical to
the Openstack infrastructure.

I am unclear when this statement was made and who is involved in this
discussion.  As I have contacts in the MySQL engineering and Oracle
Corporation product development teams I will endeavor to seek a more
current and definitive response and statement.

Regards

Ronald



On Fri, May 8, 2015 at 10:33 AM, Ronald Bradford m...@ronaldbradford.com
mailto:m...@ronaldbradford.com wrote:

Has anybody considered the native python connector for MySQL that
supports Python 3.

Here are the Ubuntu Packages.


$ apt-get show python-mysql.connector
E: Invalid operation show
rbradfor@rubble:~$ apt-cache show python-mysql.connector
Package: python-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 386
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
mailto:ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
mailto:mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Replaces: mysql-utilities ( 1.3.5-2)
Depends: python:any (= 2.7.5-5~), python:any ( 2.8)
Breaks: mysql-utilities ( 1.3.5-2)
Filename:

pool/universe/m/mysql-connector-python/python-mysql.connector_1.1.6-1_all.deb
Size: 67196
MD5sum: 22b2cb35cf8b14ac0bf4493b0d676adb
SHA1: de626403e1b14f617e9acb0a6934f044fae061c7
SHA256: 99e34f67d085c28b49eb8145c281deaa6d2b2a48d741e6831e149510087aab94
Description-en: pure Python implementation of MySQL Client/Server
protocol
  MySQL driver written in Python which does not depend on MySQL C client
  libraries and implements the DB API v2.0 specification (PEP-249).
  .
  MySQL Connector/Python is implementing the MySQL Client/Server
protocol
  completely in Python. This means you don't have to compile
anything or MySQL
  (client library) doesn't even have to be installed on the machine.
Description-md5: bb7e2eba7769d706d44e0ef91171b4ed
Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu

$ apt-cache show python3-mysql.connector
Package: python3-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 385
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
mailto:ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
mailto:mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Depends: python3:any (= 3.3.2-2~)
Filename:

pool/universe/m/mysql-connector-python/python3-mysql.connector_1.1.6-1_all.deb
Size: 64870
MD5sum: 461208ed1b89d516d6f6ce43c003a173
SHA1: bd439c4057824178490b402ad6c84067e1e2884e
SHA256: 487af52b98bc5f048faf4dc73420eff20b75a150e1f92c82de2ecdd4671659ae
Description-en: pure Python implementation of MySQL Client/Server
protocol (Python3)
  MySQL driver written in Python which does not depend on MySQL C client
  libraries and implements the DB API v2.0 specification (PEP-249).
  .
  MySQL Connector/Python is implementing the MySQL Client/Server
protocol
  completely in Python. This means you don't have to compile
anything or MySQL
  (client library) doesn't even have to be installed on the machine.
  .
  This package contains the Python 3 version of mysql.connector.
Description-md5: 4bca3815f5856ddf4a629b418ec76c8f
Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu


Ronald Bradford

Web Site: http://ronaldbradford.com http://ronaldbradford.com/
LinkedIn: http://www.linkedin.com/in/ronaldbradford
Twitter: @RonaldBradford http://twitter.com/ronaldbradford
Skype: RonaldBradford
GTalk:  Ronald.Bradford



On Thu, May 7, 2015 at 9:39 PM, Mike Bayer mba...@redhat.com
mailto:mba...@redhat.com wrote:



On 5/7/15 5:32 PM, Thomas Goirand wrote:

If there are really fixes and features we

need in Py2K then of course we have to either convince
MySQLdb to merge
them or switch to mysqlclient.


Given the no reply in 6 months I think that's enough to
say it: mysql-python is a dangerous package with a
non-responsive upstream. That's always bad, and 

Re: [openstack-dev] [all] Replace mysql-python with mysqlclient

2015-05-08 Thread Mike Bayer



On 5/8/15 10:41 AM, Ronald Bradford wrote:
I guess I may have spoken too soon. 
https://wiki.openstack.org/wiki/PyMySQL_evaluation states   Oracle 
refuses to publish MySQL-connector-Python on Pypi, which is critical 
to the Openstack infrastructure.


I am unclear when this statement was made and who is involved in this 
discussion.  As I have contacts in the MySQL engineering and Oracle 
Corporation product development teams I will endeavor to seek a more 
current and definitive response and statement.


I made that statement.  I and others have been in contact for many 
months with Andrew Rist as well as Geert Vanderkelen regarding this 
issue without any result.  We all preferred mysql-connector originally, 
but as time has dragged on, and as my messages to Andrew and others 
that OpenStack is essentially going to give up on their driver have gone 
unanswered, we've all gotten more involved with PyMySQL, and it has come out 
as the better driver overall.  PyMySQL is written by the same author 
as the mysqlclient driver that it looks like we are all switching to 
regardless (Django has already recommended this to their userbase).


PyMySQL also has very straightforward source code, performs better in 
tests, and doesn't have weird decisions like deciding to make a huge 
backwards-incompatible change to return bytearrays and not bytes in Py3K 
raw mode 
(http://dev.mysql.com/doc/relnotes/connector-python/en/news-2-0-0.html).


PyMySQL also is easily accessible as a project with very fast support 
via Github; several of us have been able to improve PyMySQL via pull 
requests quickly and without issue, and the maintainer even made me a 
member of the project so I can commit fixes directly if I 
wanted to.  I don't know that Oracle, as MySQL-connector's owner, would 
be comfortable with these things, and the only way to get support is 
through Oracle's large and cumbersome bug tracker.
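
For anyone wondering what the switch looks like at the code level, it is
mostly a connection-string change on the SQLAlchemy side; a minimal sketch
with hypothetical credentials:

    from sqlalchemy import create_engine

    # mysql-python and its mysqlclient fork both register as the MySQLdb
    # module, which is what the bare "mysql" dialect resolves to:
    engine = create_engine("mysql://nova:secret@127.0.0.1/nova")

    # PyMySQL (pure Python) just needs the driver spelled out in the URL:
    engine = create_engine("mysql+pymysql://nova:secret@127.0.0.1/nova")

In an OpenStack deployment the same change usually amounts to editing the
"connection" option in each service's configuration file.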







Regards

Ronald



On Fri, May 8, 2015 at 10:33 AM, Ronald Bradford 
m...@ronaldbradford.com mailto:m...@ronaldbradford.com wrote:


Has anybody considered the native python connector for MySQL that
supports Python 3.

Here are the Ubuntu Packages.


$ apt-get show python-mysql.connector
E: Invalid operation show
rbradfor@rubble:~$ apt-cache show python-mysql.connector
Package: python-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 386
Maintainer: Ubuntu Developers
ubuntu-devel-disc...@lists.ubuntu.com
mailto:ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
mailto:mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Replaces: mysql-utilities ( 1.3.5-2)
Depends: python:any (= 2.7.5-5~), python:any ( 2.8)
Breaks: mysql-utilities ( 1.3.5-2)
Filename:

pool/universe/m/mysql-connector-python/python-mysql.connector_1.1.6-1_all.deb
Size: 67196
MD5sum: 22b2cb35cf8b14ac0bf4493b0d676adb
SHA1: de626403e1b14f617e9acb0a6934f044fae061c7
SHA256:
99e34f67d085c28b49eb8145c281deaa6d2b2a48d741e6831e149510087aab94
Description-en: pure Python implementation of MySQL Client/Server
protocol
 MySQL driver written in Python which does not depend on MySQL C
client
 libraries and implements the DB API v2.0 specification (PEP-249).
 .
 MySQL Connector/Python is implementing the MySQL Client/Server
protocol
 completely in Python. This means you don't have to compile
anything or MySQL
 (client library) doesn't even have to be installed on the machine.
Description-md5: bb7e2eba7769d706d44e0ef91171b4ed
Homepage: http://dev.mysql.com/doc/connector-python/en/index.html
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu

$ apt-cache show python3-mysql.connector
Package: python3-mysql.connector
Priority: optional
Section: universe/python
Installed-Size: 385
Maintainer: Ubuntu Developers
ubuntu-devel-disc...@lists.ubuntu.com
mailto:ubuntu-devel-disc...@lists.ubuntu.com
Original-Maintainer: Sandro Tosi mo...@debian.org
mailto:mo...@debian.org
Architecture: all
Source: mysql-connector-python
Version: 1.1.6-1
Depends: python3:any (= 3.3.2-2~)
Filename:

pool/universe/m/mysql-connector-python/python3-mysql.connector_1.1.6-1_all.deb
Size: 64870
MD5sum: 461208ed1b89d516d6f6ce43c003a173
SHA1: bd439c4057824178490b402ad6c84067e1e2884e
SHA256:
487af52b98bc5f048faf4dc73420eff20b75a150e1f92c82de2ecdd4671659ae
Description-en: pure Python implementation of MySQL Client/Server
protocol (Python3)
 MySQL driver written in Python which does not depend on MySQL C
client
 libraries and implements the DB API v2.0 specification (PEP-249).
 .
 MySQL Connector/Python is implementing the MySQL Client/Server
protocol
 completely in Python. This means you don't have to compile
 

Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Sean Dague
On 05/08/2015 02:48 PM, Robert Collins wrote:
 On 8 May 2015 at 23:23, Sean Dague s...@dague.net wrote:
 On 05/08/2015 07:13 AM, Robert Collins wrote:
 
 The resolver I have doesn't preserve the '1b' feature at all at this
 point, and we're going to need to find a way to separate out 'I want
 X' from 'I want X and I know better than you', which will let folk get
 into tasty tasty trouble (like we're in now).

 Gotcha, so, yes, so the subtleties of pip were lost here.

 Instead of using another tool, could we make a version of this job pull
 and use the prerelease version of your pip code. Then we can run the
 same tests and fix them in a non voting job against this code that has
 not yet released.
 
 That's certainly possible too. Upside: if it works we know it works in
 pip. Downside: we'll be tracking something that is in active
 development and late-prototype/early-alpha stage. This is likely to
 be in pip 8.0 (7.0 should be out inside of a week).

I'm fine with that, we'll make it non-voting and just ask people to
actually look at results when it fails (once we've gotten it working
somewhat regularly). The throughput on requirements isn't so high that
we can't spend some time on manual inspection.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-08 Thread Jay Reslock
Interesting... it is definitely a service endpoint mismatch.

UI:

http://10.25.17.63:8004/v1/dac1095f448d476e9990046331415cf6

keystoneclient.services.list():

http://10.25.17.63:35357/v3/services/e0a18f2f4b574c75ba56823964a7d7eb

What can I do to make these match up correctly?

On Fri, May 8, 2015 at 4:22 PM Jay Reslock jresl...@gmail.com wrote:

 Hi Jamie,

 How do I see the service catalog that I am getting back?

 On Fri, May 8, 2015 at 3:25 AM Jamie Lennox jamielen...@redhat.com
 wrote:



 - Original Message -
  From: Jay Reslock jresl...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Friday, May 8, 2015 7:42:50 AM
  Subject: Re: [openstack-dev] [heat][python-heatclient] Does
 python-heatclient works with keystone sessions?
 
  Thanks very much to both of you for your help!
 
  I was able to get to another error now about EndpointNotFound. I will
  troubleshoot more and review the bugs mentioned by Sergey.
 
  -Jason

 It's nice to see people using sessions for this sort of script. Just as a
 pointer, EndpointNotFound generally means that it couldn't find a URL for
 the service you wanted in the service catalog. Have a look at the catalog
 you're getting and make sure the heat entry matches what it should; you may
 have to change the service_type or interface to match.
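
 As a concrete illustration of that advice, here is a minimal sketch
 (hypothetical credentials and endpoints) that builds a session, asks it
 which orchestration endpoint it would use, and then hands the same session
 to heatclient:

     from keystoneclient.auth.identity import v3
     from keystoneclient import session
     from heatclient import client as heat_client

     auth = v3.Password(auth_url='http://10.25.17.63:5000/v3',
                        username='demo', password='secret',
                        project_name='demo',
                        user_domain_name='default',
                        project_domain_name='default')
     sess = session.Session(auth=auth)

     # Which URL would this session pick for heat out of the catalog?
     print(sess.get_endpoint(service_type='orchestration',
                             interface='public'))

     # heatclient needs the API version as its first positional argument.
     heat = heat_client.Client('1', session=sess)

 The required version argument ('1') is consistent with the TypeError
 quoted further down the thread.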

  On Thu, May 7, 2015 at 5:34 PM Sergey Kraynev  skray...@mirantis.com 
  wrote:
 
 
 
  Hi Jay.
 
  AFAIK, it works, but we can have some minor issues. There are several
  patches on review to improve it:
 
 
 https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:improve-sessionclient,n,z
 
  Also, as I remember, we really did have the bug you mentioned, but the fix was
 merged.
  Please look:
  https://review.openstack.org/#/c/160431/1
  https://bugs.launchpad.net/python-heatclient/+bug/1427310
 
  Which version of the client do you use? Try to use code from master; it
  should work.
  Also one note: the best place for such questions is
  openst...@lists.openstack.org or http://ask.openstack.org/ . And of
 course
  channel #heat in IRC.
 
  Regards,
  Sergey.
 
  On 7 May 2015 at 23:43, Jay Reslock  jresl...@gmail.com  wrote:
 
 
 
  Hi,
  This is my first mail to the group. I hope I set the subject correctly
 and
  that this hasn't been asked already. I searched archives and did not see
  this question asked or answered previously.
 
  I am working on a client thing that uses the python-keystoneclient and
  python-heatclient api bindings to set up an authenticated session and
 then
  use that session to talk to the heat service. This doesn't work for
 heat but
  does work for other services such as nova and sahara. Is this because
  sessions aren't supported in the heatclient api yet?
 
  sample code:
 
  https://gist.github.com/jreslock/a525abdcce53ca0492a7
 
  I'm using fabric to define tasks so I can call them via another tool.
 When I
  run the task I get:
 
  TypeError: Client() takes at least 1 argument (0 given)
 
  The documentation does not say anything about being able to pass
 session to
  the heatclient but the others seem to work. I just want to know if this
 is
  intended/expected behavior or not.
 
  -Jason
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo] disabling pypy unit test jobs for oslo

2015-05-08 Thread Doug Hellmann
The jobs running unit tests under pypy are failing for several Oslo
libraries for reasons that have nothing to do with the libraries
themselves, as far as I can tell (they pass locally). I have proposed
a change to mark the jobs as non-voting [1] until someone can fix
them, but we need a volunteer to look at the failure and understand why
they fail.

Does anyone want to step up to do that? If we don't have a volunteer in
the next couple of weeks, I'll go ahead and remove the jobs so we can
use those test nodes for other jobs.

Doug

[1] https://review.openstack.org/181547

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

2015-05-08 Thread melanie witt
On May 8, 2015, at 10:15, John Garbutt j...@johngarbutt.com wrote:

 Melanie, welcome to nova-core  :)

Thank you everyone, for your generous support. I am truly humbled.

I am thrilled to join nova-core and excited to work with all of you. :)

-melanie (irc: melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal to configure Oslo libraries

2015-05-08 Thread Colleen Murphy
On Fri, May 8, 2015 at 12:50 PM, Mathieu Gagné mga...@iweb.com wrote:

 On 2015-05-07 4:19 PM, Emilien Macchi wrote:
 
  Proposals
  =
 
  #1 Creating puppet-oslo
  ... and having oslo::messaging::rabbitmq, oslo::messaging::qpid, ...,
  oslo::logging, etc.
  This module will be used only to configure actual Oslo libraries when we
  deploy OpenStack. To me, this solution is really consistent with how
  OpenStack works today and is scalable as soon as we contribute Oslo
  configuration changes in this module.
 
  #2 Using puppet-openstacklib
  ... and having openstacklib::oslo::messaging::(...)
  A good thing is our modules already use openstacklib.
  But openstacklib does not configure OpenStack now, it creates some
  common defines  classes that are consumed in other modules.
 

 I prefer #1 due to oslo configs being specific to OpenStack versions.

 The goal of openstacklib is to (hopefully) be OpenStack version agnostic
 and be used only for code common across all *our* modules.

 That's why I suggest going with solution #1, unless someone comes up with a
 solution to support multiple OpenStack versions in openstacklib without
 the use of stable branches.

puppet-openstacklib already has stable branches:
http://git.openstack.org/cgit/stackforge/puppet-openstacklib/refs/

I was not aware of any assumption that openstacklib would work for
different versions of our modules. I think this would be a difficult goal
to achieve. For example, the provider code in puppet-keystone is tightly
coupled with the code in puppet-openstacklib. If making puppet-openstacklib
version-agnostic is important (and I do think there would be value in it)
then maybe we should consider ripping out the provider backend code from
puppet-openstacklib and creating a puppet-openstackclient module.

Colleen


 --
 Mathieu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] VPNaaS IRC meeting next Tuesday May 12th

2015-05-08 Thread Sridhar Ramaswamy
Heads up. We are planning to host an IRC meeting this coming Tuesday, May 12th,
at 1600 UTC. The main agenda item is to discuss the Dynamic Multipoint VPN (DMVPN)
proposal for Liberty.

For more details refer to the vpn wiki,

https://wiki.openstack.org/wiki/Meetings/VPNaaS

- Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Meeting next week is back on

2015-05-08 Thread Kyle Mestery
Folks:

Doug has offered to run the weekly meeting in my absence next week [1].
I'd like everyone to focus on the Design Summit sessions we have scheduled,
and for folks to brainstorm, especially those who are moderating sessions.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] RPC Asynchronous Communication

2015-05-08 Thread Doug Hellmann
Excerpts from Joe Gordon's message of 2015-05-07 17:43:06 -0700:
 On May 7, 2015 2:37 AM, Sahid Orentino Ferdjaoui 
 sahid.ferdja...@redhat.com wrote:
 
  Hi,
 
  The primary point of this expected discussion around asynchronous
  communication is to optimize performance by reducing latency.
 
   For instance, the design used in Nova (and probably other projects)
   lets us perform asynchronous operations in two ways:
  
   1. When communicating between services
   2. When communicating with the database
  
   1 and 2 are close since they use the same API, but I prefer to keep a
   distinction here since the higher-level layer is not the same.
  
   From the Oslo Messaging point of view we currently have two methods to
   invoke an RPC:
  
     Cast and Call: the first one is non-blocking and will invoke an RPC
   without waiting for any response, while the second will block the
   process and wait for the response.
  
   The aim is to add a new method which will return, without blocking the
   process, an object (let's call it a Future) which will provide some
   basic methods to wait for and get a response at any time.
  
   The benefit for Nova will come at a higher level:
  
   1. When communicating between services it will not be necessary to
  block the process, and this free time can be used to execute some
  other computations.
 
  Isn't this what the use of green threads (and eventlet) is supposed to
  solve? Assuming my understanding is correct, and we can fix any issues
  without adding async oslo.messaging, then adding yet another async pattern
  seems like a bad thing.

Yes, this is what the various executors in the messaging library do,
including the eventlet-based executor we use by default.

Where are you seeing nova block on RPC calls?

Doug
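
For readers trying to picture the proposed call-returning-a-Future pattern,
here is a rough sketch built on top of today's blocking call(). This is a
hypothetical wrapper for illustration only, not an oslo.messaging API, and
the method name in the usage comment is made up:

    from concurrent import futures

    class AsyncRPCClient(object):
        """Wrap an oslo.messaging RPCClient with a call_async() helper."""

        def __init__(self, rpc_client, max_workers=4):
            self._client = rpc_client
            self._pool = futures.ThreadPoolExecutor(max_workers=max_workers)

        def call_async(self, ctxt, method, **kwargs):
            # Run the blocking call() in a worker thread and hand back a
            # Future; the caller keeps working and collects the result later.
            return self._pool.submit(self._client.call, ctxt, method, **kwargs)

    # usage:
    #   future = async_client.call_async(ctxt, 'some_method', arg=1)
    #   ... do other work ...
    #   result = future.result(timeout=30)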

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty mid-cycle meetup

2015-05-08 Thread Michael Still
I thought I should let people know that we've had 14 people sign up
for the mid-cycle so far.

Michael

On Fri, May 8, 2015 at 3:55 PM, Michael Still mi...@stillhq.com wrote:
 As discussed at the Nova meeting this morning, we'd like to gauge
 interest in a mid-cycle meetup for the Liberty release.

 To that end, I've created the following eventbrite event like we have
 had for previous meetups. If you sign up, you're expressing interest
 in the event and if we decide there's enough interest to go ahead we
 will email you and let you know its safe to book travel and that
 you're ticket is now a real thing.

 To save you a few clicks, the proposed details are 21 July to 23 July,
 at IBM in Rochester, MN.

 So, I'd appreciate it if people could take a look at:

 
 https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546

 Thanks,
 Michael

 PS: I haven't added this to the wiki list of sprints because it might
 not happen. When the decision is final, I'll add it to the wiki if we
 decide to go ahead.

 --
 Rackspace Australia



-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Robert Collins
On 8 May 2015 at 21:36, Andreas Jaeger a...@suse.com wrote:
 On 05/08/2015 10:12 AM, Andreas Jaeger wrote:

 On 05/08/2015 10:02 AM, Robert Collins wrote:

 I don't know if they are *intended* to be, but right now there is no
 set of versions that can be co-installed, of everything listed in
 global requirements.

 I don't have a full set of the conflicts (because I don't have a good
 automatic trace for 'why X is unresolvable' - it's nontrivial).

 However right now:
 openstack-doc-tools=0.23
 and
 oslo.config=1.11.0


 We haven't imported yet the new requirements for oslo.config and
 released a new version of openstack-doc-tools. I'll take care of this,


 Fixed now - openstack-doc-tools 0.27 got released,

Thanks - we're now coinstallable. Yay.

ceilometermiddleware 0.1.0 is OK dependency-wise; master, however, is
problematic. So when it releases, 0.1.0 will still be chosen :).

For folk interested, this is the current set the resolver finds:
aioeventlet(0.4), alembic(0.7.6), amqp(1.4.6), anyjson(0.3.3),
argcomplete(0.8.8), astroid(1.3.6), autobahn(0.10.4), babel(1.3),
backports.ssl-match-hostname(3.4.0.2), bashate(0.3.1),
beautifulsoup4(4.3.2), blockdiag(1.5.1), boto(2.38.0),
cassandra-driver(2.5.1), ceilometermiddleware(0.1.0),
certifi(2015.4.28), cliff(1.12.0), cliff-tablib(1.1), cmd2(0.6.8),
coinor.pulp(1.0.4), colorama(0.3.3), coverage(3.7.1), croniter(0.3.5),
ddt(1.0.0), debtcollector(0.4.0), decorator(3.4.2), demjson(2.2.2),
dib-utils(0.0.8), discover(0.4.0), diskimage-builder(0.1.44),
django(1.7.8), django-appconf(1.0.1), django-bootstrap-form(3.2),
django-compressor(1.5), django-nose(1.4),
django-openstack-auth(1.3.0), django-pyscss(2.0.2), dnspython(1.12.0),
doc8(0.3.4), docutils(0.9.1), dogpile.cache(0.5.6),
dogpile.core(0.4.1), ecdsa(0.13), elasticsearch(1.4.0),
eventlet(0.17.3), extras(0.0.3), falcon(0.1.10),
feedparser(5.2.0.post1), fixtures(1.2.0), flake8(2.2.4),
flask(0.10.1), funcparserlib(0.3.6), futures(3.0.1), gabbi(0.99.1),
glance-store(0.4.0), greenlet(0.4.6), hacking(0.10.1), happybase(0.9),
hgtools(6.3), httplib2(0.9.1), httpretty(0.8.6), ipaddr(2.1.11),
iso8601(0.1.10), itsdangerous(0.24), jinja2(2.7.3), jsonpatch(1.11),
jsonpath-rw(1.4.0), jsonpointer(1.9), jsonrpclib(0.1.3),
jsonschema(2.4.0), kafka-python(0.9.3), kazoo(2.0), kerberos(1.1.1),
keyring(5.3), keystonemiddleware(1.6.1), kombu(3.0.26), ldappool(1.0),
libvirt-python(1.2.15), linecache2(1.0.0), logilab-common(0.63.2),
logutils(0.3.3), lxml(3.4.4), mako(1.0.1), markupsafe(0.23),
mccabe(0.2.1), mock(1.0.1), mox(0.5.3), mox3(0.7.0),
msgpack-python(0.4.6), mysql-python(1.2.5), netifaces(0.10.4),
networkx(1.9.1), nodeenv(0.13.1), nose(1.3.6), nose-exclude(0.2.0),
nosehtmloutput(0.0.5), nosexcover(1.0.10), numpy(1.9.2),
oauthlib(0.7.2), openstack-doc-tools(0.27.0),
openstack.nose-plugin(0.11), openstackdocstheme(1.0.8),
ordereddict(1.1), os-apply-config(0.1.30), os-client-config(0.8.2),
os-cloud-config(0.2.6), os-collect-config(0.1.33),
os-net-config(0.1.3), os-refresh-config(0.1.10),
oslo.concurrency(1.9.0), oslo.context(0.3.0), oslo.db(1.9.0),
oslo.i18n(1.6.0), oslo.log(1.1.0), oslo.messaging(1.10.0),
oslo.middleware(1.2.0), oslo.policy(0.4.0), oslo.rootwrap(1.7.0),
oslo.serialization(1.5.0), oslo.utils(1.5.0),
oslo.versionedobjects(0.2.0), oslo.vmware(0.12.0), oslosphinx(2.5.0),
oslotest(1.6.0), osprofiler(0.3.0), paramiko(1.15.2), passlib(1.6.2),
paste(2.0.1), pastedeploy(1.5.2), pathlib(1.0.1), pecan(0.8.3),
pep8(1.5.7), pexpect(3.2), pillow(2.8.1), pint(0.6), pip(6.1.1),
ply(3.6), posix-ipc(1.0.0), prettytable(0.7.2), proboscis(1.2.6.0),
psutil(1.2.1), psycopg2(2.6), pulp(1.5.3), pyasn1-modules(0.0.5),
pycadf(0.9.0), pycrypto(2.6.1), pyeclib(1.0.7), pyflakes(0.8.1),
pyghmi(0.7.1), pygments(2.0.2), pykmip(0.3.1), pylint(1.4.1),
pymemcache(1.2.9), pymongo(2.8), pymysql(0.6.6), pyngus(1.2.0),
pyparsing(2.0.3), pysaml2(2.4.0), pyscss(1.3.4), pysendfile(2.0.1),
pysnmp(4.2.5), pysqlite(2.6.3), pystache(0.5.4),
python-barbicanclient(3.1.1), python-ceilometerclient(1.2.0),
python-cinderclient(1.2.1), python-dateutil(2.4.2),
python-designateclient(1.2.0), python-glanceclient(0.18.0),
python-heatclient(0.5.0), python-ironicclient(0.6.0),
python-keystoneclient(1.4.0), python-ldap(2.4.19),
python-marconiclient(0.0.2), python-memcached(1.54)
python-mimeparse(0.1.4), python-neutronclient(2.5.0),
python-novaclient(2.24.1), python-openstackclient(1.2.0),
python-openstacksdk(0.4.1), python-saharaclient(0.9.0),
python-subunit(1.1.0), python-swiftclient(2.4.0),
python-troveclient(1.1.0), python-zaqarclient(0.1.0),
pytidylib6(0.2.2), pytz(2015.2), pyudev(0.16.1), pyyaml(3.11),
pyzmq(14.6.0), qpid-python(0.26), redis(2.10.3), repoze.lru(0.6),
repoze.who(2.2), requests(2.7.0), requests-aws(0.1.6),
requests-kerberos(0.7.0), requests-mock(0.6.0), retrying(1.3.3),
rfc3986(0.2.1), routes(2.1), rtslib-fb(2.1.51), selenium(2.45.0),
semantic-version(2.4.1), seqdiag(0.9.5), simplegeneric(0.8.1),
simplejson(3.6.5), singledispatch(3.4.0.3), 

Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6

2015-05-08 Thread Andrew Ruthven
On Wed, 2015-05-06 at 02:46 -0400, Mike Spreitzer wrote:
 While I am a Neutron operator, I am also a customer of a lower layer
 network provider.  That network provider will happily give me a
 few /64.  How do I serve IPv6 subnets to lots of my tenants?  In the
 bad old v4 days this would be easy: a tenant puts all his stuff on his
 private networks and NATs (e.g., floating IP) his edge servers onto a
 public network --- no need to align tenant private subnets with public
 subnets.  But with no NAT for v6, there is no public/private
 distinction --- I can only give out the public v6 subnets that I am
 given.  Yes, NAT is bad.  But not being able to get your job done is
 worse. 

I would suggest that you talk to your network provider, or apply to your
local RIR to obtain Provider Independent address space. It should be
relatively trivial to obtain a significant amount of IPv6 address space.
Trying to make this work with only a few /64s is going to lead to a
world of pain.
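
For a sense of scale, the arithmetic works out as follows (Python):

    >>> 2 ** (64 - 48)   # /64 subnets available from a single /48
    65536
    >>> 2 ** (64 - 32)   # /64 subnets available from a /32 allocation
    4294967296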

Cheers,
Andrew 

-- 
Andrew Ruthven, Wellington, New Zealand
and...@etc.gen.nz | linux.conf.au 2015 
  New Zealand's only Cloud:   |  BeAwesome in Auckland, NZ
https://catalyst.net.nz/cloud | http://lca2015.linux.org.au



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] who is the ptl of trove?

2015-05-08 Thread Jeremy Stanley
On 2015-05-08 18:54:25 +0800 (+0800), Li Tianqing wrote:
[...]
 I argue that why we do not do this in upstream. For that most
 production do this.

Where? Do you mean recording these recommendations in deployment
documentation? Or setting it up that way in our integration tests?

 And if you do this you will find that there are many work need do.
 The community applies the laziest implementation.
[...]

It sounds like the community (in this case the operators deploying
and running Trove in production on large scales at various OpenStack
Foundation member companies) have this working. Or by the
community do you mean something else entirely? Perhaps DevStack?
That's not _meant_ to be a production deployment, just a means of
getting the service started as simply as possible so as to be able
to test patches for it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-08 Thread Bharat Kumar

GlusterFS CI job is still failing with the same issue.

I gave couple of rechecks on [1], after 
https://review.openstack.org/#/c/181288/ patch got merged.


But the GlusterFS CI job is still failing with the error below [2]:
ObjectDereferencedError: Can't emit change event for attribute 
'Volume.provider_location' - parent object of type Volume has been 
garbage collected.


I also found the same behaviour with the NetApp CI.


[1] https://review.openstack.org/#/c/165424/
[2] 
http://logs.openstack.org/24/165424/6/check/check-tempest-dsvm-full-glusterfs-nv/f386477/logs/screen-c-vol.txt.gz



On 05/08/2015 10:21 AM, Joshua Harlow wrote:
Alright, it was as I had a hunch for, a small bug found in the new 
algorithm to make the storage layer 
copy-original,mutate-copy,save-copy,update-original (vs 
update-original,save-original) more reliable.


https://bugs.launchpad.net/taskflow/+bug/1452978 opened and a one line 
fix made @ https://review.openstack.org/#/c/181288/ to stop trying to 
copy task results (which was activating logic that must have caused the 
reference to drop out of existence and therefore the issue noted below).


Will get that released in 0.10.1 once it flushes through the pipeline.

Thanks Alex for helping double-check; if others want to check too, 
that'd be nice, to make sure that's the root cause (overzealous usage 
of copy.copy, ha).


Overall I'd still *highly* recommend that the following still happen:

 One way to get around whatever the issue is would be to change the
 drivers to not update the object directly as it is not needed. But
 this should not fail. Perhaps a more proper fix is for the volume
 manager to not pass around sqlalchemy objects.

But that can be a later tweak that cinder does; using any taskflow 
engine that isn't the greenthreaded/threaded/serial engine will 
require results to be serializable, and therefore copyable, so that 
those results can go across IPC or MQ/other boundaries. Sqlalchemy 
objects won't fit either of these cases (obviously).
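
As a rough illustration of that suggestion (hypothetical task names, not the
actual cinder flow), the idea is to pass only primitive values through the
flow's storage and re-fetch the DB object inside the task that needs it:

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class UpdateProviderLocation(task.Task):
        def execute(self, volume_id, provider_location):
            # Re-load the volume from the DB here instead of carrying a
            # live SQLAlchemy object across task (and engine) boundaries,
            # e.g. db.volume_update(ctxt, volume_id,
            #                       {'provider_location': provider_location})
            pass

    flow = linear_flow.Flow('create-volume').add(UpdateProviderLocation())
    engines.run(flow, store={'volume_id': 'some-uuid',
                             'provider_location': 'host:/export/vol'})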


-Josh

Joshua Harlow wrote:

Are we sure this is taskflow? I'm wondering since those errors are more
from task code (which is in cinder) and the following seems to be a
general garbage collection issue (not connected to taskflow?):

'Exception during message handling: Can't emit change event for
attribute 'Volume.provider_location' - parent object of type Volume
has been garbage collected.'''

Or:

'''2015-05-07 22:42:51.142 17040 TRACE oslo_messaging.rpc.dispatcher
ObjectDereferencedError: Can't emit change event for attribute
'Volume.provider_location' - parent object of type Volume has been
garbage collected.'''

Alex Meade wrote:

So it seems that this will break a number of drivers, I see that
glusterfs does the same thing.

On Thu, May 7, 2015 at 10:29 PM, Alex Meade mr.alex.me...@gmail.com
mailto:mr.alex.me...@gmail.com wrote:

It appears that the release of taskflow 0.10.0 exposed an issue in
the NetApp NFS drivers. Something changed that caused the sqlalchemy
Volume object to be garbage collected even though it is passed into
create_volume()

An example error can be found in the c-vol logs here:

http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/ 




One way to get around whatever the issue is would be to change the
drivers to not update the object directly as it is not needed. But
this should not fail. Perhaps a more proper fix is for the volume
manager to not pass around sqlalchemy objects.


+1



Something changed in taskflow, however, and we should just
understand if that has other impact.


I'd like to understand that also: the only one commit that touched this
stuff is https://github.com/openstack/taskflow/commit/227cf52 (which
basically ensured that a storage object copy is modified, then saved,
then the local object is updated vs updating the local object, and then
saving, which has problems/inconsistencies if the save fails).



-Alex


__ 



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Warm Regards,
Bharat Kumar Kobagana
Software Engineer
OpenStack Storage – RedHat India
Mobile - +91 9949278005


Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp NFS drivers

2015-05-08 Thread Kerr, Andrew
The problem is in the version of taskflow that is downloaded from pypi by 
devstack.  You will need to wait until a new version (newer than 0.10.0) is available [1]

[1] https://pypi.python.org/pypi/taskflow/

Andrew Kerr
OpenStack QA
Cloud Solutions Group
NetApp

From: Bharat Kumar bharat.kobag...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, May 8, 2015 at 7:37 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Taskflow 0.10.0 incompatible with NetApp 
NFS drivers

The GlusterFS CI job is still failing with the same issue.

I gave a couple of rechecks on [1] after the https://review.openstack.org/#/c/181288/
patch got merged.

But the GlusterFS CI job is still failing with the below error [2]:
ObjectDereferencedError: Can't emit change event for attribute 
'Volume.provider_location' - parent object of type Volume has been garbage 
collected.

I found the same behaviour with the NetApp CI as well.


[1] https://review.openstack.org/#/c/165424/
[2] 
http://logs.openstack.org/24/165424/6/check/check-tempest-dsvm-full-glusterfs-nv/f386477/logs/screen-c-vol.txt.gz


On 05/08/2015 10:21 AM, Joshua Harlow wrote:
Alright, it was as I had a hunch it would be: a small bug in the new algorithm
that makes the storage layer do copy-original, mutate-copy, save-copy,
update-original (vs. update-original, save-original) more reliably.

https://bugs.launchpad.net/taskflow/+bug/1452978 opened and a one-line fix made 
@ https://review.openstack.org/#/c/181288/ to stop trying to copy task results 
(which was activating logic that must have caused the reference to drop out of 
existence and therefore the issue noted below).

Will get that released in 0.10.1 once it flushes through the pipeline.

Thanks Alex for helping double check; if others want to check too, that'd be 
nice, so we can make sure that's the root cause (overzealous usage of copy.copy, ha).

Overall I'd still *highly* recommend that the following still happen:

 One way to get around whatever the issue is would be to change the
 drivers to not update the object directly as it is not needed. But
 this should not fail. Perhaps a more proper fix is for the volume
 manager to not pass around sqlalchemy objects.

But that can be a later tweak that cinder does; using any taskflow engine that 
isn't the greenthreaded/threaded/serial engine will require results to be 
serializable, and therefore copyable, so that those results can go across IPC 
or MQ/other boundaries. Sqlalchemy objects won't fit either of these cases 
(obviously).
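
For illustration, a rough sketch of that suggested direction (made-up
names, not the actual cinder volume manager or NetApp driver code): hand
the flow plain, serializable data (ids and dicts) and push updates back
through the db API, instead of passing the SQLAlchemy model itself
between tasks:

    class FakeDbApi(object):
        # Stand-in for cinder's db API; real code would call the actual
        # volume_get / volume_update functions.
        def __init__(self):
            self.volumes = {'vol-1': {'id': 'vol-1', 'size': 10}}

        def volume_get(self, context, volume_id):
            return self.volumes[volume_id]

        def volume_update(self, context, volume_id, values):
            self.volumes[volume_id].update(values)

    def create_volume_on_backend(volume_data):
        # Hypothetical backend call; returns only serializable values.
        return {'provider_location': 'nfs-host:/exports/%s' % volume_data['id']}

    def run_create_volume(db_api, context, volume_id):
        volume_ref = db_api.volume_get(context, volume_id)
        # Reduce to primitives that can be copied and shipped over IPC/MQ.
        volume_data = {'id': volume_ref['id'], 'size': volume_ref['size']}
        model_update = create_volume_on_backend(volume_data)
        if model_update:
            # Persist via the db API instead of mutating the model object.
            db_api.volume_update(context, volume_id, model_update)

    run_create_volume(FakeDbApi(), context=None, volume_id='vol-1')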

-Josh

Joshua Harlow wrote:
Are we sure this is taskflow? I'm wondering since those errors are more
from task code (which is in cinder) and the following seems to be a
general garbage collection issue (not connected to taskflow?):

'Exception during message handling: Can't emit change event for
attribute 'Volume.provider_location' - parent object of type Volume
has been garbage collected.'''

Or:

'''2015-05-07 22:42:51.142 17040 TRACE oslo_messaging.rpc.dispatcher
ObjectDereferencedError: Can't emit change event for attribute
'Volume.provider_location' - parent object of type Volume has been
garbage collected.'''

Alex Meade wrote:
So it seems that this will break a number of drivers, I see that
glusterfs does the same thing.

On Thu, May 7, 2015 at 10:29 PM, Alex Meade mr.alex.me...@gmail.com wrote:

It appears that the release of taskflow 0.10.0 exposed an issue in
the NetApp NFS drivers. Something changed that caused the sqlalchemy
Volume object to be garbage collected even though it is passed into
create_volume()

An example error can be found in the c-vol logs here:

http://dcf901611175aa43f968-c54047c910227e27e1d6f03bb1796fd7.r95.cf5.rackcdn.com/57/181157/1/check/cinder-cDOT-NFS/0473c54/


One way to get around whatever the issue is would be to change the
drivers to not update the object directly as it is not needed. But
this should not fail. Perhaps a more proper fix is for the volume
manager to not pass around sqlalchemy objects.

+1


Something changed in taskflow, however, and we should just
understand if that has other impact.

I'd like to understand that also: the only commit that touched this
stuff is https://github.com/openstack/taskflow/commit/227cf52 (which
basically ensured that a storage object copy is modified, then saved,
and only then is the local object updated, vs. updating the local object
and then saving, which has problems/inconsistencies if the save fails).


-Alex


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

Re: [openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Dulko, Michal
Is there a blueprint or spec for that? Or is this currently just an open idea?

Can you explain what exactly such an idea makes easier in the versioning work?

From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, May 8, 2015 2:14 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] [cinder] API service in service group API

In the case of cinder, there is a proposal to add it in, since it makes some of 
the versioning work easier
On 8 May 2015 15:08, Dulko, Michal michal.du...@intel.com wrote:
Hi,

I wonder why nova-api or cinder-api aren't present in the service group API of each
project:

mdulko:devstack/ (master) $ cinder service-list
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|      Binary      |              Host             | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
| cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:49.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+

Are there any technical limitations to including API services there? The use case
is that when a service dies during request processing, it leaves some garbage in
the DB and quotas. This could be cleaned up by another instance of the service.
For that, the aforementioned instance would need to know whether the service that
was processing the request is down.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Please, stick to [Fuel][Plugins] tag for plugins

2015-05-08 Thread Irina Povolotskaya
Hi to all,

Please use the *[Fuel][Plugins]* tag for questions/announcements on Fuel
Plugins.

Applying only [Fuel] when asking about Fuel Plugins leads to mixing up Fuel
and plugin-related issues.

Let's split these large topics for better communication and for quicker
replies.

Note that this recommendation is present on the Fuel Plugins wiki page [1].

Thanks!

[1] https://wiki.openstack.org/wiki/Fuel/Plugins#Channels_of_communication

-- 
Best regards,

Irina

PI Team Technical Writer
skype: ira_live
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Dulko, Michal
Hi,

I wonder why nova-api or cinder-api aren't present in the service group API of each
project:

mdulko:devstack/ (master) $ cinder service-list
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|      Binary      |              Host             | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
|  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
| cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:49.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
+------------------+-------------------------------+------+---------+-------+------------------------+-----------------+

Are there any technical limitations to including API services there? The use case
is that when a service dies during request processing, it leaves some garbage in
the DB and quotas. This could be cleaned up by another instance of the service.
For that, the aforementioned instance would need to know whether the service that
was processing the request is down.
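
For illustration, a rough sketch of the kind of liveness check such a
cleanup instance would need (made-up names and threshold, not the actual
nova/cinder servicegroup code), along the lines of what the heartbeat
updated_at column already provides for the other binaries:

    from datetime import datetime, timedelta

    SERVICE_DOWN_TIME = timedelta(seconds=60)   # assumed threshold

    def service_is_up(service_row, now=None):
        # service_row is assumed to be a dict-like record with 'updated_at'.
        now = now or datetime.utcnow()
        last_seen = service_row.get('updated_at') or service_row.get('created_at')
        return last_seen is not None and (now - last_seen) <= SERVICE_DOWN_TIME

    def find_dead_services(service_rows):
        # Services that stopped heart-beating are the ones whose half-finished
        # requests (DB rows, quota reservations) a healthy peer could clean up.
        return [s for s in service_rows if not service_is_up(s)]

    now = datetime.utcnow()
    rows = [
        {'binary': 'cinder-volume', 'updated_at': now - timedelta(seconds=5)},
        {'binary': 'cinder-api', 'updated_at': now - timedelta(seconds=300)},
    ]
    print([s['binary'] for s in find_dead_services(rows)])   # ['cinder-api']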

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Duncan Thomas
In the case of cinder, there is a proposal to add it in, since it makes
some of the versioning work easier
On 8 May 2015 15:08, Dulko, Michal michal.du...@intel.com wrote:

 Hi,

 I wonder why nova-api or cinder-api aren't present in the service group API
 of each project:

 mdulko:devstack/ (master) $ cinder service-list
 +------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
 |      Binary      |              Host             | Zone |  Status | State |       Updated_at       | Disabled Reason |
 +------------------+-------------------------------+------+---------+-------+------------------------+-----------------+
 |  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
 | cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 2015-05-08T11:58:49.00 |        -        |
 |  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
 |  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 2015-05-08T11:58:50.00 |        -        |
 +------------------+-------------------------------+------+---------+-------+------------------------+-----------------+

 Are there any technical limitations to including API services there? The use
 case is that when a service dies during request processing, it leaves some
 garbage in the DB and quotas. This could be cleaned up by another instance
 of the service. For that, the aforementioned instance would need to know
 whether the service that was processing the request is down.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Keystone] Rehashing the Pecan/Falcon/other WSGI debate

2015-05-08 Thread Flavio Percoco

On 07/05/15 19:19 -0500, Dolph Mathews wrote:

   We didn't pick Falcon because Kurt was Marconi's PTL and Falcon's
   maintainer. The main reasons it was picked were performance[0] and
   time (we didn't/don't have enough resources to even think of porting
   the API), and at this point I believe it's not even going to be
   considered anymore in the near future.


I'm just going to pipe up and say that's a terribly shallow reason for choosing
a web framework, and I think it's silly and embarrassing that there's not a
stronger community preference for more mature frameworks. I take that as a sign
that most of our developer community is coming from non-Python backgrounds,
which is fine, but this whole conversation has always felt like a plague
of Not-Invented-Here, which baffles me.


Not sure how to parse your email but, FWIW, the community did what was
necessary to promote Pecan and the team decided to stick with Falcon.

I don't believe performance and a good fit for your use case are shallow
reasons to pick a framework.

Most of the projects are using Pecan and it works very well for them
and I believe, as I mentioned in my previous email, it's the framework
projects should default to.

Flavio


   There were lots of discussions around this; there were POCs and team
   work. I think it's fair to say that the team didn't blindly *ignore*
   what was recommended as the community framework but picked what
   worked best for the service.

   [0] https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation



   pecan is a wsgi framework written by Dreamhost that eventually
   moved
   itself into stackforge to better enable collaboration with our
   community
   after we settled on it as the API for things moving forward.

   Since the decision that new REST apis should be written in Pecan,
   the
   following projects have adopted it:

   openstack:
   barbican
   ceilometer
   designate
   gnocchi
   ironic
   ironic-python-agent
   kite
   magnum
   storyboard
   tuskar

   stackforge:
   anchor
   blazar
   cerberus
   cloudkitty
   cue
   fuel-ostf
   fuel-provision
   graffiti
   libra
   magnetodb
   monasca-api
   mistral
   octavia
   poppy
   radar
   refstack
   solum
   storyboard
   surveil
   terracotta

   On the other hand, the following use falcon:

   stacktach-quincy
   zaqar



   To me this is a strong indicator that pecan will see more eyes and
   possibly be more open to improvement to meet the general need.


   +1


   That means that for all of the moaning and complaining, there is
   essentially one thing that uses it - the project that was started
   by the
   person who wrote it and has since quit.

   I'm sure it's not perfect - but the code is in stackforge - I'm
   sure we
   can improve it if there is something missing. OTOH - if we're going
   to
   go back down this road, I'd think it would be more useful to maybe
   look
   at flask or something else that has a large following in the python
   community at large to try to reduce the amount of special we are.



   +1


   Please, let's not go back down this road, not yet at least. :)




   But honestly - I think it matters almost not at all, which is why I
   keep
   telling people to just use pecan ... basically, the argument is not
   worth it.


   +1, go with Pecan if your requirements are not like Zaqar's.
   Contribute to Pecan and make it better.

   Flavio

   --
   @flaper87
   Flavio Percoco



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] global-requirements not co-installable

2015-05-08 Thread Sean Dague
On 05/08/2015 07:13 AM, Robert Collins wrote:
 On 8 May 2015 at 22:54, Sean Dague s...@dague.net wrote:
 I'm slightly confused how we got there, because we do try to install
 everything all at once in the test jobs -
 http://logs.openstack.org/83/181083/1/check/check-requirements-integration-dsvm/4effcf7/console.html#_2015-05-07_17_49_26_699

 And it seemed to work, you can find similar lines in previous changes as
 well. That was specifically added as a check for these kinds of issues.
 Is this a race in the resolution?
 
 What resolution :).
 
 So what happens with pip install -r
 /opt/stack/new/requirements/global-requirements.txt is that the
 constraints in that file are all immediately put into pip's state,
 including oslo.config>=1.11.0, and then all other constraints that
 reference oslo.config are simply ignored. This is 1b (and 2a) on
 https://github.com/pypa/pip/issues/988.
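
To make that behaviour concrete, a toy model (definitely not pip's actual
code; the '<1.10' transitive constraint below is invented) of 'first
specifier for a name wins' versus keeping every constraint:

    def first_wins(requirements):
        # Roughly the behaviour described above: once a name is in the
        # state, later specifiers for it are simply ignored.
        chosen = {}
        for name, spec in requirements:
            chosen.setdefault(name, spec)
        return chosen

    def keep_all(requirements):
        # What a real resolver has to honour: every specifier for a name.
        chosen = {}
        for name, spec in requirements:
            chosen.setdefault(name, set()).add(spec)
        return chosen

    reqs = [
        ('oslo.config', '>=1.11.0'),   # from global-requirements.txt
        ('oslo.config', '<1.10'),      # invented transitive constraint
    ]
    print(first_wins(reqs))   # {'oslo.config': '>=1.11.0'} -- conflict hidden
    print(keep_all(reqs))     # both specifiers kept -- conflict now visible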
 
 IOW we haven't been testing what we thought we've been testing.
 What we've been testing is that 'python setup.py install X' for X in
 global-requirements.txt works, which sadly doesn't tell us a lot at
 all.
 
 So, as I have a working (but unpolished) resolver, when I try to do
 the same thing, it chews away at the problem and concludes that no, it
 can't do it - because it's no longer ignoring the additional
 constraints.
 
 To get out of the hole, we might consider using pip-compile now as a
 warning job - if it can succeed we'll be able to be reasonably
 confident that pip itself will succeed once the resolver is merged.
 
 The resolver I have doesn't preserve the '1b' feature at all at this
 point, and we're going to need to find a way to separate out 'I want
 X' from 'I want X and I know better than you', which will let folk get
 into tasty tasty trouble (like we're in now).

Gotcha. So, yes, the subtleties of pip were lost here.

Instead of using another tool, could we make a version of this job pull
in and use the prerelease version of your pip code? Then we can run the
same tests and fix them in a non-voting job against this code that has
not yet been released.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC

2015-05-08 Thread Elizabeth K. Joseph
On Wed, May 6, 2015 at 11:49 AM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 On Tue, Apr 14, 2015 at 2:57 PM, James E. Blair cor...@inaugust.com wrote:
 On Saturday, May 9 at 16:00 UTC Gerrit will be unavailable for about 4
 hours while we upgrade to the latest release of Gerrit: version 2.10.

 We are currently running Gerrit 2.8 so this is an upgrade across two
 major releases of Gerrit.  The release notes for both versions are here:

   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.10.html
   
 https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.9.html

 If you have any questions about the upgrade, please feel free to reply
 here or contact us in #openstack-infra on Freenode.

 Just a quick reminder that this upgrade is coming up this Saturday,
 May 9th, starting at 16:00 UTC.

 During this upgrade we anticipate that Gerrit will be unavailable for
 about 4 hours.

And just one more reminder, the Gerrit downtime for this upgrade will
begin in just over 16 hours.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Backing up and restoring lost secrets

2015-05-08 Thread Christopher N Solis
Hello.

I'm wondering what happens when barbican fails or crashes.
What would need to be backed up in order to restore barbican back to a
previously functional state?

Regards,

  CHRIS SOLIS
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev