Re: [openstack-dev] [stacktach] [oslo] stacktach - kombu -> pika ??

2015-03-12 Thread gordon chung
 We're going to be adding support for consuming from and writing to Kafka as 
 well and will likely use a kafka-specific library for that too. 
is the plan to add this support to oslo.messaging? i believe there is interest 
from the oslo.messaging team in supporting Kafka, and in addition to adding a 
Kafka publisher in ceilometer, there is supposed to be a bp to add support to 
oslo.messaging.
cheers,
gord


Re: [openstack-dev] Avoiding regression in project governance

2015-03-12 Thread Thierry Carrez
Ed Leafe wrote:
 [...]
 So what is production-ready? And how would you trust any such
 designation? I think that it should be the responsibility of groups
 outside of OpenStack development to make that call.

We discussed that particular point at the Ops Summit: how do we describe
and objectively define maturity? The developers (or the members of the
TC) are obviously not the best placed to make such a call; the
information needs to come from downstream users.

We concluded that maturity is difficult to define in a single tag, and
that everyone will have a different definition for it. What we need to
do is define and apply a number of tags that help downstream consumers
assess the degree of maturity of a project, for their own definition of
maturity.

For example, we discussed the possibility of an "operationability" (ew)
tag that would be applied to projects which demonstrate some operational
maturity (like not requiring you to manually push rows into a database to
run them). We discussed "availability" (packaging in open source
distributions) and "popularity" (as reported in surveys and/or public
cloud deployments).

A working group will be put in place with operators to further refine
these concepts.

-- 
Thierry Carrez (ttx)





[openstack-dev] [Murano] PTL Candidacy

2015-03-12 Thread Serg Melikyan
Hi folks,

I'd like to announce my candidacy as PTL for Murano [1].

I have been handling PTL responsibilities in this release cycle so far,
after Ruslan Kamaldinov (who handled them in the Juno cycle) handed
them over to me at the OpenStack Summit in Paris. I have been working on
Murano since its kick-off two years ago [2][3].

As PTL of Murano I'll continue my work on making Murano better with
each day and on making it the project of choice for building your own
Application Catalogs & Marketplaces for private & public clouds on
OpenStack. I will focus on building a great environment for contributors
and on relationships built around the project itself, not only around the
features needed by contributors.

[1] http://wiki.openstack.org/wiki/Murano
[2] http://stackalytics.com/report/contribution/murano/90
[3] http://stackalytics.com/?release=kilo&metric=commits&project_type=stackforge&module=murano-group

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-12 Thread Livnat Peer
Congratulations, well deserved !


On 03/11/2015 07:54 PM, Kyle Mestery wrote:
 On Wed, Mar 4, 2015 at 1:42 PM, Kyle Mestery mest...@mestery.com wrote:
 
 I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron
 as evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, and
 he's been doing a great job keeping Neutron current there. He's
 already a critical reviewer for all the Neutron repositories. In
 addition, he's a stable maintainer. Ihar makes himself available in
 IRC, and has done a great job working with the entire Neutron team.
 His reviews are thoughtful and he really takes time to work with
 code submitters to ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core
 reviewers are especially beholden to this responsibility. I'd also
 like to point out and reinforce that +1/-1 reviews are super useful,
 and I encourage everyone to continue reviewing code across Neutron
 as well as the other OpenStack projects, regardless of your status
 as a core reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add
 Ihar to the core reviewer team.
 
 Thanks!
 Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90
 
 
 It's been a week, and Ihar has received 11 +1 votes. I'd like to welcome
 Ihar as the newest Neutron Core Reviewer!
 
 Thanks,
 Kyle
 
 
 




Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Kevin Benton
The biggest disconnect in the model seems to be that Neutron assumes
you want self service networking. Most of these deploys don't. Or even
more importantly, they live in an organization where that is never
going to be an option.

Neutron provider networks is close, except it doesn't provide for
floating IP / NAT.

Why don't shared networks work in these cases? The workflow here would be
that there is an admin tenant responsible for creating the networks and
setting up the neutron router and floating IP pools, etc. Then tenants
would attach their VMs to the shared networks.

On Wed, Mar 11, 2015 at 5:59 AM, Sean Dague s...@dague.net wrote:

 The last couple of days I was at the Operators Meetup acting as Nova
 rep for the meeting. All the sessions were quite nicely recorded to
 etherpads here - https://etherpad.openstack.org/p/PHL-ops-meetup

 There was both a specific Nova session -
 https://etherpad.openstack.org/p/PHL-ops-nova-feedback as well as a
 bunch of relevant pieces of information in other sessions.

 This is an attempt for some summary here, anyone else that was in
 attendance please feel free to correct if I'm interpreting something
 incorrectly. There was a lot of content there, so this is in no way
 comprehensive list, just the highlights that I think make the most
 sense for the Nova team.

 =
  Nova Network -> Neutron
 =

 This remains listed as the #1 issue from the Operator Community on
 their burning issues list
 (https://etherpad.openstack.org/p/PHL-ops-burning-issues L18). During
 the tags conversation we straw polled the audience
 (https://etherpad.openstack.org/p/PHL-ops-tags L45) and about 75% of
  attendees were over on neutron already. However, those on Nova Network
  were disproportionately the largest clusters and longest-standing
  OpenStack users.

 Of those on nova-network about 1/2 had no interest in being on
 Neutron (https://etherpad.openstack.org/p/PHL-ops-nova-feedback
 L24). Some of the primary reasons were the following:

 - Complexity concerns - neutron has a lot more moving parts
 - Performance concerns - nova multihost means there is very little
   between guests and the fabric, which is really important for the HPC
   workload use case for OpenStack.
 - Don't want OVS - ovs adds additional complexity, and performance
   concerns. Many large sites are moving off ovs back to linux bridge
   with neutron because they are hitting OVS scaling limits (especially
   if on UDP) - (https://etherpad.openstack.org/p/PHL-ops-OVS L142)

 The biggest disconnect in the model seems to be that Neutron assumes
 you want self service networking. Most of these deploys don't. Or even
 more importantly, they live in an organization where that is never
 going to be an option.

 Neutron provider networks is close, except it doesn't provide for
 floating IP / NAT.

  Going forward: I think the gap analysis probably needs to be revisited
  with some of the vocal large deployers. I think we assumed the
  functional parity gap was closed with DVR, but it's not clear that in its
  current form it actually meets the n-net multihost users' needs.

 ===
  EC2 going forward
 ===

  Having a sustainable EC2 is of high interest to the operator
 community. Many large deploys have some users that were using AWS
 prior to using OpenStack, or currently are using both. They have
 preexisting tooling for that.

 There didn't seem to be any objection to the approach of an external
 proxy service for this function -
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L111). Mostly
 the question is timing, and the fact that no one has validated the
 stackforge project. The fact that we landed everything people need to
 run this in Kilo is good, as these production deploys will be able to
 test it for their users when they upgrade.

 
  Burning Nova Features/Bugs
 

 Hierarchical Projects Quotas
 

 Hugely desired feature by the operator community
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L116). Missed
 Kilo. This made everyone sad.

  Action: we should queue this up as an early Liberty priority item.

 Out of sync Quotas
 --

 https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63

  The quotas code is quite racy (this is a known issue if you look at
 the bug tracker). It was actually marked as a top soft spot during
 last fall's bug triage -

 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

 There is an operator proposed spec for an approach here -
 https://review.openstack.org/#/c/161782/

 Action: we should make a solution here a top priority for enhanced
 testing and fixing in Liberty. Addressing this would remove a lot of
 pain from ops.

 Reporting on Scheduler Fails
 

 Apparently, some time recently, we stopped logging scheduler fails
 above DEBUG, and that 

Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-12 Thread Tidwell, Ryan
I agree with dropping support for the wildcards.  It can always be revisited 
later. I agree that being locked into backward compatibility with a design that 
we really haven't thought through is a good thing to avoid.  Most important 
(to me anyway) is that this will help in getting subnet allocation completed 
for Kilo. We can iterate on it later, but at least the base functionality will 
be there.

-Ryan

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Tuesday, March 10, 2015 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][neutron] Best API for generating subnets 
from pool

On Tue, Mar 10, 2015 at 12:24 PM, Salvatore Orlando sorla...@nicira.com wrote:
 I guess that frustration has now become part of the norm for OpenStack.
 It is not the first time I have frustrated people by asking to reconsider
 decisions approved in specifications.

I'm okay revisiting decisions.  It is just the timing that is difficult.

 This is probably bad behaviour on my side. Anyway, I'm not suggesting to go
 back to the drawing board, merely trying to get larger feedback, especially
 since that patch should always have had the ApiImpact flag.

It did have the ApiImpact flag since PS1 [1].

 Needless to say, I'm happy to proceed with things as they've been agreed.

I'm happy to discuss and I value your input very highly.  I was just
hoping that it had come at a better time to react.

 There is nothing intrinsically wrong with it - in the sense that it does not
 impact the functional behaviour of the system.
 My comment is about RESTful API guidelines. What we pass to/from the API
 endpoint is a resource, in this case the subnet being created.
 You expect gateway_ip to be always one thing - a gateway address, whereas
 with the wildcarded design it could be an address or an incremental counter
 within a range, but with the counter being valid only in request objects.
 Differences in entities between requests and responses are however fairly
 common in RESTful APIs, so if the wildcards satisfy a concrete and valid
 use case I will stop complaining, but I'm not sure I see any use case for
 wildcarded gateways and allocation pools.

Let's drop the use case and the wildcards as we've discussed.

 Also, there might also be backward-compatible ways of switching from one
 approach to another, in which case I'm happy to keep things as they are and
 relieve Ryan from yet another worry.

I think dropping the use case for now allows us the most freedom and
doesn't commit us to supporting backward compatibility for a decision
that may end up proving to be a mistake in API design.

Carl



Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-12 Thread Sean Dague
On 03/10/2015 08:52 PM, Joe Gordon wrote:
 
 On Tue, Mar 10, 2015 at 5:09 PM, Alan Pevec ape...@gmail.com wrote:
 
  The wheel has been removed from PyPI and anyone installing testtools
  1.7.0 now will install from source which works fine.
 
 On stable/icehouse devstack fails[*] with
 pkg_resources.VersionConflict: (unittest2 0.5.1
 (/usr/lib/python2.7/dist-packages),
 Requirement.parse('unittest2>=1.0.0'))
 when installing testtools 1.7.0
 
 unittest2 is not capped in stable/icehouse requirements, why is it not
 upgraded by pip?
 
 
 Tracking bug: https://bugs.launchpad.net/devstack/+bug/1430592

So, we can work around this in devstack, but it seems like there is a
more fundamental bug here: the project setup isn't following dependencies.

Is this just another 'pip is drunk' issue, in that it's not actually
satisfying requirements?

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [stacktach] [oslo] stacktach - kombu -> pika ??

2015-03-12 Thread Joshua Harlow

Hi all,

I saw the following on 
https://etherpad.openstack.org/p/PHL-ops-rabbit-queue and was wondering 
if there was more explanation of why?


The StackTach team is switching from Kombu to Pika (at the 
recommendation of core rabbitmq devs). Hopefully oslo-messaging will do 
the same.


I'm wondering why/what?

Pika seems to be less supported, has less support for things other than 
rabbitmq, and seems less developed (it lacks python 3.3 support apparently).
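
For context, a minimal publish with each library looks roughly like the sketch
below (assuming a local RabbitMQ with default guest credentials; the 'test'
queue name is purely illustrative):

```
# Kombu: transport-agnostic, with higher-level helpers such as SimpleQueue.
from kombu import Connection

with Connection('amqp://guest:guest@localhost//') as conn:
    queue = conn.SimpleQueue('test')
    queue.put({'hello': 'world'})
    queue.close()

# Pika: AMQP/RabbitMQ-specific, closer to the raw protocol.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='test')
channel.basic_publish(exchange='', routing_key='test', body='hello world')
connection.close()
```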


What are the details on this idea listed there?

Any stacktach folks got any more details?

-Josh



Re: [openstack-dev] Avoiding regression in project governance

2015-03-12 Thread Kyle Mestery
On Tue, Mar 10, 2015 at 11:46 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Mar 10, 2015, at 12:29 PM, Russell Bryant wrote:
  The TC is in the middle of implementing a fairly significant change in
  project governance.  You can find an overview from Thierry on the
  OpenStack blog [1].
 
  Part of the change is to recognize more projects as being part of the
  OpenStack community.  Another critical part was replacing the integrated
  release with a set of tags.  A project would be given a tag if it meets
  some defined set of criteria.
 
  I feel that we're at a very vulnerable part of this transition.  We've
  abolished the incubation process and integrated release.  We've
  established a fairly low bar for new projects [2].  However, we have not
  yet approved *any* tags other than the one that reflects which projects
  are included in the final integrated release (Kilo) [3].  Despite the
  previously discussed challenges with the integrated release,
  it did at least mean that a project has met a very useful set of
  criteria [4].
 
  We now have several new project proposals.  However, I propose not
  approving any new projects until we have a tagging system that is at
  least far enough along to represent the set of criteria that we used to
  apply to all OpenStack projects (with exception for ones we want to
  consciously drop).  Otherwise, I think it's a significant setback to our
  project governance as we have yet to provide any useful way to navigate
  the growing set of projects.
 
  The resulting set of tags doesn't have to be focused on replicating our
  previous set of criteria.  The focus must be on what information is
  needed by various groups of consumers and tags are a mechanism to
  implement that.  In any case, we're far from that point because today we
  have nothing.
 
  I can't think of any good reason to rush into approving projects in the
  short term.  If we're not able to work out this rich tagging system in a
  reasonable amount of time, then maybe the whole approach is broken and
  we need to rethink it.

 I think we made it pretty clear that we would be taking approvals
 slowly, and that we might not approve any new projects before the
 summit, precisely for the reasons you state here. I have found the
 submitted proposals

 Right, but we want it to be clear we're not approving new project
proposals at this point so everyone is on the same page.


 
  Thanks,
 
  [1]
  http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
  [2]
  http://governance.openstack.org/reference/new-projects-requirements.html
  [3] http://governance.openstack.org/reference/tags/index.html
  [4]
 
 http://governance.openstack.org/reference/incubation-integration-requirements.html
 
  --
  Russell Bryant
 
 


Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-12 Thread ELISHA, Moshe (Moshe)
I am familiar with the removal policies. Thanks!

Our use case for parameters on scale out is as follows:

Every server has a unique index that identifies it.
The first server has an index of 1, the second has an index of 2, etc.
The index of each server must exist prior to the configuration phase of the 
server.

This use case is an outcome of a virtualization process for an NFV application.
In the past this application was scaled manually by adding physical cards into 
slots - the index is the slot number.
In order to allow a smooth and fast transition of the app into the cloud - the 
requirement is to stay with the same architecture.


The current suggested solution is as follows:

The HOT will be created with an AutoScalingGroup and two ScalingPolicies for 
scale out and scale in.
Like many other NFV applications, this application also has a Life Cycle 
Manager of its own that monitors and decides when to scale.
When scale is needed, the LCM will invoke the alarm_url exposed by these 
ScalingPolicies while providing the server index for the newly created server.

The index is just one example of a parameter needed at scale out - there can be 
others.
Much more design is needed when the desired_capacity > 1 or the 
scaling_adjustment > 1 or is a percentage, but let's first agree that the use case 
is OK.


-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com] 
Sent: Wednesday, March 11, 2015 12:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

On Wed, Mar 11, 2015 at 09:01:04AM +, ELISHA, Moshe (Moshe) wrote:
Hey,
 
 
 
Can someone please share the current status of the Autoscaling signals to
allow parameter passing for UserData blueprint -
 https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.

This is quite old, and subsequent discussions have happened which indicate a 
slightly different approach, e.g this thread here where I discuss approaches to 
signalling an AutoScalingGroup to remove a specific group member.  As Angus has 
noted, ResourceGroup already allows this via a different interface.

http://lists.openstack.org/pipermail/openstack-dev/2014-December/053447.html

We have a very concrete use case that requires passing parameters on scale
out.
 
What is the best way to revive this blueprint?

Probably the first thing is to provide a more detailed description of your 
use-case.

I'll try to revive the AutoScalingGroup signal patch mentioned in the thread 
above this week, it's been around for a while and is probably needed for any 
interface where we pass data in to influence AutoScalingGroup adjustment 
behaviour asynchronously (e.g not via the template definition).

https://review.openstack.org/#/c/143496/

Steve



[openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Sean Dague
The last couple of days I was at the Operators Meetup acting as Nova
rep for the meeting. All the sessions were quite nicely recorded to
etherpads here - https://etherpad.openstack.org/p/PHL-ops-meetup

There was both a specific Nova session -
https://etherpad.openstack.org/p/PHL-ops-nova-feedback as well as a
bunch of relevant pieces of information in other sessions.

This is an attempt for some summary here, anyone else that was in
attendance please feel free to correct if I'm interpreting something
incorrectly. There was a lot of content there, so this is in no way
comprehensive list, just the highlights that I think make the most
sense for the Nova team.

=
 Nova Network -> Neutron
=

This remains listed as the #1 issue from the Operator Community on
their burning issues list
(https://etherpad.openstack.org/p/PHL-ops-burning-issues L18). During
the tags conversation we straw polled the audience
(https://etherpad.openstack.org/p/PHL-ops-tags L45) and about 75% of
attendees were over on neutron already. However, those on Nova Network
were disproportionately the largest clusters and longest-standing
OpenStack users.

Of those on nova-network about 1/2 had no interest in being on
Neutron (https://etherpad.openstack.org/p/PHL-ops-nova-feedback
L24). Some of the primary reasons were the following:

- Complexity concerns - neutron has a lot more moving parts
- Performance concerns - nova multihost means there is very little
  between guests and the fabric, which is really important for the HPC
  workload use case for OpenStack.
- Don't want OVS - ovs adds additional complexity, and performance
  concerns. Many large sites are moving off ovs back to linux bridge
  with neutron because they are hitting OVS scaling limits (especially
  if on UDP) - (https://etherpad.openstack.org/p/PHL-ops-OVS L142)

The biggest disconnect in the model seems to be that Neutron assumes
you want self service networking. Most of these deploys don't. Or even
more importantly, they live in an organization where that is never
going to be an option.

Neutron provider networks is close, except it doesn't provide for
floating IP / NAT.

Going forward: I think the gap analysis probably needs to be revisited
with some of the vocal large deployers. I think we assumed the
functional parity gap was closed with DVR, but it's not clear that in its
current form it actually meets the n-net multihost users' needs.

===
 EC2 going forward
===

Having a sustainable EC2 is of high interest to the operator
community. Many large deploys have some users that were using AWS
prior to using OpenStack, or currently are using both. They have
preexisting tooling for that.

There didn't seem to be any objection to the approach of an external
proxy service for this function -
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L111). Mostly
the question is timing, and the fact that no one has validated the
stackforge project. The fact that we landed everything people need to
run this in Kilo is good, as these production deploys will be able to
test it for their users when they upgrade.


 Burning Nova Features/Bugs


Hierarchical Projects Quotas


Hugely desired feature by the operator community
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L116). Missed
Kilo. This made everyone sad.

Action: we should queue this up as an early Liberty priority item.

Out of sync Quotas
--

https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63

The quotas code is quite racy (this is a known issue if you look at
the bug tracker). It was actually marked as a top soft spot during
last fall's bug triage -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

There is an operator proposed spec for an approach here -
https://review.openstack.org/#/c/161782/

Action: we should make a solution here a top priority for enhanced
testing and fixing in Liberty. Addressing this would remove a lot of
pain from ops.

Reporting on Scheduler Fails


Apparently, some time recently, we stopped logging scheduler fails
above DEBUG, and that behavior also snuck back into Juno as well
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This
has made tracking down root cause of failures far more difficult.

Action: this should hopefully be a quick fix we can get in for Kilo
and backport.

=
 Additional Interesting Bits
=

Rabbit
--

There was a whole session on Rabbit -
https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

Rabbit is a top operational concern for most large sites. Almost all
sites have a "restart everything that talks to rabbit" script because
during rabbit HA operations queues tend to blackhole.

All other queue systems OpenStack supports are worse than Rabbit (from

Re: [openstack-dev] [neutron][lbaas] v2 vendor drivers

2015-03-12 Thread Kyle Mestery
On Tue, Mar 10, 2015 at 12:08 PM, Doug Wiegley doug...@parksidesoftware.com
 wrote:

 Hi all,

 LBaaS v2 is going out in Kilo, and we have quite a few vendor drivers
 ready to merge, but most are waiting for the tempest tests/job to be done
 before they can satisfy their third-party CI requirements.  Because that
 job is likely to be done so close to FF for Kilo, I am proposing that we
 begin merging vendor drivers for lbaas v2 with the following guidelines:

 - There must be a CI from that vendor posting in the neutron-lbaas repo,
 running either the v1 tempest suite, or installing devstack/v2 and exiting.
 - The CI *must* start running the lbaas v2 tempest jobs within 1 week of
 the openstack lbaasv2 job merging and voting in the neutron-lbaas gate.

 I expect that vendor CIs can start modeling their jobs after the
 experimental tempest job [1] within the next few days (depends on how long
 the experimental queue takes to run a few times.)

 Ack on this plan, it all sounds good to me Doug!

Kyle


 Thanks,
 doug

 [1] https://review.openstack.org/#/c/161432/


Re: [openstack-dev] Avoiding regression in project governance

2015-03-12 Thread Russell Bryant
On 03/10/2015 12:47 PM, Doug Hellmann wrote:
 
 
 On Tue, Mar 10, 2015, at 12:46 PM, Doug Hellmann wrote:
 I think we made it pretty clear that we would be taking approvals
 slowly, and that we might not approve any new projects before the
 summit, precisely for the reasons you state here. I have found the
 submitted proposals 
 
 Oops
 
 I have found the existing applications useful for thinking about what
 tags we need, and what other criteria we might be missing (Joe's
 proposal to add a team employer diversity requirement is one example).

That's a good example of a requirement we already had but haven't yet
replaced.

I don't think there's consensus about how far off we are from being able
to approve projects.  If everyone thinks "of course, we have a ton of
work to do before we can consider approving the first one," then great,
but I didn't think that was the case.

-- 
Russell Bryant



Re: [openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-12 Thread David Shrewsbury
+1

 On Mar 9, 2015, at 6:03 PM, Devananda van der Veen devananda@gmail.com 
 wrote:
 
 Hi all,
 
 I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.
 
 He's been consistently providing good code reviews, and been in the top five 
 active reviewers for the last 90 days and top 10 for the last 180 days. Two 
 cores have recently approached me to let me know that they, too, find his 
 reviews valuable.
 
 Furthermore, Ramakrishnan has made significant code contributions to Ironic 
 over the last year. While working primarily on the iLO driver, he has also 
 done a lot of refactoring of the core code, touched on several other drivers, 
 and maintains the proliantutils library on stackforge. All in all, I feel 
 this demonstrates a good and growing knowledge of the codebase and 
 architecture of our project, and feel he'd be a valuable member of the core 
 team.
 
 Stats, for those that want them, are below the break.
 
 Best Regards,
 Devananda
 
 
 
 http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87
 
 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
 http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt


Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-12 Thread Pavlo Shchelokovskyy
Hi,

as you have a separate monitoring solution (not Ceilometer), it seems you
can use ResourceGroup instead of AutoScalingGroup and issue heatclient/REST
calls to do a stack-update with the desired size of the group when needed.
The group members will be numbered, and as already said you can also control
the removal. One possible downside is that AFAIR the numbers would not be
reused on subsequent scale-downs/ups.
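
A rough sketch of what such a stack-update call could look like, assuming
python-heatclient, a pre-obtained endpoint and token, and a hypothetical
`group_size` template parameter (none of these names come from the thread):

```
from heatclient.v1.client import Client as HeatClient

# Hypothetical endpoint/token; the external lifecycle manager would make
# this call instead of hitting a pre-signed alarm_url.
heat = HeatClient(endpoint='http://controller:8004/v1/TENANT_ID',
                  token='SCOPED_TOKEN')

# Re-submit the same template with a new group size; the ResourceGroup
# members it creates are numbered, which can stand in for the slot index.
heat.stacks.update(
    'my_app_stack',
    template=open('app_group.yaml').read(),
    parameters={'group_size': 5},
)
```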

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Wed, Mar 11, 2015 at 2:46 PM, ELISHA, Moshe (Moshe) 
moshe.eli...@alcatel-lucent.com wrote:

 I am familiar with the removal policies. Thanks!

 Our use case for parameters on scale out is as follows:

 Every server has a unique index that identifies it.
 The first server has an index of 1, the second has an index of 2, etc.
 The index of each server must exist prior to the configuration phase of
 the server.

 This use case is an outcome of a virtualization process for an NFV
 application.
 In the past this application was scaled manually by adding physical cards
 into slots - the index is the slot number.
 In order to allow a smooth and fast transition of the app into the cloud -
 the requirement is to stay with the same architecture.


 The current suggested solution is as follows:

 The HOT will be created with an AutoScalingGroup and two ScalingPolicies
 for scale out and scale in.
 Like many other NFV applications, this application also has a Life Cycle
 Manager of its own that monitors and decides when to scale.
 When scale is needed, the LCM will invoke the alarm_url exposed by these
 ScalingPolicies while providing the server index for the newly created
 server.

 The index is just one example of a parameter needed at scale out - there
 can be others.
 Much more design is needed when the desired_capacity > 1 or the
 scaling_adjustment > 1 or is a percentage, but let's first agree that the use
 case is OK.


 -Original Message-
 From: Steven Hardy [mailto:sha...@redhat.com]
 Sent: Wednesday, March 11, 2015 12:39 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

 On Wed, Mar 11, 2015 at 09:01:04AM +, ELISHA, Moshe (Moshe) wrote:
 Hey,
 
 
 
 Can someone please share the current status of the Autoscaling
 signals to
 allow parameter passing for UserData blueprint -
  https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.

 This is quite old, and subsequent discussions have happened which indicate
 a slightly different approach, e.g this thread here where I discuss
 approaches to signalling an AutoScalingGroup to remove a specific group
 member.  As Angus has noted, ResourceGroup already allows this via a
 different interface.


 http://lists.openstack.org/pipermail/openstack-dev/2014-December/053447.html

 We have a very concrete use case that requires passing parameters on
 scale
 out.
 
 What is the best way to revive this blueprint?

 Probably the first thing is to provide a more detailed description of your
 use-case.

 I'll try to revive the AutoScalingGroup signal patch mentioned in the
 thread above this week, it's been around for a while and is probably needed
 for any interface where we pass data in to influence AutoScalingGroup
 adjustment behaviour asynchronously (e.g not via the template definition).

 https://review.openstack.org/#/c/143496/

 Steve



Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-12 Thread Carl Baldwin
On Tue, Mar 10, 2015 at 12:06 PM, Ryan Moats rmo...@us.ibm.com wrote:
 While I'd personally like to see this be restricted (Carl's position), I
 know
 of at least one existence proof where management applications are doing
 precisely what Gabriel is suggesting - reusing the same address range to
 minimize the configuration differences.

Okay, I see the problem.  I'm not particularly thrilled about this
situation but I see the problem.  Can we deprecate this mis-feature?
Is there hope that we can stop this madness some day?  :)

Regardless, it sounds like we need to support this for at least some
time in the future.  So, here is what I propose:

When talking with external IPAM to get a subnet, Neutron will pass
both the cidr as the primary identifier and the subnet_id as an
alternate identifier.  External systems that do not allow overlap can
simply ignore the alternate identifier.  Salvatore, I'm mostly
speaking to you when I ask this but would welcome others' opinions
too.
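
Purely for illustration, the shape of such a request might be something like
the dict below; the field names are hypothetical and do not correspond to an
existing Neutron or IPAM driver interface:

```
# Hypothetical allocation request handed to an external IPAM backend.
allocation_request = {
    'cidr': '10.0.0.0/24',       # primary identifier
    'subnet_id': 'a2b4c6d8-...', # alternate identifier, for backends that
                                 # allow the same cidr to appear more than
                                 # once within a tenant
}
```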

Carl



Re: [openstack-dev] [Openstack] [Horizon][Keystone] Failed to set up keystone v3 api for horizon

2015-03-12 Thread Trelohan Christophe
Hello,

Is the user_id in the cloud_admin rule the id of the cloud_admin user?
I don't think you can log in with the cloud_admin user in Horizon; it seems
that without a project created for a user, you can't log in to Horizon.

I'm also interested in this. I also followed the mentioned article, but when I
try to log in with the admin user in the default domain,
I get the same error (not authorized to list_projects), both with Horizon and
the REST API.




From: Lei Zhang [mailto:zhang.lei@gmail.com]
Sent: Thursday, 12 March 2015 03:33
To: openstack; OpenStack Development Mailing List
Subject: [Openstack] [Horizon][Keystone] Failed to set up keystone v3 api for
horizon

Has anyone tried this and succeeded?

On Mon, Mar 9, 2015 at 4:25 PM, Lei Zhang zhang.lei@gmail.com wrote:
Hi guys,

I am setting up the keystone v3 API. I have hit an issue with the `cloud_admin`
policy.

Based on the http://www.florentflament.com/blog/setting-keystone-v3-domains.html
article, I modified the cloud_admin policy to:

```
"cloud_admin": "rule:admin_required and domain_id:ef0d30167f744401a0cbfcc938ea7d63",
```

But cloud_admin doesn't work as expected. I fail to open any of the identity
panels (like http://host/horizon/identity/domains/).
Horizon tells me "Error: Unable to retrieve project list."
And keystone logs a warning:

```
2015-03-09 16:00:06.423 9415 DEBUG keystone.policy.backends.rules [-] enforce 
identity:list_user_projects: {'is_delegated_auth': False, 'access_token_id': 
None, 'user_id': u'6433222efd78459bb70ad9adbcfac418', 'roles': [u'_member_', 
u'admin'], 'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 
'token': KeystoneToken (audit_id=DWsSa6yYSWi0ht9E7q4uhw,
audit_chain_id=w_zLBBeFQ82KevtJrdKIJw) at 0x7f4503fab3c8, 'project_id':
u'4d170baaa89b4e46b239249eb5ec6b00', 'trust_id': None}, enforce 
/usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:100
2015-03-09 16:00:06.061 9410 WARNING keystone.common.wsgi [-] You are not 
authorized to perform the requested action: identity:list_projects (Disable 
debug mode to suppress these details.) 
```

I did some debugging and found that the root cause is that the `context`
variable in keystone has no `domain_id` field (as in the keystone log above), so
the `cloud_admin` rule fails. If I change `cloud_admin` to the following, it
works as expected:

```
"cloud_admin": "rule:admin_required and user_id:6433222efd78459bb70ad9adbcfac418",
```

I found that in the keystone code [0], domain_id only exists when the token is
domain-scoped. But I believe that the horizon login token is a project-scoped
one (I am not very sure about this).

```
    if token.project_scoped:
        auth_context['project_id'] = token.project_id
    elif token.domain_scoped:
        auth_context['domain_id'] = token.domain_id
    else:
        LOG.debug('RBAC: Proceeding without project or domain scope')
```

Is it a bug, or some wrong configuration?


Following is my configuration.


```
# /etc/keystone/keystone.conf
[DEFAULT]
debug=true
verbose=true
log_dir=/var/log/keystone
[assignment]
driver = keystone.assignment.backends.sql.Assignment 
[database]
connection=mysql://:@controller/keystone
[identity]
driver=keystone.identity.backends.sql.Identity
[memcache]
servers=controller1:11211,controller2:11211,controller3:1121
[token]
provider=keystone.token.providers.uuid.Provider
```

```
# /etc/openstack-dashboard/local_settings.py ( partly )
POLICY_FILES_PATH = "/etc/openstack-dashboard/"
POLICY_FILES = {
    'identity': 'keystone_policy.json',
}
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
OPENSTACK_API_VERSIONS = {
    "data_processing": 1.1,
    "identity": 3,
    "volume": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'admin'
``` 

​[0] 
https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py#L58​

-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l




-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l


[openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Duncan Thomas
ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
/usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
DeprecationWarning: The oslo namespace package is deprecated. Please use
oslo_i18n instead.
  from oslo import i18n
/opt/stack/cinder/cinder/openstack/common/policy.py:98: DeprecationWarning:
The oslo namespace package is deprecated. Please use oslo_config instead.
  from oslo.config import cfg
/opt/stack/cinder/cinder/openstack/common/policy.py:99: DeprecationWarning:
The oslo namespace package is deprecated. Please use oslo_serialization
instead.
  from oslo.serialization import jsonutils
/opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
namespace package is deprecated. Please use oslo_messaging instead.
  from oslo import messaging
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22:
DeprecationWarning: The oslo namespace package is deprecated. Please use
oslo_utils instead.
  from oslo.utils import excutils


What are normal, non-developer users supposed to do with such warnings,
other than:
a) panic, or b) assume OpenStack is beta quality and then panic?
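
As a stop-gap for deployments that cannot patch the imports right away, one
blunt option is to filter just these messages at the entry point; this is only
a sketch, and the long-term fix is moving to the new-style imports the warnings
name:

```
import warnings

# Hide only the oslo namespace-deprecation messages; everything else
# still surfaces normally.
warnings.filterwarnings(
    'ignore',
    message='The oslo namespace package is deprecated',
    category=DeprecationWarning,
)

# The warnings themselves point at the eventual fix, e.g.:
#   from oslo.config import cfg   ->   from oslo_config import cfg
#   from oslo import i18n         ->   import oslo_i18n
```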

-- 
Duncan Thomas


Re: [openstack-dev] [Horizon][Keystone] Failed to set up keystone v3 api for horizon

2015-03-12 Thread Lin Hua Cheng
Hi,

The 'cloud_admin' rule in that policy file requires a domain-scoped token to work.

Horizon does not support domain-scoped tokens yet. So yes, it is a
gap in horizon at the moment.
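
For illustration, requesting a domain-scoped token directly from Keystone looks
roughly like the sketch below; the credentials are hypothetical and the domain
id simply reuses the one from the policy rule quoted below:

```
import requests

auth_url = 'http://controller:5000/v3/auth/tokens'
body = {
    'auth': {
        'identity': {
            'methods': ['password'],
            'password': {
                'user': {
                    'name': 'admin',
                    'domain': {'id': 'ef0d30167f744401a0cbfcc938ea7d63'},
                    'password': 'secret',
                },
            },
        },
        # Scoping to the domain (rather than a project) is what makes
        # domain_id show up in the policy enforcement context.
        'scope': {'domain': {'id': 'ef0d30167f744401a0cbfcc938ea7d63'}},
    },
}
resp = requests.post(auth_url, json=body)
print(resp.headers['X-Subject-Token'])
```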

There are on-going patches to address this in horizon:
- https://review.openstack.org/#/c/141153/
- https://review.openstack.org/#/c/148082/

Dan (esp) prepared a nicely written document on how this should eventually work.

-Lin

On Wed, Mar 11, 2015 at 7:33 PM, Lei Zhang zhang.lei@gmail.com wrote:

 Has anyone tried this and succeeded?

 On Mon, Mar 9, 2015 at 4:25 PM, Lei Zhang zhang.lei@gmail.com wrote:

 Hi guys,

 I am setting up the keystone v3 API. I have hit an issue with the
 `cloud_admin` policy.

 Based on the
 http://www.florentflament.com/blog/setting-keystone-v3-domains.html
 article, I modified the cloud_admin policy to:

 ```
 cloud_admin: rule:admin_required and
 domain_id:ef0d30167f744401a0cbfcc938ea7d63,
 ```

 But cloud_admin doesn't work as expected. I fail to open any of the
 identity panels (like http://host/horizon/identity/domains/).
 Horizon tells me "Error: Unable to retrieve project list."
 And keystone logs a warning:

 ```
 2015-03-09 16:00:06.423 9415 DEBUG keystone.policy.backends.rules [-]
 enforce identity:list_user_projects: {'is_delegated_auth': False,
 'access_token_id': None, 'user_id': u'6433222efd78459bb70ad9adbcfac418',
 'roles': [u'_member_', u'admin'], 'trustee_id': None, 'trustor_id': None,
 'consumer_id': None, 'token': KeystoneToken
 (audit_id=DWsSa6yYSWi0ht9E7q4uhw, audit_chain_id=w_zLBBeFQ82KevtJrdKIJw) at
 0x7f4503fab3c8, 'project_id': u'4d170baaa89b4e46b239249eb5ec6b00',
 'trust_id': None}, enforce
 /usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:100
 2015-03-09 16:00:06.061 9410 WARNING keystone.common.wsgi [-] You are not
 authorized to perform the requested action: identity:list_projects (Disable
 debug mode to suppress these details.)
 ```

 I did some debugging and found that the root cause is that the `context`
 variable in keystone has no `domain_id` field (as in the keystone log
 above), so the `cloud_admin` rule fails. If I change `cloud_admin` to the
 following, it works as expected:

 ```
 cloud_admin: rule:admin_required and user_id:
 6433222efd78459bb70ad9adbcfac418,
 ```

 I found that in the keystone code [0], domain_id only exists when the token
 is domain-scoped. But I believe that the horizon login token is a
 project-scoped one (I am not very sure about this).

 ```
 if token.project_scoped:
 auth_context['project_id'] = token.project_id
 elif token.domain_scoped:
 auth_context['domain_id'] = token.domain_id
 else:
 LOG.debug('RBAC: Proceeding without project or domain scope')

 ```

 Is it a bug, or some wrong configuration?


 Following is my configuration.


 ```
 # /etc/keystone/keystone.conf
 [DEFAULT]
 debug=true
 verbose=true
 log_dir=/var/log/keystone
 [assignment]
 driver = keystone.assignment.backends.sql.Assignment
 [database]
 connection=mysql://:@controller/keystone
 [identity]
 driver=keystone.identity.backends.sql.Identity
 [memcache]
 servers=controller1:11211,controller2:11211,controller3:1121
 [token]
 provider=keystone.token.providers.uuid.Provider
 ```

 ```
 # /etc/openstack-dashboard/local_settings.py ( partly )
 POLICY_FILES_PATH = /etc/openstack-dashboard/
 POLICY_FILES = {
 'identity': 'keystone_policy.json',
 }
 OPENSTACK_HOST = 127.0.0.1
 OPENSTACK_KEYSTONE_URL = http://%s:5000/v3; % OPENSTACK_HOST
 OPENSTACK_KEYSTONE_DEFAULT_ROLE = _member_
 OPENSTACK_API_VERSIONS = {
  data_processing: 1.1,
  identity: 3,
  volume: 2
 }
 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'admin'
 ```

 ​[0]
 https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py#L58
 ​

 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l




 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l



Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-03-12 Thread John Davidge (jodavidg)
I am pleased to say that we are now in a good position with this patch.
The necessary DHCPv6 client changes have been made available in the latest
release of Dibbler (1.0.1) and we’re getting some much appreciated
assistance from Ihar and Thomas in having this new version packaged for
wide availability - thanks guys.

We’ve updated the patch to include tests and it's been brought up-to-date
with the latest L3 agent refactoring changes. It is no longer WIP and we
would very much appreciate your reviews and feedback.

https://review.openstack.org/#/c/158697/


Cheers,

John




On 24/02/2015 17:40, John Davidge (jodavidg) jodav...@cisco.com wrote:

Hello all,

We now have a work-in-progress patch up for review:

https://review.openstack.org/#/c/158697/


Feedback on our approach is much appreciated.

Many thanks,

John Davidge
OpenStack@Cisco




On 20/02/2015 18:28, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Those are good news!

I commented on the pull request. Briefly, we can fetch from git, but
would prefer an official release.

Thanks,
/Ihar

On 02/19/2015 11:26 PM, Robert Li (baoli) wrote:
 Hi Kyle, Ihar,
 
 It looks promising to have our patch upstreamed. Please take a look
 at this pull request
 
https://github.com/tomaszmrugalski/dibbler/pull/26#issuecomment-75144912
.
 Most importantly, Tomek asked if it’s sufficient to have the code
 up in his master branch. I guess you guys may be able to help
 answer that question since I’m not familiar with openstack release
 process.
 
 Cheers, Robert
 
 On 2/13/15, 12:16 PM, Kyle Mestery mest...@mestery.com wrote:
 
 On Fri, Feb 13, 2015 at 10:57 AM, John Davidge (jodavidg)
 jodav...@cisco.com wrote:
 
 Hi Ihar,
 
 To answer your questions in order:
 
 1. Yes, you are understanding the intention correctly. Dibbler
 doesn't currently support client restart, as doing so causes all
 existing delegated prefixes to be released back to the PD server.
 All subnets belonging to the router would potentially receive a new
 cidr every time a subnet is added/removed.
 
 2. Option 2 cannot be implemented using the current version of
 dibbler, but it can be done using the version we have modified.
 Option 3 could possibly be done with the current version of
 dibbler, but with some major limitations - only one single router
 namespace would be supported.
 
 Once the dibbler changes linked below are reviewed and finalised we
 will only need to merge a single patch into the upstream dibbler
 repo. No further patches are anticipated.
 
 Yes, you are correct that dibbler is not needed unless prefix
 delegation is enabled by the deployer. It is intended as an
 optional feature that can be easily disabled (and probably will be
 by default). A test to check for the correct dibbler version would
 certainly be necessary.
 
 Testing in the gate will be an issue until the new version of
 dibbler is merged and packaged in the various distros. I'm not sure
 if there is a way to avoid this problem, unless we have devstack
 install from our updated repo while we wait.
 
 To me, this seems like a pretty huge problem. We can't expect
 distributions to package side-changes to upstream projects. The
 correct way to solve this problem is to work to get the changes
 required in the dependent packages upstream into those projects
 first (dibbler, in this case), and then propose the changes into
 Neutron to make use of those changes. I don't see how we can
 proceed with this work until the issues around dibbler has been
 resolved.
 
 
 John Davidge OpenStack@Cisco
 
 
 
 
 On 13/02/2015 16:01, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 Thanks for the write-up! See inline.
 
 On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,
 
 while trying to integrate the dibbler client with neutron to support
 PD, we encountered a few issues with the dibbler client (and
 server). With a neutron router, we have the qg-xxx interface that
 is connected to the public network, on which a dhcp server is
 running on the delegating router. For each subnet with PD
 enabled, a router port will be created in the neutron router. As
 a result, a new PD request will be sent that asks for a prefix
 from the delegating router. Keep in mind that the subnet is added
 into the router dynamically.
 
 We thought about the following options:
 
 1. use a single dibbler client to support the above requirement.
 This means, the client should be able to accept new requests on
 the fly either through configuration reload or other interfaces.
 Unfortunately, dibbler client doesn¹t support it.
 
 Sorry for my ignorance on PD implementation (I will definitely look
 at it the next week), but what does this entry above mean? Do you
 want a single dibbler instance running per router serving all
 subnets plugged into it? And you want to get configuration updates
 when a new subnet is plugged in, or removed from the 

Re: [openstack-dev] Avoiding regression in project governance

2015-03-12 Thread Georgy Okrokvertskhov
Some clarification about Murano:

3. *Maybe*. Not sure about the scope, it is fairly broad and there may be
some open ended corners, such as some references to billing. On the other
hand an application catalog sounds really useful and like a measured
progression for OpenStack as a whole. Murano may overlap with glance's
stated mission of "To provide a service where users can upload and
discover data assets that are meant to be used with other services, like
images for Nova and templates for Heat."

The Glance mission was changed with active help from the Murano team, as Murano
needs storage for application definitions. That is why one of
the Murano engineers is currently working on landing the Artifact repository,
which was initially drafted by the Murano team and then re-architected to be
useful for other projects like Heat and Nova. Once artifacts land in the
Kilo release [all patches are on review], Murano will use Glance to store all
packages and related objects, so there will be no overlap with Glance.

Murano also relies heavily on the Mistral service which is still in
stackforge itself.
This is a wrong perception. Murano currently does not use Mistral at all.
It will use it once the cross-project Congress-Murano-Mistral initiative
is implemented. Right now Murano works without Mistral installed. Murano
will use Congress and Mistral to offload some of the logic outside of
Murano, to have very simple life-cycle workflows controlled by policies and
Mistral flows.

Thanks
Gosha

On Wed, Mar 11, 2015 at 7:28 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-03-10 23:00:16 + (+), Devananda van der Veen wrote:
  Many of those requirements were subjective (well tested, well
  documented, etc) and had to be evaluated by the TC. Are these the
  sort of tags you're referring to? If so, and if the TC delegated
  responsibility to manage the application of those tags (say, QA
  team manages the 'well-tested' tag), would that be sufficient?
 [...]

 Yep, that's exactly what I'm hoping for. But without those in place
 yet I worry that we'll end up turning away lots of new requests for
 help from projects coming forward thinking they're suddenly entitled
 by virtue of being OpenStack and not really have any common
 criteria to point them at so that they can work toward getting more
 priority.
 --
 Jeremy Stanley





-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


[openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-12 Thread Mike Bayer
Hello Cinder -

I’d like to note that for issue
https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that actually
solves the problem for Cinder is scheduled to be committed anywhere. The
patch I proposed for oslo.db is on hold, and the patch proposed for
oslo.incubator in the service code will not fix this issue for Cinder, it
will only make it fail harder and faster.

I’ve taken myself off as the assignee on this issue, as someone on the
Cinder team should really propose the best fix of all which is to call
engine.dispose() when first entering a new child fork. Related issues are
already being reported, such as
https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
unreliable on startup and this should be considered a critical issue.
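
For illustration only (not the actual Cinder or oslo.db code paths), a minimal
sketch of the engine.dispose()-on-fork approach, assuming a module-level
SQLAlchemy engine and a plain os.fork():

```python
import os
from sqlalchemy import create_engine

# Illustrative engine; in practice this would come from oslo.db / Cinder config.
_ENGINE = create_engine("mysql://user:password@localhost/cinder")

def child_main():
    # First thing on entering the new child process: throw away any pooled
    # connections inherited from the parent, so the child does not share
    # socket file descriptors with it and builds a fresh pool on demand.
    _ENGINE.dispose()
    # ... child service work continues here ...

pid = os.fork()
if pid == 0:
    child_main()
```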


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Boris Bobrov
On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
 ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
 /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
 DeprecationWarning: The oslo namespace package is deprecated. Please use
 oslo_i18n instead.
   from oslo import i18n
 /opt/stack/cinder/cinder/openstack/common/policy.py:98: DeprecationWarning:
 The oslo namespace package is deprecated. Please use oslo_config instead.
   from oslo.config import cfg
 /opt/stack/cinder/cinder/openstack/common/policy.py:99: DeprecationWarning:
 The oslo namespace package is deprecated. Please use oslo_serialization
 instead.
   from oslo.serialization import jsonutils
 /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
 namespace package is deprecated. Please use oslo_messaging instead.
   from oslo import messaging
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fi
 leutils.py:22: DeprecationWarning: The oslo namespace package is
 deprecated. Please use oslo_utils instead.
   from oslo.utils import excutils
 
 
 What are normal, non-developer users supposed to do with such warnings,
 other than:
 a) panic or b) Assume openstack is beta quality and then panic

Non-developer users are supposed to file a bug, leave installation and usage 
to professional devops who know how to handle logs, and/or stop using non-stable 
branches.

-- 
С наилучшими пожеланиями,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-12 Thread Mike Bayer


Mike Perez thin...@gmail.com wrote:

 On 11:49 Wed 11 Mar , Walter A. Boring IV wrote:
 We have this patch in review currently.   I think this one should
 'fix' it no?
 
 Please review.
 
 https://review.openstack.org/#/c/163551/
 
 Looks like it to me. Would appreciate a +1 from Mike Bayer before we push this
 through. Thanks for all your time on this Mike.

I have a question there, since I don’t know the scope of “Base”: is this
“Base” constructor generally called once per Python process? It’s OK if it’s
called a little more than that, but if it’s called on, like, every service
request or something, then those engine.dispose() calls are not the right
approach, you’d instead just turn off pooling altogether, because otherwise
you’re spending tons of time creating and destroying connection pools that
aren’t even used as pools.   you want the “engine” to be re-used across
requests and everything else as much as possible, *except* across process
boundaries.
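
To illustrate the distinction (again illustrative only, not the actual Cinder
code), the two situations look roughly like this:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Preferred: one engine per process, reused across all requests, and
# dispose()d only when a child process is forked (as described above).
shared_engine = create_engine("mysql://user:password@localhost/cinder")

# If an engine really were created per request, pooling would buy nothing,
# so it would be better to turn it off than to build and tear down a
# connection pool on every request.
per_request_engine = create_engine("mysql://user:password@localhost/cinder",
                                   poolclass=NullPool)
```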


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kolla] PTL Candidacy

2015-03-12 Thread Steven Dake (stdake)
I am running for PTL for the Kolla project.  I have been executing in an 
unofficial PTL capacity for the project for the Kilo cycle, but I feel it is 
important for our community to have an elected PTL and have asked Angus 
Salkeld, who has no stake in the outcome of the election, to officiate the election [1].

For the Kilo cycle our community went from zero LOC to a fully working 
implementation of most of the services based upon Kubernetes as the backend.  
Recently I led the effort to remove Kubernetes as a backend and provide 
container contents, building, and management on bare metal using docker-compose 
which is nearly finished.  At the conclusion of Kilo, it should be possible 
from one shell script to start an AIO full deployment of all of the current 
OpenStack git-namespaced services using containers built from RPM packaging.

For Liberty, I’d like to take our community and code to the next level.  Since 
our containers are fairly solid, I’d like to integrate with existing projects 
such as TripleO, os-ansible-deployment, or Fuel.  Alternatively the community 
has shown some interest in creating a multi-node HA-ified installation 
toolchain.

I am deeply committed to leading the community where the core developers want 
the project to go, wherever that may be.

I am strongly in favor of adding HA features to our container architecture.

I would like to add .deb package support and from-source support to our docker 
container build system.

I would like to implement a reference architecture where our containers can be 
used as a building block for deploying a reference platform of 3 controller 
nodes, ~100 compute nodes, and ~10 storage nodes.

I am open to expanding our scope to address full deployment, but would prefer 
to merge our work with one or more existing upstreams such as TripleO, 
os-ansible-deployment, and Fuel.

Finally I want to finish the job on functional testing, so all of our 
containers are functionally checked and gated per commit on Fedora, CentOS, and 
Ubuntu.

I am experienced as a PTL, leading the Heat Orchestration program from zero LOC 
through OpenStack integration for 3 development cycles.  I write code as a PTL 
and was instrumental in getting the Magnum Container Service code-base kicked 
off from zero LOC where Adrian Otto serves as PTL.  My past experiences include 
leading Corosync from zero LOC to a stable building block of High Availability 
in Linux.  Prior to that I was part of a team that implemented Carrier Grade 
Linux.  I have a deep and broad understanding of open source, software 
development, high performance team leadership, and distributed computing.

I would be pleased to serve as PTL for Kolla for the Liberty cycle and welcome 
your vote.

Regards
-steve

[1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

2015-03-12 Thread Donald Stufft
Is it using an old version of setuptools? Like 0.6.28.

 On Mar 11, 2015, at 11:28 AM, Timothy Swanson (tiswanso) tiswa...@cisco.com 
 wrote:
 
 I don’t have any solution just chiming in that I see the same error with 
 devstack pulled from master on a new ubuntu trusty VM created last night.
 
 'pip install --upgrade wheel' indicates:
 Requirement already up-to-date: wheel in 
 /usr/local/lib/python2.7/dist-packages
 
 Haven’t gotten it cleared up.
 
 Thanks,
 
 Tim
 
 On Mar 2, 2015, at 2:11 AM, Smigiel, Dariusz dariusz.smig...@intel.com 
 mailto:dariusz.smig...@intel.com wrote:
 
 
   
 From: yuntong [mailto:yuntong...@gmail.com mailto:yuntong...@gmail.com]
 Sent: Monday, March 2, 2015 7:35 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package
 
 Hello,
 I got an error when try to ./stack.sh as:
 2015-03-02 05:58:20.692 | net.ipv4.ip_local_reserved_ports = 35357,35358
 2015-03-02 05:58:20.959 | New python executable in tmp-venv-NoMO/bin/python
 2015-03-02 05:58:22.056 | Installing setuptools, pip...done.
 2015-03-02 05:58:22.581 | ERROR: 'pip wheel' requires the 'wheel' package. 
 To fix this, run: pip install wheel
 
 After pip install wheel, got same error.
 In [2]: wheel.__path__
 Out[2]: ['/usr/local/lib/python2.7/dist-packages/wheel']
 In [5]: pip.__path__
 Out[5]: ['/usr/local/lib/python2.7/dist-packages/pip']
 
 $ which python
 /usr/bin/python
 
 As above, the wheel can be imported successfully,
 any hints ?
 
 Thanks.
 
 
 Did you try pip install --upgrade wheel ?
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org 
 mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Enabling VM post-copy live migration

2015-03-12 Thread Luis Tomas

Hi,

As part of an European (FP7) project, named ORBIT 
(http://www.orbitproject.eu/), I'm working on including the possibility 
of live-migrating VMs in OpenStack in a post-copy mode.
This way of live-migrating VMs basically moves the computation right 
away to the destination and then the VM starts working from there, while 
still copying the memory from the source to the new location of the VM. 
That way the memory pages are only copied as if the VM modifies them, 
they are already in the destination host. This basically ensures that 
migrations finish regardless of what the VM is doing, i.e., even 
extremely memory intensive VMs. Therefore removing the problem of having 
VMs hanging on in migrating state forever (as discussed in previous 
mails, e.g., 
http://lists.openstack.org/pipermail/openstack-dev/2015-February/055725.html).


So far, I have included and tested this new functionality at the JUNO 
version, and the code modifications can be found in the github 
repository of the project (branch named post-copy):
- https://github.com/orbitfp7/nova/tree/post-copy -- mainly 
enabling the possibility of using the libvirt post-copy flag (libvirt 
driver.py). Note that post-copy migration does not use tunneling, as the LibVirt 
patch for that is not yet ready.
- https://github.com/orbitfp7/python-novaclient/tree/post-copy -- 
adding the possibility of using the post-copy mode when triggering the 
migration: nova live-migration [--block-migrate] [--post-copy] VM_ID
- https://github.com/orbitfp7/horizon/tree/post-copy -- include a 
checkbox in the live-migration panel to perform the migration in 
post-copy mode. (like the one for enabling block-migration)


To be able to live-migrate VMs in a post-copy way, I'm relying on some 
kernel+qemu+libvirt modifications, not yet merged upstream (but in their 
way to it), also available at the project github:

- Kernel: https://lkml.org/lkml/2015/3/5/576
- Qemu: https://github.com/orbitfp7/qemu/tree/wp3-postcopy
- LibVirt: https://github.com/orbitfp7/libvirt/tree/wp3-postcopy
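
In terms of the Nova libvirt driver change, the core of it is roughly just
OR-ing one extra migration flag into the existing set. A hedged sketch (the
VIR_MIGRATE_POSTCOPY constant is assumed to come from the patched libvirt
builds above; it is not in stock libvirt-python, hence the getattr fallback):

```python
import libvirt

def live_migrate(dom, dest_uri, bandwidth, post_copy=False):
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
    if post_copy:
        # Only present in the patched libvirt referenced above; 0 keeps the
        # call valid (plain pre-copy) when the constant is missing.
        flags |= getattr(libvirt, 'VIR_MIGRATE_POSTCOPY', 0)
    dom.migrateToURI2(dest_uri, None, None, flags, None, bandwidth)
```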

If this is a nice feature to have in future versions of OpenStack, I'm 
happy to adapt the code for the next release (the one after KILO). Any 
comments are really welcome.


Best regards,
Luis

--
---
Dr. Luis Tomás
Postdoctoral Researcher
Department of Computing Science
Umeå University
l...@cs.umu.se
www.cloudresearch.se
www8.cs.umu.se/~luis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Sean Dague
On 03/11/2015 02:48 PM, Joe Gordon wrote:
 Out of sync Quotas
 --
 
 https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
 
 The quotas code is quite racey (this is kind of a known if you look at
 the bug tracker). It was actually marked as a top soft spot during
 last fall's bug triage -
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html
 
 There is an operator proposed spec for an approach here -
 https://review.openstack.org/#/c/161782/
 
 Action: we should make a solution here a top priority for enhanced
 testing and fixing in Liberty. Addressing this would remove a lot of
 pain from ops.
 
 
 To help us better track quota bugs I created a quotas tag:
 
 https://bugs.launchpad.net/nova/+bugs?field.tag=quotas
 
 Next step is re-triage those bugs: mark fixed bugs as fixed, deduplicate
 bugs etc.

Thanks Joe! Any other help from folks in consolidating those bugs would
be highly appreciated.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread John Griffith
On Thu, Mar 12, 2015 at 3:41 AM, Boris Bobrov bbob...@mirantis.com wrote:

 On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
  ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
  /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
  DeprecationWarning: The oslo namespace package is deprecated. Please use
  oslo_i18n instead.
from oslo import i18n
  /opt/stack/cinder/cinder/openstack/common/policy.py:98:
 DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_config instead.
from oslo.config import cfg
  /opt/stack/cinder/cinder/openstack/common/policy.py:99:
 DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_serialization
  instead.
from oslo.serialization import jsonutils
  /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
  namespace package is deprecated. Please use oslo_messaging instead.
from oslo import messaging
 
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fi
  leutils.py:22: DeprecationWarning: The oslo namespace package is
  deprecated. Please use oslo_utils instead.
from oslo.utils import excutils
 
 
  What are normal, none developer users supposed to do with such warnings,
  other than:
  a) panic or b) Assume openstack is beta quality and then panic

 Non developer users are supposed to file a bug, leave installation and
 usage
 to professional devops who know how to handle logs or and stop using non-
 stable branch.

 ​I think the problem is that in some cases (particularly those being
emitted from the oslo libs) this doesn't really make sense for anybody outside
of the project dev team.  In other words, the code is packaged and released but
the message is still there; there's nothing the Operator or anybody else at that
point is going to do about it.

This may or may not have anything to do with stable branch.  I think it's a
valid point that some messages like the one pointed out by Duncan are
perhaps not really great to have in the released code.  Could/should it
be tied to something like the existing debug or verbose flags
in the conf files, or even a new item like dev-level logging?

Maybe not a terribly big deal, but I can see the point being made with
respect to the impression it gives etc.
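
For what it's worth, one way a deployment could quiet these for operators today
(using the generic Python warnings machinery, not anything Cinder- or
oslo-specific) would be a filter along these lines, installed early in service
startup or wired to the debug/verbose setting:

```python
import warnings

# Hide the oslo namespace-package deprecation noise from operators while the
# migration to the oslo_* packages completes; developers can skip installing
# this filter (or run python -W default) to see the warnings again.
warnings.filterwarnings(
    "ignore",
    category=DeprecationWarning,
    message=r"The oslo namespace package is deprecated.*",
)
```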
​


 --
 С наилучшими пожеланиями,
 Boris

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Duncan Thomas
So, assuming that all of the oslo deprecations aren't going to be fixed
before release, we want every user out there to file a bug, for something
we know about at release time? This seems to be a very broken model...

On 12 March 2015 at 11:41, Boris Bobrov bbob...@mirantis.com wrote:

 On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
  ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
  /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
  DeprecationWarning: The oslo namespace package is deprecated. Please use
  oslo_i18n instead.
from oslo import i18n
  /opt/stack/cinder/cinder/openstack/common/policy.py:98:
 DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_config instead.
from oslo.config import cfg
  /opt/stack/cinder/cinder/openstack/common/policy.py:99:
 DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_serialization
  instead.
from oslo.serialization import jsonutils
  /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
  namespace package is deprecated. Please use oslo_messaging instead.
from oslo import messaging
 
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fi
  leutils.py:22: DeprecationWarning: The oslo namespace package is
  deprecated. Please use oslo_utils instead.
from oslo.utils import excutils
 
 
  What are normal, none developer users supposed to do with such warnings,
  other than:
  a) panic or b) Assume openstack is beta quality and then panic

 Non developer users are supposed to file a bug, leave installation and
 usage
 to professional devops who know how to handle logs or and stop using non-
 stable branch.

 --
 С наилучшими пожеланиями,
 Boris

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Boris Bobrov
On Thursday 12 March 2015 12:59:10 Duncan Thomas wrote:
 So, assuming that all of the oslo depreciations aren't going to be fixed
 before release

What makes you think that?

In my opinion it's just one component's problem. These particular deprecation 
warnings are a result of the still on-going migration from the oslo.* namespace to 
the oslo_* packages. Ironically, all components except oslo itself have already moved to 
the new naming scheme.

I think that these warnings are an isolated problem, not a systemic one.

 for something we know about at release time?

For bugs we know about at release time we have bugreports. Filing a bug is 
pretty easy ;) https://bugs.launchpad.net/oslo.db/+bug/1431268
 
 On 12 March 2015 at 11:41, Boris Bobrov bbob...@mirantis.com wrote:
  On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
   ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
   /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
   DeprecationWarning: The oslo namespace package is deprecated. Please
   use oslo_i18n instead.
   
 from oslo import i18n
   
   /opt/stack/cinder/cinder/openstack/common/policy.py:98:
  DeprecationWarning:
   The oslo namespace package is deprecated. Please use oslo_config
   instead.
   
 from oslo.config import cfg
   
   /opt/stack/cinder/cinder/openstack/common/policy.py:99:
  DeprecationWarning:
   The oslo namespace package is deprecated. Please use oslo_serialization
   instead.
   
 from oslo.serialization import jsonutils
   
   /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The
   oslo namespace package is deprecated. Please use oslo_messaging
   instead.
   
 from oslo import messaging
  
  /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/
  fi
  
   leutils.py:22: DeprecationWarning: The oslo namespace package is
   deprecated. Please use oslo_utils instead.
   
 from oslo.utils import excutils
   
   What are normal, none developer users supposed to do with such
   warnings, other than:
   a) panic or b) Assume openstack is beta quality and then panic
  
  Non developer users are supposed to file a bug, leave installation and
  usage
  to professional devops who know how to handle logs or and stop using non-
  stable branch.
  
  --
  С наилучшими пожеланиями,
  Boris
  
  _
  _ OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
С наилучшими пожеланиями,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-12 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/11/2015 12:21 PM, Alan Pevec wrote:
 Hi,
 
 next Icehouse stable point release 2014.1.4 has been slipping last
 few weeks due to various gate issues, see Recently closed section
 in https://etherpad.openstack.org/p/stable-tracker for details. 
 Branch looks good enough now to push the release tomorrow
 (Thursdays are traditional release days) and I've put freeze -2s on
 the open reviews. I'm sorry about the short freeze period but
 branch was effectively frozen last two weeks due to gate issues and
 further delay doesn't make sense. Attached is the output from the
 stable_freeze script for thawing after tags are pushed.
 
 At the same time I'd like to propose following freeze exceptions
 for the review by stable-maint-core:
 
 * https://review.openstack.org/144714 - Eventlet green threads not 
 released back to pool
 Justification: while not an OSSA fix, it does have the SecurityImpact tag
 
 * https://review.openstack.org/163035 - [OSSA 2015-005] Websocket 
 Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259) 
 Justification: pending merge on master and juno
 

For the OSSA patch, there seem to be some concerns and issues with the
patch that was developed under embargo. It seems it will take more
time than expected to merge it into master, which may mean we will actually
miss the backport for 2014.1.4.

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJVAXlsAAoJEC5aWaUY1u57Fa8IAOYzNnim6mivd+Ch8WjCWtZv
dsqjyYfD0RpNlYNG8yEn+ppKTEAOrZ7AmvqqFK8Rkpkq7vOodvNn28FCkC7/QAk6
TLXshUy+Ugnm2lNu7VqbY9BuurJIVHwXMeYWJ5/aUKrIwnaMeZnYZ6GG1kH325+k
UtTh/9Tg1adVcDAb5crA6nfOGWQUVSC+7E9sxTR1vjuFiHK9u9hnzbOpBcygg3t0
Ukfa4FZCcpf5pNqMEAT9Ue9iuvmLnPi2puts3gTDdM/KfMX1DQj9KBq8b8Klmi9z
euLM44vdjVdcyKrytRCuSOv/NmJ+WKuO69TrDdW1mA/So0xYSaCRanqE+n7IRUc=
=yBm2
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Enabling VM post-copy live migration

2015-03-12 Thread John Garbutt
On 12 March 2015 at 08:41, Luis Tomas l...@cs.umu.se wrote:
 Hi,

 As part of an European (FP7) project, named ORBIT
 (http://www.orbitproject.eu/), I'm working on including the possibility of
 live-migrating VMs in OpenStack in a post-copy mode.
 This way of live-migrating VMs basically moves the computation right away to
 the destination and then the VM starts working from there, while still
 copying the memory from the source to the new location of the VM. That way
 the memory pages are only copied as if the VM modifies them, they are
 already in the destination host. This basically ensures that migrations
 finish regardless of what the VM is doing, i.e., even extremely memory
 intensive VMs. Therefore removing the problem of having VMs hanging on in
 migrating state forever (as discussed in previous mails, e.g.,
 http://lists.openstack.org/pipermail/openstack-dev/2015-February/055725.html).

 So far, I have included and tested this new functionality at the JUNO
 version, and the code modifications can be found in the github repository of
 the project (branch named post-copy):
 - https://github.com/orbitfp7/nova/tree/post-copy -- mainly enabling
 the possibility of using the libvirt post-copy flag (libvirt driver.py).
 Note post-copy migration is not using tunneling as LibVirt patch for that
 is not yet ready.
 - https://github.com/orbitfp7/python-novaclient/tree/post-copy --
 adding the possibility of using the post-copy mode when triggering the
 migration: nova live-migration [--block-migrate] [--post-copy] VM_ID
 - https://github.com/orbitfp7/horizon/tree/post-copy -- include a
 checkbox in the live-migration panel to perform the migration in post-copy
 mode. (like the one for enabling block-migration)

 To be able to live-migrate VMs in a post-copy way, I'm relying on some
 kernel+qemu+libvirt modifications, not yet merged upstream (but in their way
 to it), also available at the project github:
 - Kernel: https://lkml.org/lkml/2015/3/5/576
 - Qemu: https://github.com/orbitfp7/qemu/tree/wp3-postcopy
 - LibVirt: https://github.com/orbitfp7/libvirt/tree/wp3-postcopy

Before merging the code in Nova, we usually like the dependent
features to be released by the respective projects.

Ideally we would like it to be easy to run that on some distro so
people could test/use the feature fairly easily.

 If this is a nice feature to have in future versions of OpenStack, I'm happy
 to adapt the code for the next release (the one after KILO). Any comments
 are really welcome.

It sounds like something that doesn't need an API call, as its a
deployer choice if they have support for this new live-migrate mode.
Is that true?

Although maybe it has a substantial runtime penalty as a page read
miss causes a fetch across the network, making it a user choice? Or do
you only start the fetch mode at the point you detect a failure to
converge using the regular live-migrate mode?

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-12 Thread Chen CH Ji
FYI :)
you may take a look at doc/source/devref/api_plugins.rst which was merged
recently
you can take a look at
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058493.html
and its follow up discussion


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Chris Friesen chris.frie...@windriver.com
To: openstack-dev@lists.openstack.org
Date:   03/11/2015 11:44 PM
Subject:Re: [openstack-dev] [nova] need input on possible API change
for bug #1420848




I can see how to do a v2 extension following the example given for
extended_services.py and extended_services_delete.py.  That seems to be
working now.

I'm not at all clear on how to go about doing the equivalent for v2.1.
Does
that use the api/openstack/compute/plugins/v3/ subdirectory?   Is it
possible to
do the equivalent to the v2 extended_services.py (where the code in
api/openstack/compute/plugins/v3/services.py checks to see if the other
extension is enabled) or do I have to write a whole new extension that
builds on
the output of api/openstack/compute/plugins/v3/services.py?

Thanks,
Chris


On 03/11/2015 09:51 AM, Chen CH Ji wrote:
 I would think a v2 extension is needed;
 for v2.1, a microversion is a way to do it, but I'm not very sure it's needed.

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 From: Chris Friesen chris.frie...@windriver.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/11/2015 04:35 PM
 Subject: [openstack-dev] [nova] need input on possible API change for bug
#1420848








 Hi,

 I'm working on bug #1420848 which addresses the issue that doing a
 service-disable followed by a service-enable against a down compute
node
 will result in the compute node going up for a while, possibly causing
delays
 to operations that try to schedule on that compute node.

 The proposed solution is to add a new reported_at field in the DB
schema to
 track when the service calls _report_state().

 The backend is straightforward, but I'm trying to figure out the best way
to
 represent this via the REST API response.

 Currently the response includes an updated_at property, which maps
directly to
 the auto-updated updated_at field in the database.

 Would it be acceptable to just put the reported_at value (if it exists)
in the
 updated_at property?  Logically the reported_at value is just a
 determination of when the service updated its own state, so an argument
could be
 made that this shouldn't break anything.

 Otherwise, by my reading of
 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
 it
 seems like if I wanted to add a new reported_at property I would need
to do it
 via an API extension.

 Anyone have opinions?

 Chris


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo FeatureFreeze is March 19th, FeatureProposalFreeze has happened

2015-03-12 Thread John Garbutt
On 11 March 2015 at 12:51, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Not 100% sure that I understand. This is for the BP's and specs that were
 approved for Kilo.

Basically, yes. No change from previous releases here AFAIK.

It shouldn't affect bug fixes, unless they violate one of the freezes
(like string freeze).

The definitions of each freeze are linked from the release status page:
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

 Bug fixes if I understand are going to be reviewed
 until the release of Kilo.

Basically, yes.

 Is launchpad not a sufficient source for highlighting bugs?

Not sure I understand the question.

The idea of the launchpad bug tag kilo-rc-potential is to track
things that would stop us releasing kilo. Things targeted for kilo-3 are
basically saying don't ship kilo-3 without fixing this. Everything is
just like previous releases.

We have the trivial patch list, that lives here, but thats purely an
addition to our primary tracking:
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

Thanks,
John


 On 3/11/15, 2:13 PM, John Garbutt j...@johngarbutt.com wrote:

Hi,

Just a quick update on where we are at with the release:
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

Please help review all the code we want to merge before FeatureFreeze:
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
https://launchpad.net/nova/+milestone/kilo-3

Please note, 19th March is:
kilo-3 release, General FeatureFreeze, StringFreeze, DepFreeze
(that includes all high priority items)

Generally patches that don't look likely to merge by 19th March are
likely to get deferred around 17th March, to make sure we get kilo-3
out the door.

As ever, there may be exceptions, if we really need them, but they
will have to be reviewed by ttx for their impact on the overall
integrated release of kilo. More details will be shared nearer the
time.

A big ask at this time is to highlight any release-critical bugs that
we need to fix before we can release kilo (and that involves testing
things). We are likely to use the kilo-rc-potential tag to track those
bugs, more details on that soon.

Any questions, please do ask (here or on IRC).

Thanks,
johnthetubaguy

PS
Just a reminder you are now free to submit specs for liberty. Specs
where there is no clear agreement will be the best candidates for a
discussion slot at the summit. Again more on that later.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Neutron] IDSaaS: Request for features

2015-03-12 Thread Mario Tejedor González
I am copying this email from the openstack general list as I have not gotten
any responses there yet. If any of you has a user profile besides
being a developer, I would be very interested in your opinion.

Hello, I am a MSc student. I am developing an IDSaaS openstack-plugin for
my project and I would appreciate your opinions on the following:



1.  Do you agree that an IDSaaS plugin would be useful and, if so, what do
you see as the primary benefits to Cloud tenants? I will focus on 1, maybe
2, separate IDS software packages during my project, but the plugin should be as
software-agnostic as possible so drivers for other software can be developed afterwards.



2.  What IDS software would you recommend for this purpose? Specifically I
need to ensure that the plugin accommodates important core features so if
there are vital features please let me know.



3.  What features would you consider to be an important priority when
developing an IDSaaS plugin on openstack?



If you could spare 5 minutes (just five I promise), I have a few more very
short questions in the following survey link:


https://www.surveymonkey.com/s/89BDG72


These survey questions allow you to suggest features you might like to have
in the plugin (Think of it like a request list!). The benefit of this is
that I will prioritize these features for development.

Thank you very much for your time and interest.

--
Mario
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-12 Thread Fredy Neeser


On 11.03.2015 19:31, Ian Wells wrote:
On 11 March 2015 at 04:27, Fredy Neeser fredy.nee...@solnet.ch wrote:


7: br-ex.1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
noqueue state UNKNOWN group default
link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd
192.168.1.255 scope global br-ex.1
   valid_lft forever preferred_lft forever

8: br-ex.12: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1554 qdisc
noqueue state UNKNOWN group default
link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd
192.168.1.255 scope global br-ex.12
   valid_lft forever preferred_lft forever


I find it hard to believe that you want the same address configured on 
*both* of these interfaces - which one do you think will be sending 
packets?


Ian, thanks for your feedback!

I did choose the same address for the two interfaces, for three reasons:

1.  Within my home single-LAN (underlay) environment, traffic is 
switched, and VXLAN traffic is confined to VLAN 12, so there is never a 
conflict between IP 192.168.1.14 on VLAN 1 and the same IP on VLAN 12.
OTOH, for a more scalable VXLAN setup (with multiple underlays and L3 
routing in between), I would like to use different IPs for br-ex.1 and 
br-ex.12 -- for example by using separate subnets

  192.168.1.0/26  for VLAN 1
  192.168.12.0/26  for VLAN 12
However, I'm not quite there yet (see 3.).

2.  I'm using policy routing on my hosts to steer VXLAN traffic (UDP 
dest. port 4789) to interface br-ex.12 --  all other traffic from 
192.168.1.14 is source routed from br-ex.1, presumably because br-ex.1 
is a lower-numbered interface than br-ex.12  (?) -- interesting question 
whether I'm relying here on the order in which I created these two 
interfaces.


  [root@langrain ~]# ip a
  ...
  7: br-ex.1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue 
state UNKNOWN group default

  link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
  inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
 valid_lft forever preferred_lft forever
  8: br-ex.12: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1554 qdisc noqueue 
state UNKNOWN group default

  link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
  inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.12
 valid_lft forever preferred_lft forever

3.  It's not clear to me how to set up multiple nodes with packstack if a 
node's tunnel IP does not equal its admin IP (or the OpenStack API IP in 
case of a controller node).  With packstack, I can only specify the 
compute node IPs through CONFIG_COMPUTE_HOSTS. Presumably, these IPs are 
used for both packstack deployment (admin IP) and for configuring the 
VXLAN tunnel IPs (local_ip and remote_ip parameters).  How would I 
specify different IPs for these purposes? (Recall that my hosts have a 
single NIC).



In any case, native traffic on bridge br-ex is sent via br-ex.1 (VLAN 
1), which is also the reason the Neutron gateway port qg-XXX needs to be 
an access port for VLAN 1 (tag: 1).   VXLAN traffic is sent from 
br-ex.12 on all compute nodes.  See the 2 cases below:



Case 1. Max-size ping from compute node 'langrain' (192.168.1.14) to 
another host on same LAN
 = Native traffic sent from br-ex.1; no traffic sent from 
br-ex.12


[fn@langrain ~]$ ping -M do -s 1472 -c 1 192.168.1.54
PING 192.168.1.54 (192.168.1.54) 1472(1500) bytes of data.
1480 bytes from 192.168.1.54: icmp_seq=1 ttl=64 time=0.766 ms

[root@langrain ~]# tcpdump -n -i br-ex.1 dst 192.168.1.54
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-ex.1, link-type EN10MB (Ethernet), capture size 65535 bytes
10:32:37.666572 IP 192.168.1.14 > 192.168.1.54: ICMP echo request, id 
10432, seq 1, length 1480
10:32:42.673665 ARP, Request who-has 192.168.1.54 tell 192.168.1.14, 
length 28



Case 2: Max-size ping from a guest1 (10.0.0.1) on compute node 
'langrain' (192.168.1.14)
 to a guest2 (10.0.0.3) on another compute node 
(192.168.1.21) via VXLAN tunnel.

 Guests are on the same virtual network 10.0.0.0/24
 = Encapsulated traffic sent from br-ex.12; no traffic 
sent from br-ex.1


$ ping -M do -s 1472 -c 1 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 1472(1500) bytes of data.
1480 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=2.22 ms

[root@langrain ~]# tcpdump -n -i br-ex.12
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-ex.12, link-type EN10MB (Ethernet), capture size 65535 bytes

11:02:56.916265 IP 192.168.1.14.47872 > 192.168.1.21.4789: VXLAN, flags 
[I] (0x08), vni 10

ARP, Request who-has 10.0.0.3 tell 10.0.0.1, length 28
11:02:56.916991 IP 192.168.1.21.51408 > 192.168.1.14.4789: VXLAN, flags 
[I] (0x08), vni 10

ARP, Reply 10.0.0.3 is-at fa:16:3e:e6:e1:c8, length 28
11:02:56.917282 IP 

Re: [openstack-dev] [Openstack] [Horizon][Keystone] Failed to set up keystone v3 api for horizon

2015-03-12 Thread Lei Zhang
I created a project for the user admin (6433222efd78459bb70ad9adbcfac418).

The token Horizon passes is a project-scoped token, so it cannot pass
the cloud_admin rule.
Changing the rule to the admin user_id is a little trick, but it works.
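
(For anyone testing this outside Horizon: the cloud_admin rule's domain_id
check expects a domain-scoped token, which can be requested directly with
python-keystoneclient. A minimal sketch with placeholder endpoint and
credentials:)

```python
from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client

# Request domain scope (domain_name) instead of project scope (project_name).
auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin',
                   password='secret',
                   user_domain_name='Default',
                   domain_name='Default')
keystone = client.Client(session=session.Session(auth=auth))
print(keystone.projects.list())
```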

On Thu, Mar 12, 2015 at 5:16 PM, Trelohan Christophe 
ctrelo...@voyages-sncf.com wrote:

 Hello,

 Is the user_id substituted into the cloud_admin rule the id of the cloud_admin user?
 I don't think you can log in with the cloud_admin user in horizon; it seems
 that without a project created for a user, you can't log in
 to horizon.

 I'm also interested in this; I also followed the mentioned article, but
 when I try to log in with the admin user in the default domain,
 I get the same error (not authorized to list_projects), both with horizon
 and the REST API.




 De : Lei Zhang [mailto:zhang.lei@gmail.com]
 Envoyé : jeudi 12 mars 2015 03:33
 À : openstack; OpenStack Development Mailing List
 Objet : [Openstack] [Horizon][Keystone] Failed to set up keystone v3 api
 for horizon

 is there anyone tryed this and successfully?

 On Mon, Mar 9, 2015 at 4:25 PM, Lei Zhang zhang.lei@gmail.com wrote:
 Hi guys,

 I am setting up the keystone v3 API. Now I have met an issue with the
 `cloud_admin` policy.

 Base on the
 http://www.florentflament.com/blog/setting-keystone-v3-domains.html
 article, I modify the cloud_admin policy to

 ```
 cloud_admin: rule:admin_required and
 domain_id:ef0d30167f744401a0cbfcc938ea7d63,
 ```

 But the cloud_admin don't work as expected. I failed to open all the
 identity panel ( like http://host/horizon/identity/domains/)
 Horizon tell me Error: Unable to retrieve project list.
 And keystone log warning:

 ```
 2015-03-09 16:00:06.423 9415 DEBUG keystone.policy.backends.rules [-]
 enforce identity:list_user_projects: {'is_delegated_auth': False,
 'access_token_id': None, 'user_id': u'6433222efd78459bb70ad9adbcfac418',
 'roles': [u'_member_', u'admin'], 'trustee_id': None, 'trustor_id': None,
 'consumer_id': None, 'token': KeystoneToken
 (audit_id=DWsSa6yYSWi0ht9E7q4uhw, audit_chain_id=w_zLBBeFQ82KevtJrdKIJw) at
 0x7f4503fab3c8, 'project_id': u'4d170baaa89b4e46b239249eb5ec6b00',
 'trust_id': None}, enforce
 /usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:100
 2015-03-09 16:00:06.061 9410 WARNING keystone.common.wsgi [-] You are not
 authorized to perform the requested action: identity:list_projects (Disable
 debug mode to suppress these details.)
 ```

 ​I make some debug and found that, the root cause is that the `context`
 variable in keystone has no `domain_id` field( like the above keystone
 log). So the `cloud_admin` rule failed.​ if i change the `cloud_admin` to
 following. It works as expected.

 ```
 cloud_admin: rule:admin_required and
 user_id:6433222efd78459bb70ad9adbcfac418,
 ```

 I found that in the keystone code[0], the domain_id only exist when it is
 a domain scope. But i believe that the horizon login token is a project
 one( I am not very sure this)

 ```
 if token.project_scoped:
 auth_context['project_id'] = token.project_id
 elif token.domain_scoped:
 auth_context['domain_id'] = token.domain_id
 else:
 LOG.debug('RBAC: Proceeding without project or domain scope')

 ```

 Is it a bug? or some wrong configuration?


 Following is my configuration.


 ```
 # /etc/keystone/keystone.conf
 [DEFAULT]
 debug=true
 verbose=true
 log_dir=/var/log/keystone
 [assignment]
 driver = keystone.assignment.backends.sql.Assignment
 [database]
 connection=mysql://:@controller/keystone
 [identity]
 driver=keystone.identity.backends.sql.Identity
 [memcache]
 servers=controller1:11211,controller2:11211,controller3:1121
 [token]
 provider=keystone.token.providers.uuid.Provider
 ```

 ```
 # /etc/openstack-dashboard/local_settings.py ( partly )
 POLICY_FILES_PATH = /etc/openstack-dashboard/
 POLICY_FILES = {
 'identity': 'keystone_policy.json',
 }
 OPENSTACK_HOST = 127.0.0.1
 OPENSTACK_KEYSTONE_URL = http://%s:5000/v3; % OPENSTACK_HOST
 OPENSTACK_KEYSTONE_DEFAULT_ROLE = _member_
 OPENSTACK_API_VERSIONS = {
  data_processing: 1.1,
  identity: 3,
  volume: 2
 }
 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'admin'
 ```

 ​[0]
 https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py#L58
 ​

 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l




 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l




-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-12 Thread Leo Y
What is meant by "if that notification is lost, the agent will
eventually resynchronize"? Is it proven/guaranteed? By what means?
Can you please describe the process in more detail, or point me to resources
that describe it?

Thank you


On Mon, Mar 9, 2015 at 2:11 AM, Kevin Benton blak...@gmail.com wrote:

 Port changes will result in an update message being sent on the AMQP
 message bus. When the agent receives it, it will affect the communications
 then. If that notification is lost, the agent will eventually resynchronize.

 So during normal operations, the change should take effect within a few
 seconds.

 On Sat, Mar 7, 2015 at 4:10 AM, Leo Y minh...@gmail.com wrote:

 Hello,

 What happens when neutron DB is updated to change network settings (e.g.
 via Dashboard or manually) when there are communication sessions opened in
 compute nodes. Does it influence those sessions? When the update is
 propagated to compute nodes?

 Thank you

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Leo
-
I enjoy the massacre of ads. This sentence will slaughter ads without a
messy bloodbath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Keystone] Failed to set up keystone v3 api for horizon

2015-03-12 Thread Lei Zhang
Hi Lin,

These two patch sets are what I wanted. Thanks a lot.

BTW, is it possible that these patch sets will be finished in Kilo?

On Thu, Mar 12, 2015 at 5:41 PM, Lin Hua Cheng os.lch...@gmail.com wrote:

 Hi,

 The 'cloud_admin' policy rule requires a domain-scoped token to work.

 Horizon does not currently support domain-scoped tokens. So yes, it is a
 gap in horizon at the moment.

 There are on-going patches to address this in horizon:
 - https://review.openstack.org/#/c/141153/
 - https://review.openstack.org/#/c/148082/

 Dan (esp) prepared a nicely written document on how this should eventually
 work.

 -Lin

 On Wed, Mar 11, 2015 at 7:33 PM, Lei Zhang zhang.lei@gmail.com
 wrote:

 is there anyone tryed this and successfully?

 On Mon, Mar 9, 2015 at 4:25 PM, Lei Zhang zhang.lei@gmail.com
 wrote:

 Hi guys,

  I am setting up the keystone v3 API. Now I have met an issue with the
 `cloud_admin` policy.

 Base on the
 http://www.florentflament.com/blog/setting-keystone-v3-domains.html
 article, I modify the cloud_admin policy to

 ```
 cloud_admin: rule:admin_required and
 domain_id:ef0d30167f744401a0cbfcc938ea7d63,
 ```

 But the cloud_admin don't work as expected. I failed to open all the
 identity panel ( like http://host/horizon/identity/domains/)
 Horizon tell me Error: Unable to retrieve project list.
 And keystone log warning:

 ```
 2015-03-09 16:00:06.423 9415 DEBUG keystone.policy.backends.rules [-]
 enforce identity:list_user_projects: {'is_delegated_auth': False,
 'access_token_id': None, 'user_id': u'6433222efd78459bb70ad9adbcfac418',
 'roles': [u'_member_', u'admin'], 'trustee_id': None, 'trustor_id': None,
 'consumer_id': None, 'token': KeystoneToken
 (audit_id=DWsSa6yYSWi0ht9E7q4uhw, audit_chain_id=w_zLBBeFQ82KevtJrdKIJw) at
 0x7f4503fab3c8, 'project_id': u'4d170baaa89b4e46b239249eb5ec6b00',
 'trust_id': None}, enforce
 /usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:100
 2015-03-09 16:00:06.061 9410 WARNING keystone.common.wsgi [-] You are
 not authorized to perform the requested action: identity:list_projects
 (Disable debug mode to suppress these details.)
 ```

 ​I make some debug and found that, the root cause is that the `context`
 variable in keystone has no `domain_id` field( like the above keystone
 log). So the `cloud_admin` rule failed.​ if i change the `cloud_admin` to
 following. It works as expected.

 ```
 cloud_admin: rule:admin_required and user_id:
 6433222efd78459bb70ad9adbcfac418,
 ```

 I found that in the keystone code[0], the domain_id only exist when it
 is a domain scope. But i believe that the horizon login token is a project
 one( I am not very sure this)

 ```
 if token.project_scoped:
 auth_context['project_id'] = token.project_id
 elif token.domain_scoped:
 auth_context['domain_id'] = token.domain_id
 else:
 LOG.debug('RBAC: Proceeding without project or domain scope')

 ```

 Is it a bug? or some wrong configuration?


 Following is my configuration.


 ```
 # /etc/keystone/keystone.conf
 [DEFAULT]
 debug=true
 verbose=true
 log_dir=/var/log/keystone
 [assignment]
 driver = keystone.assignment.backends.sql.Assignment
 [database]
 connection=mysql://:@controller/keystone
 [identity]
 driver=keystone.identity.backends.sql.Identity
 [memcache]
 servers=controller1:11211,controller2:11211,controller3:1121
 [token]
 provider=keystone.token.providers.uuid.Provider
 ```

 ```
 # /etc/openstack-dashboard/local_settings.py ( partly )
 POLICY_FILES_PATH = /etc/openstack-dashboard/
 POLICY_FILES = {
 'identity': 'keystone_policy.json',
 }
 OPENSTACK_HOST = 127.0.0.1
 OPENSTACK_KEYSTONE_URL = http://%s:5000/v3; % OPENSTACK_HOST
 OPENSTACK_KEYSTONE_DEFAULT_ROLE = _member_
 OPENSTACK_API_VERSIONS = {
  data_processing: 1.1,
  identity: 3,
  volume: 2
 }
 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'admin'
 ```

 ​[0]
 https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py#L58
 ​

 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l




 --
 Lei Zhang
 Blog: http://xcodest.me
 twitter/weibo: @jeffrey4l

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-12 Thread Daniel P. Berrange
On Thu, Mar 12, 2015 at 09:07:30AM -0500, Flavio Percoco wrote:
 On 11/03/15 15:06 -1000, John Bresnahan wrote:
 FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy enough
 thing to get right and speaks to the quality standard to which the product
 is held even in commit messages and comments (consider the 'broken window
 theory').  Of course everyone makes mistakes (I am a terrible speller) but
 correcting a spelling error should be a trivial matter.  If a reviewer
  notices a spelling error I would expect them to point it out.
 
 I'd agree depending on the status of the patch. If the patch has
 already 2 +2s and someone blocks it because of a spelling error, then
 the cost of fixing it, running the CI jobs and getting the reviews
 again is higher than living with a simple typo.

Also remember that submitting patches which fix typos is a great way
for new contributors to gain ATC status and thus qualify for a free design
summit pass, so it is good to leave plenty of typos around :-P

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency runtime dependency on fixtures/testtools

2015-03-12 Thread Brant Knudson
On Thu, Mar 12, 2015 at 7:20 AM, Davanum Srinivas dava...@gmail.com wrote:

 Alan,

 We are debating this on:
 https://review.openstack.org/#/c/157135/

 Please hop on :)
 -- dims

 On Thu, Mar 12, 2015 at 5:28 AM, Alan Pevec ape...@gmail.com wrote:
  Hi,
 
  hijacking this thread to point out something that feels wrong in the
  dependency chain which jumped out:
 
  Collecting testtools>=0.9.22 (from
 fixtures>=0.3.14->oslo.concurrency>=1.4.1->keystone==2015.1.dev395)
 
  fixtures is imported in oslo_concurrency/fixture/lockutils.py but
  that's not really used at _runtime_
 
 
  Cheers,
  Alan
 


And it's also being discussed in keystone for deps for non-default
features: https://review.openstack.org/#/c/162360/

-- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][nfv] is there any reason neutron.allow_duplicate_networks should not be True by default?

2015-03-12 Thread Matt Riedemann



On 3/11/2015 7:23 PM, Ian Wells wrote:

On 11 March 2015 at 10:56, Matt Riedemann mrie...@linux.vnet.ibm.com
mailto:mrie...@linux.vnet.ibm.com wrote:

While looking at some other problems yesterday [1][2] I stumbled
across this feature change in Juno [3] which adds a config option
allow_duplicate_networks to the [neutron] group in nova. The
default value is False, but according to the spec [4] neutron allows
this and it's just exposing a feature available in neutron via nova
when creating an instance (create the instance with 2 ports from the
same network).

My question then is why do we have a config option to toggle a
feature that is already supported in neutron and is really just
turning a failure case into a success case, which is generally
considered OK by our API change guidelines [5].

I'm wondering if there is anything about this use case that breaks
other NFV use cases, maybe something with SR-IOV / PCI?  If not, I
plan on pushing a change to deprecate the option in Kilo and remove
it in Liberty with the default being to allow the operation.


This was all down to backward compatibility.

Nova didn't allow two interfaces on the same Neutron network.  We tried
to change this by filing a bug, and the patches got rejected because the
original behaviour was claimed to be intentional and desirable.  (It's
not clear that it was intentional behaviour because it was never
documented, but the same lack of documented intent meant it's also not
clear it was a bug, so the situation was ambiguous.)

Eventually it was fixed as new functionality using a spec [1] so that
the change and reasoning could be clearly described, and because of the
previous concerns, Racha, who implemented the spec, additionally chose
to use a config item to preserve the original behaviour unless the new
one was explicitly requested.
--
Ian.

[1]
https://review.openstack.org/#/c/97716/5/specs/juno/nfv-multiple-if-1-net.rst


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



For anyone following along, we're deprecating the 
allow_duplicate_networks option in Kilo and will remove it in Liberty
(and just make it work like this by default):


https://review.openstack.org/#/c/163581/
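
For reference, marking such an option for removal is straightforward with
oslo.config, assuming a release new enough to support the
deprecated_for_removal flag (a sketch only, not the exact Nova patch):

    from oslo_config import cfg

    neutron_opts = [
        cfg.BoolOpt('allow_duplicate_networks',
                    default=False,
                    deprecated_for_removal=True,
                    help='DEPRECATED: attaching an instance to the same '
                         'network more than once will simply be allowed '
                         'once this option is removed.'),
    ]

    cfg.CONF.register_opts(neutron_opts, group='neutron')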

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-12 Thread Flavio Percoco

On 11/03/15 15:06 -1000, John Bresnahan wrote:
FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy 
enough thing to get right and speaks to the quality standard to which 
the product is held even in commit messages and comments (consider the 
'broken window theory').  Of course everyone makes mistakes (I am a 
terrible speller) but correcting a spelling error should be a trivial 
matter.  If a reviewer notices a spelling error I would expect them to 
point it out.


I'd agree depending on the status of the patch. If the patch has
already 2 +2s and someone blocks it because of a spelling error, then
the cost of fixing it, running the CI jobs and getting the reviews
again is higher than living with a simple typo.

Process and rules are good, but we must evaluate them on a case-by-case
basis to make sure we're not blocking important work on things that
are not that relevant after all.



On 3/11/15 2:22 PM, Kuvaja, Erno wrote:

Hi all,

Following the code reviews lately, I’ve noticed that we (the fan club
seems to be growing on a weekly basis) have been growing a culture of
nitpicking [1] and bikeshedding [2][3] over almost every single change.

Seriously, my dear friends, the following things are not worth a “-1” vote
or even a comment:

1)Minor spelling errors on commit messages (as long as the message comes
through and flags are not misspelled).

2)Minor spelling errors on comments (docstrings and documentation is
there and there, but comments, come-on).

3)Used syntax that is functional, readable and does not break
consistency but does not please your poem bowel.

4)Other things you “just did not realize to check if they were there”.
After you have gone through the whole change, go and look at your comments
again and think twice about whether your concern/question/whatsoever was
addressed somewhere else than where your first intuition would have dropped it.

We have a relatively high review volume for Glance at the moment and this
nitpicking and bikeshedding does not help anyone. At best it just
tightens nerves and breaks our group. Obviously, if there is a “you had ONE
job” kind of situation, or a relatively high amount of errors combined
with something serious, it’s reasonable to ask to fix the typos on the
way as well. But if the reason is a need to increase your statistics, a
personal perfectionist nature, or actually I do not care what: just stop,
or go and do it somewhere else.



Thanks for bringing all this up, Erno. I've been seeing the same
pattern for all the points you've mentioned above. It's a good
reminder for people to treat each patch individually so we avoid
making our process and rules a pain for everyone.

Flavio



Love and pink ponies,

-Erno

[1] www.urbandictionary.com/define.php?term=nitpicking
http://www.urbandictionary.com/define.php?term=nitpicking

[2] http://bikeshed.com

[3] http://en.wiktionary.org/wiki/bikeshedding



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-12 Thread Christopher Yeoh
FWIW I think we need to consider the V2 API completely frozen (this freeze
does not apply to v2.1 microversions) except under very serious
circumstances and only for very high priority bug fixes, and even then only
apply this via a suitable microversion bump. We really want to get rid of
the V2 API code asap anyway.  You can after all request a version number on
a per-method basis as long as you are talking to v2.1. So the only forced
upgrade is the v2 to v2.1 transition, and those APIs should be identical.

On Fri, Mar 13, 2015 at 12:02 AM, Christopher Yeoh cbky...@gmail.com
wrote:




 On Thu, Mar 12, 2015 at 11:19 PM, Christopher Yeoh cbky...@gmail.com
 wrote:

 On Wed, 11 Mar 2015 09:32:11 -0600
 Chris Friesen chris.frie...@windriver.com wrote:

 
  Hi,
 
  I'm working on bug #1420848 which addresses the issue that doing a
  service-disable followed by a service-enable against a down
  compute node will result in the compute node going up for a while,
  possibly causing delays to operations that try to schedule on that
  compute node.
 
  The proposed solution is to add a new reported_at field in the DB
  schema to track when the service calls _report_state().
 
  The backend is straightforward, but I'm trying to figure out the best
  way to represent this via the REST API response.
 
  Currently we response includes an updated_at property, which maps
  directly to the auto-updated updated_at field in the database.
 
  Would it be acceptable to just put the reported_at value (if it
  exists) in the updated_at property?  Logically the reported_at
  value is just a determination of when the service updated its own
  state, so an argument could be made that this shouldn't break
  anything.
 


 So I think this is the critical point here: is this a backwards-compatible
 API change or not? Would reporting reported_at in updated_at
 cause *anyone* any pain? For this reason I think it has to go through
 as a nova spec (and if you think it's not going to cause pain, get some
 people from the mailing list to +1 it as a backwards-compatible API change,
 because it has always been a bug). If that is the conclusion and you just
 reuse updated_at

 then the procedure would be:


 - Add it to v2 and no v2 extension required
 - Add it to v2.1 without an extension
 - No change required in terms of microversions because it is already
   in the base v2.1 code

 If you go the reported_at route and there is no change to updated_at,

 but the fix is considered a new feature rather than a bug fix, then I think
 we should seriously consider whether it should be fixed in V2 at all, because
 the v2 API is basically frozen and we can just add it as a microversion (we
 don't even need to support it in the base v2.1), just as an API microversion.

 In which case the documents that Kevin pointed to should help - if you
 have any problems catch me on irc or on return email
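
To illustrate the microversion route in the abstract (a toy only, not Nova's
actual controller code, and the 2.4 cut-off is an assumption):

    MIN_VERSION_WITH_REPORTED_AT = (2, 4)   # assumed microversion for the new field

    def build_service_views(services, requested_version):
        views = []
        for svc in services:
            view = {'id': svc['id'],
                    'status': svc['status'],
                    'updated_at': svc['updated_at']}
            if requested_version >= MIN_VERSION_WITH_REPORTED_AT:
                # only clients that asked for the newer microversion see it
                view['reported_at'] = svc['reported_at']
            views.append(view)
        return {'services': views}

    svc = {'id': 1, 'status': 'enabled',
           'updated_at': '2015-03-11T09:00:00',
           'reported_at': '2015-03-12T10:00:00'}
    assert 'reported_at' not in build_service_views([svc], (2, 1))['services'][0]
    assert 'reported_at' in build_service_views([svc], (2, 4))['services'][0]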


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-12 Thread Christopher Yeoh
On Thu, Mar 12, 2015 at 11:19 PM, Christopher Yeoh cbky...@gmail.com
wrote:

 On Wed, 11 Mar 2015 09:32:11 -0600
 Chris Friesen chris.frie...@windriver.com wrote:

 
  Hi,
 
  I'm working on bug #1420848 which addresses the issue that doing a
  service-disable followed by a service-enable against a down
  compute node will result in the compute node going up for a while,
  possibly causing delays to operations that try to schedule on that
  compute node.
 
  The proposed solution is to add a new reported_at field in the DB
  schema to track when the service calls _report_state().
 
  The backend is straightforward, but I'm trying to figure out the best
  way to represent this via the REST API response.
 
  Currently we response includes an updated_at property, which maps
  directly to the auto-updated updated_at field in the database.
 
  Would it be acceptable to just put the reported_at value (if it
  exists) in the updated_at property?  Logically the reported_at
  value is just a determination of when the service updated its own
  state, so an argument could be made that this shouldn't break
  anything.
 


 So I think this is the critical point here: is this a backwards-compatible
 API change or not? Would reporting reported_at in updated_at
 cause *anyone* any pain? For this reason I think it has to go through
 as a nova spec (and if you think it's not going to cause pain, get some
 people from the mailing list to +1 it as a backwards-compatible API change,
 because it has always been a bug). If that is the conclusion and you just
 reuse updated_at

then the procedure would be:


 - Add it to v2 and no v2 extension required
 - Add it to v2.1 without an extension
 - No change required in terms of microversions because it is already
   in the base v2.1 code

 If you go the reported_at route and there is no change to updated_at,

but the fix is considered a new feature rather than a bug fix, then I think
we should seriously consider whether it should be fixed in V2 at all, because
the v2 API is basically frozen and we can just add it as a microversion (we
don't even need to support it in the base v2.1), just as an API microversion.

In which case the documents that Kevin pointed to should help - if you have
any problems catch me on irc or on return email


  Otherwise, by my reading of
  
 https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
 
  it seems like if I wanted to add a new reported_at property I would
  need to do it via an API extension.
 
  Anyone have opinions?
 
  Chris
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] proper way to extend index() and add a new property?

2015-03-12 Thread Chris Friesen


I'm looking for some technical advice on API extensions.

I want to add a new field to the output of the nova service-list command, 
which currently maps to ServiceController.index().


For the v2 API this seems straightforward, I can add a new extension and in the 
existing function I can call self.ext_mgr.is_loaded() to see if my extension is 
loaded, similar to the existing code that checks for 'os-extended-services-delete'.


For v2.1 (ie the plugins in the v3 directory) the extension management seems to 
be different though.  As far as I can tell I can only extend the controller and 
create a new index() that takes as arguments the request and the output of the 
existing index() function.


The problem I have with this is that the existing index() function returns a 
dict with one key per service in the cluster, which could be quite a few.  It 
seems highly inefficient to look up all the services *again* and extract the 
single value from each that I care about and add it to the output of the 
original index() function.  The best I can do for efficiency is O(n + n * log(n)).


Is there a better way to handle this?  Maybe a way to modify the existing v2.1 
index() function to check whether the new extension is loaded and add in the new 
field based on that (similar to how it's done in v2)?
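
As an illustration of the cheaper alternative being asked about (toy code,
not the real extension API, and it assumes the usual {'services': [...]}
response shape): if the extra attribute can be fetched for all services in
one bulk query, merging it into the existing index() output by id is a
single linear pass:

    def merge_field(base_response, extra_by_id, field_name):
        # base_response is what the existing index() returned; extra_by_id is
        # the result of one bulk lookup of just the new attribute.
        for svc in base_response['services']:
            svc[field_name] = extra_by_id.get(svc['id'])
        return base_response

    base = {'services': [{'id': 1, 'state': 'up'}, {'id': 2, 'state': 'down'}]}
    extra = {1: '2015-03-12T10:00:00', 2: '2015-03-12T09:58:00'}
    print(merge_field(base, extra, 'reported_at'))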


Thanks,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-12 Thread Assaf Muller


- Original Message -
  However, I briefly looked through the L2 agent code and didn't see a
  periodic task to resync the port information to protect from a neutron
  server that failed to send a notification because it crashed or lost its
  amqp connection. The L3 agent has a periodic sync routers task that helps in
  this regard.

The L3 agent only does a periodic sync if the full_sync flag was turned on,
which happens as a result of an error.

  Maybe another neutron developer more familiar with the L2
  agent can chime in here if I'm missing anything.
 
 i don't think you are missing anything.
 periodic sync would be a good improvement.
 
 YAMAMOTO Takashi
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A question on static routes on Neutron router

2015-03-12 Thread Kevin Benton
Yes, the extra routes extension allows IP addresses from any of the
networks connected to the router.

I see in the code that send_redirects is set to 0 so it will not generate
ICMP redirect messages in the case you mentioned. I don't see anything
obviously preventing the forwarding to a next hop on the same subnet, but
you would have to try it out to be 100% sure.

On Thu, Mar 12, 2015 at 5:25 PM, NAPIERALA, MARIA H mn1...@att.com wrote:

  Can a static/extra route on Neutron router point to an internal/tenant
 subnet interface as the next-hop?
 If yes, can Neutron router forward packets received from a host on an
 attached subnet and matching on a configured static route back to the same
 subnet (to a different host)?

 Appreciate the help answering the questions.

 Maria



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A question on static routes on Neutron router

2015-03-12 Thread Maria Napierala
 
 
 
 Yes, the extra routes extension allows IP addresses from any of the
 networks connected to the router.

 I see in the code that send_redirects is set to 0 so it will not generate
 ICMP redirect messages in the case you mentioned. I don't see anything
 obviously preventing the forwarding to a next hop on the same subnet, but
 you would have to try it out to be 100% sure.

 On Thu, Mar 12, 2015 at 5:25 PM, NAPIERALA, MARIA H mn1...@att.com wrote:

  Can a static/extra route on Neutron router point to an internal/tenant
  subnet interface as the next-hop?
  If yes, can Neutron router forward packets received from a host on an
  attached subnet and matching on a configured static route back to the same
  subnet (to a different host)?

  Appreciate the help answering the questions.

  Maria

 --
 Kevin Benton
 
Thanks much for the reply. I have looked at the code for the extra route
extension and there was nothing preventing the static route next-hop from
being on an internal subnet. However, all the discussion and examples I found
were about external subnets, so I wanted to make sure. I guess that is
probably where it is primarily being used.

Regarding the second question, at least the router should not send ICMP
redirects (thanks for pointing it out). I will try it.

Maria



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-12 Thread Kevin Benton
If there are any errors on the agent connecting to the message bus or
retrieving messages, an exception will be thrown in the main rpc_loop,
which will be caught and a sync flag will be set to true, which will
trigger the sync on the next loop.

However, I briefly looked through the L2 agent code and didn't see a
periodic task to resync the port information to protect from a neutron
server that failed to send a notification because it crashed or lost its
amqp connection. The L3 agent has a periodic sync routers task that helps in
this regard. Maybe another neutron developer more familiar with the L2
agent can chime in here if I'm missing anything.
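
For what it's worth, the kind of periodic resync being talked about usually
looks something like the following (purely illustrative, not the actual OVS
agent loop; the interval is an arbitrary value):

    import time

    def agent_loop(full_sync, process_events, poll_interval=2, resync_every=300):
        need_sync = True        # start with a full sync on agent startup
        last_sync = 0.0
        while True:
            try:
                if need_sync or time.time() - last_sync > resync_every:
                    full_sync()          # pull complete port state from the server
                    last_sync = time.time()
                    need_sync = False
                process_events()         # handle incremental notifications
            except Exception:
                # an RPC/AMQP failure means notifications may have been lost,
                # so force a full resync on the next iteration
                need_sync = True
            time.sleep(poll_interval)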

On Thu, Mar 12, 2015 at 6:19 AM, Leo Y minh...@gmail.com wrote:

 What does it mean that if that notification is lost, the agent will
 eventually resynchronize? Is it proven/guaranteed? By what means?
 Can you please describe the process in more detail? Or point me to resources
 that describe it.

 Thank you


 On Mon, Mar 9, 2015 at 2:11 AM, Kevin Benton blak...@gmail.com wrote:

 Port changes will result in an update message being sent on the AMQP
 message bus. When the agent receives it, it will affect the communications
 then. If that notification is lost, the agent will eventually resynchronize.

 So during normal operations, the change should take effect within a few
 seconds.

 On Sat, Mar 7, 2015 at 4:10 AM, Leo Y minh...@gmail.com wrote:

 Hello,

 What happens when the neutron DB is updated to change network settings (e.g.
 via Dashboard or manually) while there are communication sessions open on
 compute nodes? Does it influence those sessions? When is the update
 propagated to compute nodes?

 Thank you


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,
 Leo
 -
 I enjoy the massacre of ads. This sentence will slaughter ads without a
 messy bloodbath

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-12 Thread Kevin Benton
Yeah, I was making a bad assumption for the L2 and L3 agents. Sorry about that. It
sounds like we don't have any protection against servers failing to send
notifications.
On Mar 12, 2015 7:41 PM, Assaf Muller amul...@redhat.com wrote:



 - Original Message -
   However, I briefly looked through the L2 agent code and didn't see a
   periodic task to resync the port information to protect from a neutron
   server that failed to send a notification because it crashed or lost
 its
  amqp connection. The L3 agent has a periodic sync routers task that
 helps in
   this regard.

 The L3 agent periodic sync is only if the full_sync flag was turned on,
 which
 is a result of an error.

   Maybe another neutron developer more familiar with the L2
   agent can chime in here if I'm missing anything.
 
  i don't think you are missing anything.
  periodic sync would be a good improvement.
 
  YAMAMOTO Takashi
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-12 Thread YAMAMOTO Takashi
 However, I briefly looked through the L2 agent code and didn't see a
 periodic task to resync the port information to protect from a neutron
 server that failed to send a notification because it crashed or lost its
 amqp connection. The L3 agent has a periodic sync routers task that helps in
 this regard. Maybe another neutron developer more familiar with the L2
 agent can chime in here if I'm missing anything.

i don't think you are missing anything.
periodic sync would be a good improvement.

YAMAMOTO Takashi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-12 Thread Zhi Yan Liu
I'd prefer 1400UTC.

zhiyan

On Mon, Mar 9, 2015 at 4:07 AM, Nikhil Komawar
nikhil.koma...@rackspace.com wrote:

 Hi all,


 Currently, we have alternating times for Glance meetings. Now, with
 daylight savings being implemented in some parts of the world, we're
 thinking of moving the meeting time to just one slot, i.e. earlier in the
 day (or night). This solves the original conflicting-times issue that a
 subset of the individuals had; in addition, the schedule is less confusing
 and more unified.


 So, the new proposal is:

 Glance meetings [1] to be conducted weekly on Thursdays at 1400UTC [2] on
 #openstack-meeting-4


 This would be implemented on Mar 19th, given there are no major objections.


 Please vote with +1/-1 here.


 [1] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

 [2] http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0


 Thanks,
 -Nikhil

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency runtime dependency on fixtures/testtools

2015-03-12 Thread Ben Nemec
On 03/12/2015 09:18 AM, Brant Knudson wrote:
 On Thu, Mar 12, 2015 at 7:20 AM, Davanum Srinivas dava...@gmail.com wrote:
 
 Alan,

 We are debating this on:
 https://review.openstack.org/#/c/157135/

 Please hop on :)
 -- dims

 On Thu, Mar 12, 2015 at 5:28 AM, Alan Pevec ape...@gmail.com wrote:
 Hi,

 hijacking this thread to point out something that feels wrong in the
 dependency chain which jumped out:

  Collecting testtools>=0.9.22 (from
  fixtures>=0.3.14->oslo.concurrency>=1.4.1->keystone==2015.1.dev395)

 fixtures is imported in oslo_concurrency/fixture/lockutils.py but
 that's not really used at _runtime_


 Cheers,
 Alan


 
 And it's also being discussed in keystone for deps for non-default
 features: https://review.openstack.org/#/c/162360/
 
 -- Brant

We actually have a spec open to discuss this for the Oslo libs.  Would
love to get more input on it: https://review.openstack.org/#/c/153966/

It's possible that will need to become a cross-project spec if we
determine that more projects need an optional deps policy (which is
sounding like the case).
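
As one data point for that spec, a library can also keep an optional
test-helper import from becoming a hard requirement by guarding it and only
failing when the helper is actually used (a sketch, not oslo.concurrency's
real code):

    try:
        import fixtures
    except ImportError:
        fixtures = None

    class LockFixtureSketch(object):
        """Illustrative stand-in for a fixture that needs 'fixtures'."""
        def __init__(self):
            if fixtures is None:
                raise RuntimeError(
                    "This helper requires the 'fixtures' package; install "
                    "the library's test extra to use it.")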

-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.config 1.9.3 release

2015-03-12 Thread Doug Hellmann
The Oslo team is content to announce the release of:

oslo.config 1.9.3: Oslo Configuration API

For more details, please see the git log history below and:

http://launchpad.net/oslo.config/+milestone/1.9.3

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.config 1.9.2..1.9.3
---

3c51838 Switch to non-namespaced module imports

Diffstat (except docs and test files)
-

oslo_config/generator.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.db 1.7.0 release

2015-03-12 Thread Doug Hellmann
The Oslo team is content to announce the release of:

oslo.db 1.7.0: oslo.db library

For more details, please see the git log history below and:

http://launchpad.net/oslo.db/+milestone/1.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.db 1.6.0..1.7.0
---

464a3d0 Switch to non-namespaced module import - oslo_i18n
95d4543 Fix documented env variable for test connection
d0a0fdf Updated from global requirements

Diffstat (except docs and test files)
-

oslo_db/_i18n.py| 4 ++--
oslo_db/sqlalchemy/test_base.py | 2 +-
test-requirements-py2.txt   | 2 +-
test-requirements-py3.txt   | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/test-requirements-py2.txt b/test-requirements-py2.txt
index 34e18db..6ff3a1b 100644
--- a/test-requirements-py2.txt
+++ b/test-requirements-py2.txt
@@ -19 +19 @@ testtools>=0.9.36,!=1.2.0
-tempest-lib>=0.2.0
+tempest-lib>=0.3.0
diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt
index 03670e8..4290cc6 100644
--- a/test-requirements-py3.txt
+++ b/test-requirements-py3.txt
@@ -19 +19 @@ testtools>=0.9.36,!=1.2.0
-tempest-lib>=0.2.0
+tempest-lib>=0.3.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Enabling VM post-copy live migration

2015-03-12 Thread John Garbutt
On 12 March 2015 at 12:26, Luis Tomas l...@cs.umu.se wrote:
 On 03/12/2015 12:34 PM, John Garbutt wrote:

 On 12 March 2015 at 08:41, Luis Tomas l...@cs.umu.se wrote:

 Hi,

  As part of a European (FP7) project named ORBIT
 (http://www.orbitproject.eu/), I'm working on including the possibility
 of
 live-migrating VMs in OpenStack in a post-copy mode.
 This way of live-migrating VMs basically moves the computation right away
 to
 the destination and then the VM starts working from there, while still
 copying the memory from the source to the new location of the VM. That
 way
  the memory pages are only copied once: if the VM modifies them, they are
  already in the destination host. This basically ensures that migrations
  finish regardless of what the VM is doing, i.e., even for extremely
  memory-intensive VMs, thereby removing the problem of having VMs hanging
  in the migrating state forever (as discussed in previous mails, e.g.,

 http://lists.openstack.org/pipermail/openstack-dev/2015-February/055725.html).

 So far, I have included and tested this new functionality at the JUNO
 version, and the code modifications can be found in the github repository
 of
 the project (branch named post-copy):
  - https://github.com/orbitfp7/nova/tree/post-copy -- mainly
 enabling
 the possibility of using the libvirt post-copy flag (libvirt driver.py).
 Note post-copy migration is not using tunneling as LibVirt patch for
 that
 is not yet ready.
  - https://github.com/orbitfp7/python-novaclient/tree/post-copy --
 adding the possibility of using the post-copy mode when triggering the
 migration: nova live-migration [--block-migrate] [--post-copy] VM_ID
  - https://github.com/orbitfp7/horizon/tree/post-copy -- include a
 checkbox in the live-migration panel to perform the migration in
 post-copy
 mode. (like the one for enabling block-migration)

 To be able to live-migrate VMs in a post-copy way, I'm relying on some
 kernel+qemu+libvirt modifications, not yet merged upstream (but in their
 way
 to it), also available at the project github:
  - Kernel: https://lkml.org/lkml/2015/3/5/576
  - Qemu: https://github.com/orbitfp7/qemu/tree/wp3-postcopy
  - LibVirt: https://github.com/orbitfp7/libvirt/tree/wp3-postcopy

 Before merging the code in Nova, we usually like the dependent
 features to be released by the respective projects.

 Ideally we would like it to be easy to run that on some distro so
 people could test/use the feature fairly easily.

 Yes, that's why I proposed to target the version after kilo (or even the
 next to that one if need be)

Ah, cool. I just wanted to be explicit about that.

 If this is a nice feature to have in future versions of OpenStack, I'm
 happy
 to adapt the code for the next release (the one after KILO). Any comments
 are really welcome.

 It sounds like something that doesn't need an API call, as its a
 deployer choice if they have support for this new live-migrate mode.
 Is that true?

 Although maybe it has a substantial runtime penalty as a page read
 miss causes a fetch across the network, making it a user choice? Or do
 you only start the fetch mode at the point you detect a failure to
 merge using the regular live-migrate mode?


 I think it should be up to the user/admin what option to choose.
 Although post-copy ensures that the migration will finish, as you said, it
 could have some impact on the VM performance due to having to wait until a
 missing memory page is fetched. Anyway, I wouldn't say there is a
 substantial runtime penalty. In fact, the libvirt flag that we have included
 in OpenStack basically tries pre-copy first (normal live-migration), and
 after trying to copy all the memory once (first iteration), automatically
 changes to post-copy, meaning moving the VM CPU to the destination and only
 having to copy the remaining pages (the ones dirtied while doing the first
 copy iteration). This way the impact on the application performance is
 minimized.

Ah, that's what I was trying to describe and failed. Sounds good.

 On the other hand, post-copy has a downside. If by any chance the migration
 crashes during the process, unlike pre-copy, you cannot recover the VM, as
 neither the source nor the destination has a fully working VM at that time
 (part of the memory is at the source, part of it at the destination).

Eek, good point.

 These are basically the reasons we considered for making it as an optional
 choice.

Totally makes sense.

The only tip is to include that sort of information when you submit your
nova-spec, once those features are merged and released.
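
(As a purely conceptual toy of the behaviour described above, with no real
hypervisor calls: one pre-copy pass, then the CPU moves and only the pages
dirtied during that pass are pulled across afterwards.)

    def toy_migration(all_pages, dirtied_during_first_pass):
        resident = set(all_pages)               # first full pre-copy iteration
        stale = set(dirtied_during_first_pass)  # re-dirtied at the source
        print('switching CPU to destination; %d pages left to post-copy'
              % len(stale))
        while stale:
            resident.add(stale.pop())           # faulted or background-fetched page
        print('migration finished with %d pages resident' % len(resident))

    toy_migration(all_pages=range(8), dirtied_during_first_pass=[2, 5])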

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

2015-03-12 Thread Timothy Swanson (tiswanso)
Hi guys,

I got to this again today and realized that I had copied a local.conf with
“OFFLINE=True”.  Removing that resolved the issue with the run of stack.sh.

Hope that helps!

—Tim

On Mar 11, 2015, at 12:41 PM, Donald Stufft 
don...@stufft.io wrote:

Is it using an old version of setuptools? Like 0.6.28.

On Mar 11, 2015, at 11:28 AM, Timothy Swanson (tiswanso) 
tiswa...@cisco.com wrote:

I don’t have any solution just chiming in that I see the same error with 
devstack pulled from master on a new ubuntu trusty VM created last night.

'pip install --upgrade wheel' indicates:
Requirement already up-to-date: wheel in /usr/local/lib/python2.7/dist-packages

Haven’t gotten it cleared up.

Thanks,

Tim

On Mar 2, 2015, at 2:11 AM, Smigiel, Dariusz 
dariusz.smig...@intel.com wrote:



From: yuntong [mailto:yuntong...@gmail.com]
Sent: Monday, March 2, 2015 7:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

Hello,
I got an error when try to ./stack.sh as:
2015-03-02 05:58:20.692 | net.ipv4.ip_local_reserved_ports = 35357,35358
2015-03-02 05:58:20.959 | New python executable in tmp-venv-NoMO/bin/python
2015-03-02 05:58:22.056 | Installing setuptools, pip...done.
2015-03-02 05:58:22.581 | ERROR: 'pip wheel' requires the 'wheel' package. To 
fix this, run: pip install wheel

After pip install wheel, got same error.
In [2]: wheel.__path__
Out[2]: ['/usr/local/lib/python2.7/dist-packages/wheel']
In [5]: pip.__path__
Out[5]: ['/usr/local/lib/python2.7/dist-packages/pip']

$ which python
/usr/bin/python

As above, the wheel can be imported successfully,
any hints ?

Thanks.

Did you try pip install --upgrade wheel?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Keystone] Test plan template

2015-03-12 Thread Adam Young
I posted a test plan template for review.  While my template is
specific to Keystone, I think that it will benefit from a wider review.


I did not see a comparable document elsewhere.  There are the qa specs, 
but those look more like feature proposals for QA infrastructure than 
for test plans.  Tempest doesn't seem to have a specs repo, although 
with the push to do functional testing in the individual projects, I
suspect that these would not belong to tempest, either.



Here is my review: https://review.openstack.org/#/c/163882/

I took the document that we use in house and attempted to streamline 
it.  This template is long due mostly to the guidance text,  which 
should be removed in the submitted document.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-12 Thread Tristan Cacqueray
On 03/12/2015 07:33 AM, Ihar Hrachyshka wrote:
 For OSSA patch, there seems to be some concerns and issues with the
 patch that was developed under embargo. It seems it will take more
 time than expected to merge it in master. It may mean we will actually
 miss the backport for 2014.1.4.

The master review is now in the gate pipeline and should be merged anytime
soon. Considering the lengthy backlog for this bug, I would prefer having
people actually test the proposed solution before pushing the stable backports.

Can Paul (in CC) please have a look at the last patch set, as you are the one
who found the main drawback in the first iterations...


Regards,
Tristan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-12 Thread Okuma, Wayne
My vote is for 1500.

-Original Message-
From: Hemanth Makkapati [mailto:hemanth.makkap...@rackspace.com] 
Sent: Thursday, March 12, 2015 8:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

+1 to consistent time. 
Both 1400 and 1500 work me.

-Hemanth

From: Ian Cordasco ian.corda...@rackspace.com
Sent: Wednesday, March 11, 2015 10:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

I have no opinions on the matter. Either 1400 or 1500 work for me. I think 
there are a lot of people asking for it to be at 1500 instead though.
Would anyone object to changing it to 1500 instead (as long as it is one 
consistent time for the meeting)?

On 3/11/15, 01:53, Inessa Vasilevskaya ivasilevsk...@mirantis.com
wrote:

+1


On Mon, Mar 9, 2015 at 2:43 AM, Alexander Tivelkov 
ativel...@mirantis.com wrote:

Works for me, but the previous one worked as well. So, consider my vote 
as +1 unless the majority is against this :)


--
Regards,
Alexander Tivelkov




On Mon, Mar 9, 2015 at 12:36 AM, Fei Long Wang 
feil...@catalyst.net.nz wrote:

Oh, it means 3:00AM for me :-(


On 09/03/15 09:07, Nikhil Komawar wrote:






Hi all,


Currently, we have alternating times for Glance meetings. Now, with
daylight savings being implemented in some parts of the world, we're
thinking of moving the meeting time to just one slot, i.e. earlier in
the day (or night). This solves the original conflicting-times issue that
a subset of the individuals had; in addition, the schedule is less
confusing and more unified.



So, the new proposal is:
Glance meetings [1] to be conducted weekly on Thursdays at 1400UTC [2] on
#openstack-meeting-4



This would be implemented on Mar 19th, given there are no major 
objections.


Please vote with +1/-1 here.



[1] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting
[2] http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0


Thanks,
-Nikhil








--
Cheers  Best regards,
Fei Long Wang (王飞龙)
---
---
Senior Cloud Software Engineer
Tel: +64-48032246 tel:%2B64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
---
---


















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [nova] readout from Philly Operators Meetup

2015-03-12 Thread Tim Bell

 I completely agree with you - Sean and Joe.

 Since the argument was brought up I just wanted to point out that this quota 
 service thing is a bit of a unicorn at the moment, and it should not 
 distract from fixing and improving quota management and enforcement logic in
 the various OpenStack projects.

 I won't be able to introduce hierarchical quotas in neutron by the end of
 Kilo, but I'll keep it on the roadmap for Liberty.

 Salvatore


Given that hierarchical quotas make the quota handling more complex (to ensure
parent quotas are consistent as well), this would seem a good candidate for an
oslo library. During the Nova quota discussions, there was much consideration
of how things would work, and it would be a great cause of confusion if each
project had its own approach/semantics.

A central quota service would then be a later decision which would have less 
impact if the code for quotas is shared.
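
As a concrete illustration of the race being discussed, one common fix is to
let the database check and update usage in a single conditional UPDATE rather
than read-modify-write in Python (sketch only; the table name and schema are
invented, not any project's actual model):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    quota_usage = sa.Table(
        'quota_usage', metadata,
        sa.Column('project_id', sa.String(64)),
        sa.Column('resource', sa.String(64)),
        sa.Column('in_use', sa.Integer))

    def reserve(conn, project_id, resource, delta, limit):
        result = conn.execute(
            quota_usage.update()
            .where(quota_usage.c.project_id == project_id)
            .where(quota_usage.c.resource == resource)
            .where(quota_usage.c.in_use + delta <= limit)   # checked atomically
            .values(in_use=quota_usage.c.in_use + delta))
        if result.rowcount != 1:
            raise RuntimeError('quota exceeded for %s' % resource)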

Tim

 On 12 March 2015 at 11:59, Sean Dague s...@dague.net wrote:
 On 03/11/2015 08:31 PM, Joe Gordon wrote:
 
 
  On Wed, Mar 11, 2015 at 4:07 PM, Ihar Hrachyshka ihrac...@redhat.com
  mailto:ihrac...@redhat.com wrote:
 
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 03/11/2015 07:48 PM, Joe Gordon wrote:
   Out of sync Quotas --
  
   https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
  
   The quotas code is quite racy (this is kind of a known issue if you look
   at the bug tracker). It was actually marked as a top soft spot
   during last fall's bug triage -
   
  http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html
  
There is an operator proposed spec for an approach here -
   https://review.openstack.org/#/c/161782/
  
   Action: we should make a solution here a top priority for enhanced
   testing and fixing in Liberty. Addressing this would remove a lot
   of pain from ops.
  
  
   To help us better track quota bugs I created a quotas tag:
  
   https://bugs.launchpad.net/nova/+bugs?field.tag=quotas
  
   Next step is re-triage those bugs: mark fixed bugs as fixed,
   deduplicate bugs etc.
 
  (Being quite far from nova code, so ignore if not applicable)
 
  I would like to note that other services experience races in quota
  management too. Neutron has a spec approved to land in Kilo-3 that is
  designed to introduce a new quota enforcement mechanism that is
  expected to avoid (some of those) races:
 
  
  https://github.com/openstack/neutron-specs/blob/master/specs/kilo/better-quotas.rst
 
  I thought you may be interested in looking into it to apply similar
  ideas to nova.
 
 
  Working on a library for this hasn't been ruled out yet. But right now I
  am simply trying to figure out how to reproduce the issue, and nothing else.
 Right, I think assuming an architecture change will magically fix this
 without scenarios that expose the existing bugs seems overly optimistic.

 I think there is a short / medium term test / reproduce question here,
 then a longer term question about different architecture.

 -Sean

 --
 Sean Dague
 http://dague.net
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-12 Thread Hemanth Makkapati
+1 to consistent time. 
Both 1400 and 1500 work me.

-Hemanth

From: Ian Cordasco ian.corda...@rackspace.com
Sent: Wednesday, March 11, 2015 10:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

I have no opinions on the matter. Either 1400 or 1500 work for me. I think
there are a lot of people asking for it to be at 1500 instead though.
Would anyone object to changing it to 1500 instead (as long as it is one
consistent time for the meeting)?

On 3/11/15, 01:53, Inessa Vasilevskaya ivasilevsk...@mirantis.com
wrote:

+1


On Mon, Mar 9, 2015 at 2:43 AM, Alexander Tivelkov
ativel...@mirantis.com wrote:

Works for me, but the previous one worked as well. So, consider my vote
as +1 unless the majority is against this :)


--
Regards,
Alexander Tivelkov




On Mon, Mar 9, 2015 at 12:36 AM, Fei Long Wang
feil...@catalyst.net.nz wrote:

Oh, it means 3:00AM for me :-(


On 09/03/15 09:07, Nikhil Komawar wrote:






Hi all,


Currently, we have alternating times for Glance meetings. Now, with
daylight savings being implemented in some parts of the world, we're
thinking of moving the meeting time to just one slot, i.e. earlier in the
day (or night). This solves the original conflicting-times issue that a
subset of the individuals had; in addition, the schedule is less confusing
and more unified.



So, the new proposal is:
Glance meetings [1] to be conducted weekly on Thursdays at 1400UTC [2] on
#openstack-meeting-4



This would be implemented on Mar 19th, given there are no major
objections.


Please vote with +1/-1 here.



[1] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting
[2] http://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=0&sec=0


Thanks,
-Nikhil








--
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246 tel:%2B64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-12 Thread Devananda van der Veen
Without further ado, and since everyone (even though some haven't replied
here) has +1'd this, and since we could really use ramesh's +2's in the run
up to Kilo-3 and feature freeze, even without the customary waiting/voting
period being completely satisfied (after all, when we all agree, why wait a
week?), I'd like to officially welcome him to the core team.

-Devananda

On Tue, Mar 10, 2015 at 10:03 AM David Shrewsbury shrewsbury.d...@gmail.com
wrote:

 +1

 On Mar 9, 2015, at 6:03 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 Hi all,

 I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.

 He's been consistently providing good code reviews, and been in the top
 five active reviewers for the last 90 days and top 10 for the last 180
 days. Two cores have recently approached me to let me know that they, too,
 find his reviews valuable.

 Furthermore, Ramakrishnan has made significant code contributions to
 Ironic over the last year. While working primarily on the iLO driver, he
 has also done a lot of refactoring of the core code, touched on several
 other drivers, and maintains the proliantutils library on stackforge. All
 in all, I feel this demonstrates a good and growing knowledge of the
 codebase and architecture of our project, and feel he'd be a valuable
 member of the core team.

 Stats, for those that want them, are below the break.

 Best Regards,
 Devananda



  http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87

 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
 http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.policy 0.3.1 release

2015-03-12 Thread Doug Hellmann
The Oslo team is content to announce the release of:

oslo.policy 0.3.1: RBAC policy enforcement library for OpenStack

For more details, please see the git log history below and:

http://launchpad.net/oslo.policy/+milestone/0.3.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.policy 0.3.0..0.3.1
---

f9aaf11 Switch to non-namespaced module imports

Diffstat (except docs and test files)
-

oslo_policy/openstack/common/fileutils.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.concurrency 1.8.0 release

2015-03-12 Thread Doug Hellmann

The Oslo team is content to announce the release of:

oslo.concurrency 1.8.0: oslo.concurrency library

For more details, please see the git log history below and:

http://launchpad.net/oslo.concurrency/+milestone/1.8.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.concurrency 1.7.0..1.8.0


ff54256 Switch to non-namespaced module imports
94624a7 Remove py33 env from default tox list
46fcdd3 Add lockutils.get_lock_path() function

Diffstat (except docs and test files)
-

oslo_concurrency/lockutils.py  | 16 +++-
oslo_concurrency/openstack/common/fileutils.py |  2 +-
tox.ini|  2 +-
4 files changed, 34 insertions(+), 3 deletions(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-03-11 05:59:10 -0700:
 =
  Additional Interesting Bits
 =
 
 Rabbit
 --
 
 There was a whole session on Rabbit -
 https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
 
 Rabbit is a top operational concern for most large sites. Almost all
 sites have a "restart everything that talks to rabbit" script because
 during rabbit HA operations queues tend to blackhole.
 
 All other queue systems OpenStack supports are worse than Rabbit (from
 experience in that room).
 
 oslo.messaging < 1.6.0 was a significant regression in dependability
 from the incubator code. It now seems to be getting better but still has a
 lot of issues. (L112)
 
 Operators *really* want the concept in
 https://review.openstack.org/#/c/146047/ landed. (I asked them to
 provide such feedback in gerrit).
 

This reminded me that there are other options that need investigation.

A few of us have been looking at what it might take to use something
in between RabbitMQ and ZeroMQ for RPC and notifications. Some initial
forays into inspecting Gearman (which infra has successfully used for
quite some time as the backend of Zuul) look promising. A few notes:

* The Gearman protocol is crazy simple. There are currently 4 known gearman
  server implementations: Perl, Java, C, and Python (written and
  maintained by our own infra team). http://gearman.org/download/ for
  the others, and https://pypi.python.org/pypi/gear for the python one.

* Gearman has no pub/sub capability built in for 1:N comms. However, it
  is fairly straight forward to write workers that will rebroadcast
  messages to subscribers.

* Gearman's security model is not very rich. Mostly, if you have been
  authenticated to the gearman server (only the C server actually even
  supports any type of authentication, via SSL client certs), you can
  do whatever you want including consuming all the messages in a queue
  or filling up a queue with nonsense. This has been raised as a concern
  in the past and might warrant extra work to add support to the python
  server and/or add ACL support.

Part of our motivation for this is that some of us are going to be
deploying a cloud soon and none of us are excited about deploying and
supporting RabbitMQ. So we may be proposing specs to add Gearman as a
deployment option soon.
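
To make the rebroadcast idea concrete, a very rough sketch with the
pure-Python 'gear' client (calls shown as I remember them from the gear
documentation, so treat the exact API and the queue names as assumptions):

    import gear

    worker = gear.Worker('rebroadcaster')
    worker.addServer('127.0.0.1')
    worker.registerFunction('notifications')     # queue the services publish to

    client = gear.Client()
    client.addServer('127.0.0.1')
    client.waitForServer()

    SUBSCRIBER_QUEUES = ['subscriber-a', 'subscriber-b']   # assumed topics

    while True:
        job = worker.getJob()                    # block for the next message
        for queue in SUBSCRIBER_QUEUES:          # fan the payload back out 1:N
            client.submitJob(gear.Job(queue, job.arguments))
        job.sendWorkComplete()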

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Sean Dague
On 03/12/2015 12:47 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-03-11 05:59:10 -0700:
 =
  Additional Interesting Bits
 =

 Rabbit
 --

 There was a whole session on Rabbit -
 https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

 Rabbit is a top operational concern for most large sites. Almost all
  sites have a "restart everything that talks to rabbit" script because
  during rabbit HA operations queues tend to blackhole.

 All other queue systems OpenStack supports are worse than Rabbit (from
 experience in that room).

  oslo.messaging < 1.6.0 was a significant regression in dependability
  from the incubator code. It now seems to be getting better but still has a
  lot of issues. (L112)

 Operators *really* want the concept in
 https://review.openstack.org/#/c/146047/ landed. (I asked them to
 provide such feedback in gerrit).

 
 This reminded me that there are other options that need investigation.
 
 A few of us have been looking at what it might take to use something
 in between RabbitMQ and ZeroMQ for RPC and notifications. Some initial
 forays into inspecting Gearman (which infra has successfully used for
 quite some time as the backend of Zuul) look promising. A few notes:
 
 * The Gearman protocol is crazy simple. There are currently 4 known gearman
   server implementations: Perl, Java, C, and Python (written and
   maintained by our own infra team). http://gearman.org/download/ for
   the others, and https://pypi.python.org/pypi/gear for the python one.
 
 * Gearman has no pub/sub capability built in for 1:N comms. However, it
   is fairly straight forward to write workers that will rebroadcast
   messages to subscribers.
 
 * Gearman's security model is not very rich. Mostly, if you have been
   authenticated to the gearman server (only the C server actually even
   supports any type of authentication, via SSL client certs), you can
   do whatever you want including consuming all the messages in a queue
   or filling up a queue with nonsense. This has been raised as a concern
   in the past and might warrant extra work to add support to the python
   server and/or add ACL support.
 
 Part of our motivation for this is that some of us are going to be
 deploying a cloud soon and none of us are excited about deploying and
 supporting RabbitMQ. So we may be proposing specs to add Gearman as a
 deployment option soon.

I think experimentation of other models is good. There was some
conversation that maybe Kafka was a better model as well. However,
realize that services are quite chatty at this point and push pretty
large payloads through that bus. The HA story is also quite important,
because the underlying message architecture assumes reliable delivery
for some of the messages, and if they fall on the floor, you'll get
either leaked resources, or broken resources. It's actually the HA
recovery piece of Rabbit (and when it doesn't HA recover correctly)
that's seemingly the sharp edge most people are hitting.

So... experimentation is good, but also important to realize how much is
provided for by the infrastructure that's there.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread John Griffith
On Thu, Mar 12, 2015 at 10:48 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Thu, Mar 12, 2015, at 05:24 AM, Duncan Thomas wrote:
  ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
  /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
  DeprecationWarning: The oslo namespace package is deprecated. Please use
  oslo_i18n instead.
from oslo import i18n
  /opt/stack/cinder/cinder/openstack/common/policy.py:98:
  DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_config instead.
from oslo.config import cfg
  /opt/stack/cinder/cinder/openstack/common/policy.py:99:
  DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_serialization
  instead.
from oslo.serialization import jsonutils
  /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
  namespace package is deprecated. Please use oslo_messaging instead.
from oslo import messaging
 
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22:
  DeprecationWarning: The oslo namespace package is deprecated. Please use
  oslo_utils instead.
from oslo.utils import excutils
 

 Yay, the system is working as designed!

 Oslo froze early to prepare releases to integrate with the downstream
 projects. You found an issue and reported it. Dims and others worked on
 patches, and we're releasing new versions. All before your feature
 freeze, so you can adopt them.

 
  What are normal, non-developer users supposed to do with such warnings,
  other than:
  a) panic or b) Assume openstack is beta quality and then panic

 Next time, please try to be less snide. It makes it difficult to take
 you seriously.

 Doug

 
  --
  Duncan Thomas
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Very cool!!  I think there may have been some misunderstanding here on how
this would all shake out, but yes as Doug and Dims pointed out this worked
great.  Thanks everyone!!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [nova] readout from Philly Operators Meetup

2015-03-12 Thread Sajeesh Cimson Sasi
Hi Salvatore,
   We had a short discussion on hierarchical quota management 
in nova somewhere in December. The spec was approved, but the code couldn't 
make it to Kilo. I am trying to get it merged in Liberty. The implementation is 
done; only the test cases are pending.
   I have resubmitted the spec for the L release.
   https://review.openstack.org/160605
   Kindly have a look at it.
  You had mentioned quota management via oslo and storing 
quota values in keystone. Has any progress been made in that direction?
best regards
   sajeesh

From: Tim Bell [tim.b...@cern.ch]
Sent: 12 March 2015 21:51:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] FW:  [nova] readout from Philly Operators Meetup

 I completely agree with you - Sean and Joe.

 Since the argument was brought up I just wanted to point out that this quota 
 service thing is a bit of a unicorn at the moment, and it should not 
 distract from fixing and improving quota management and enforcement logic in 
 the various openstack projects.

 I won't be able to introduce hierarchical quotas in neutron by the end of 
 Kilo, but I'll keep it on the roadmap for Liberty.

 Salvatore


Given that hierarchical quotas make quota handling more complex (parent 
quotas have to stay consistent as well), this would seem a good candidate for an 
oslo library. During the Nova quota discussions, there was much consideration 
of how things would work, and it would be a great source of confusion if each 
project had its own approach/semantics.
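
To make the parent-consistency concern concrete, the shared logic would need to
do something like the following at every level of the project tree (a purely
illustrative sketch; none of these function names come from an actual project):

    def can_reserve(project, resource, delta,
                    get_parent, get_limit, get_usage):
        # Walk from the project up to the root: the reservation must fit
        # within the limit at every level, and in a real implementation the
        # usage rows for all levels would be updated in a single transaction.
        node = project
        while node is not None:
            if get_usage(node, resource) + delta > get_limit(node, resource):
                return False
            node = get_parent(node)
        return True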

A central quota service would then be a later decision which would have less 
impact if the code for quotas is shared.

Tim

 On 12 March 2015 at 11:59, Sean Dague s...@dague.net wrote:
 On 03/11/2015 08:31 PM, Joe Gordon wrote:
 
 
  On Wed, Mar 11, 2015 at 4:07 PM, Ihar Hrachyshka ihrac...@redhat.com
  mailto:ihrac...@redhat.com wrote:
 
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 03/11/2015 07:48 PM, Joe Gordon wrote:
   Out of sync Quotas --
  
   https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
  
   The quotas code is quite racey (this is kind of a known if you look
   at the bug tracker). It was actually marked as a top soft spot
   during last fall's bug triage -
   
  http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html
  
There is an operator proposed spec for an approach here -
   https://review.openstack.org/#/c/161782/
  
   Action: we should make a solution here a top priority for enhanced
   testing and fixing in Liberty. Addressing this would remove a lot
   of pain from ops.
  
  
   To help us better track quota bugs I created a quotas tag:
  
   https://bugs.launchpad.net/nova/+bugs?field.tag=quotas
  
   Next step is re-triage those bugs: mark fixed bugs as fixed,
   deduplicate bugs etc.
 
  (Being quite far from nova code, so ignore if not applicable)
 
  I would like to note that other services experience races in quota
  management too. Neutron has a spec approved to land in Kilo-3 that is
  designed to introduce a new quota enforcement mechanism that is
  expected to avoid (some of those) races:
 
  
  https://github.com/openstack/neutron-specs/blob/master/specs/kilo/better-quotas.rst
 
  I thought you may be interested in looking into it to apply similar
  ideas to nova.
 
 
  Working on a library for this hasn't been ruled out yet. But right now I
  am simply trying to figure out how to reproduce the issue, and nothing else.
 Right, I think assuming an architecture change will magically fix this
 without scenarios that expose the existing bugs seems overly optimistic.

 I think there is a short / medium term test / reproduce question here,
 then a longer term question about different architecture.

 -Sean

 --
 Sean Dague
 http://dague.net
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Doug Hellmann


On Thu, Mar 12, 2015, at 05:24 AM, Duncan Thomas wrote:
 ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
 /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
 DeprecationWarning: The oslo namespace package is deprecated. Please use
 oslo_i18n instead.
   from oslo import i18n
 /opt/stack/cinder/cinder/openstack/common/policy.py:98:
 DeprecationWarning:
 The oslo namespace package is deprecated. Please use oslo_config instead.
   from oslo.config import cfg
 /opt/stack/cinder/cinder/openstack/common/policy.py:99:
 DeprecationWarning:
 The oslo namespace package is deprecated. Please use oslo_serialization
 instead.
   from oslo.serialization import jsonutils
 /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
 namespace package is deprecated. Please use oslo_messaging instead.
   from oslo import messaging
 /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22:
 DeprecationWarning: The oslo namespace package is deprecated. Please use
 oslo_utils instead.
   from oslo.utils import excutils


Yay, the system is working as designed!

Oslo froze early to prepare releases to integrate with the downstream
projects. You found an issue and reported it. Dims and others worked on
patches, and we're releasing new versions. All before your feature
freeze, so you can adopt them.
 
 
 What are normal, non-developer users supposed to do with such warnings,
 other than:
 a) panic or b) Assume openstack is beta quality and then panic

Next time, please try to be less snide. It makes it difficult to take
you seriously.

Doug

 
 -- 
 Duncan Thomas
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] proper way to extend index() and add a new property?

2015-03-12 Thread Chris Friesen

On 03/12/2015 08:47 AM, Chris Friesen wrote:


I'm looking for some technical advice on API extensions.

I want to add a new field to the output of the nova service-list command,
which currently maps to ServiceController.index().

For the v2 API this seems straightforward, I can add a new extension and in the
existing function I can call self.ext_mgr.is_loaded() to see if my extension is
loaded, similar to the existing code that checks for 
'os-extended-services-delete'.

For v2.1 (ie the plugins in the v3 directory) the extension management seems to
be different though.  As far as I can tell I can only extend the controller and
create a new index() that takes as arguments the request and the output of the
existing index() function.


Never mind, I've since learned about microversions and the fact that the v2 API 
is frozen. I think I'm good.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Keystone] Failed to set up keystone v3 api for horizon

2015-03-12 Thread Ali, Haneef
Horizon needs to support domain-scoped tokens for this to work. I don’t think that 
support is there yet.

https://review.openstack.org/#/c/148082/39
https://review.openstack.org/#/c/141153/

Thanks
Haneef

From: Lei Zhang [mailto:zhang.lei@gmail.com]
Sent: Wednesday, March 11, 2015 7:33 PM
To: openstack; OpenStack Development Mailing List
Subject: [openstack-dev] [Horizon][Keystone] Failed to set up keystone v3 api 
for horizon

Has anyone tried this successfully?

On Mon, Mar 9, 2015 at 4:25 PM, Lei Zhang 
zhang.lei@gmail.commailto:zhang.lei@gmail.com wrote:
Hi guys,

I am setting up the keystone v3 API and have run into an issue with the `cloud_admin` 
policy.

Based on the http://www.florentflament.com/blog/setting-keystone-v3-domains.html 
article, I modified the cloud_admin policy to

```
"cloud_admin": "rule:admin_required and domain_id:ef0d30167f744401a0cbfcc938ea7d63",
```

But cloud_admin does not work as expected. I cannot open any of the identity 
panels (like http://<host>/horizon/identity/domains/).
Horizon tells me "Error: Unable to retrieve project list."
and keystone logs a warning:

```
2015-03-09 16:00:06.423 9415 DEBUG keystone.policy.backends.rules [-] enforce 
identity:list_user_projects: {'is_delegated_auth': False, 'access_token_id': 
None, 'user_id': u'6433222efd78459bb70ad9adbcfac418', 'roles': [u'_member_', 
u'admin'], 'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 
'token': KeystoneToken (audit_id=DWsSa6yYSWi0ht9E7q4uhw, 
audit_chain_id=w_zLBBeFQ82KevtJrdKIJw) at 0x7f4503fab3c8, 'project_id': 
u'4d170baaa89b4e46b239249eb5ec6b00', 'trust_id': None}, enforce 
/usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:100
2015-03-09 16:00:06.061 9410 WARNING keystone.common.wsgi [-] You are not 
authorized to perform the requested action: identity:list_projects (Disable 
debug mode to suppress these details.)
```

I did some debugging and found that the root cause is that the `context` 
variable in keystone has no `domain_id` field (as in the keystone log above), so 
the `cloud_admin` rule fails. If I change `cloud_admin` to the following, it 
works as expected.

```
"cloud_admin": "rule:admin_required and user_id:6433222efd78459bb70ad9adbcfac418",
```

I found that in the keystone code [0], domain_id only exists when the token is 
domain-scoped, but I believe that the horizon login token is project-scoped (I am 
not completely sure about this).

```
if token.project_scoped:
    auth_context['project_id'] = token.project_id
elif token.domain_scoped:
    auth_context['domain_id'] = token.domain_id
else:
    LOG.debug('RBAC: Proceeding without project or domain scope')

```

Is this a bug, or is something wrong with my configuration?
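
One quick way to check whether the missing domain scope is the culprit is to
request a domain-scoped token directly with python-keystoneclient and retry the
listing (a rough sketch; the endpoint and credentials are placeholders):

```
from keystoneclient import session
from keystoneclient.auth.identity import v3
from keystoneclient.v3 import client

# Domain scope: pass domain_name instead of project_name, so the resulting
# token (and policy context) should carry a domain_id.
auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin',
                   password='secret',
                   user_domain_name='Default',
                   domain_name='Default')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
print(keystone.projects.list())  # should now pass identity:list_projects
```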


Following is my configuration.


```
# /etc/keystone/keystone.conf
[DEFAULT]
debug=true
verbose=true
log_dir=/var/log/keystone
[assignment]
driver = keystone.assignment.backends.sql.Assignment
[database]
connection=mysql://:@controller/keystone
[identity]
driver=keystone.identity.backends.sql.Identity
[memcache]
servers=controller1:11211,controller2:11211,controller3:1121
[token]
provider=keystone.token.providers.uuid.Provider
```

```
# /etc/openstack-dashboard/local_settings.py ( partly )
POLICY_FILES_PATH = "/etc/openstack-dashboard/"
POLICY_FILES = {
    'identity': 'keystone_policy.json',
}
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
OPENSTACK_API_VERSIONS = {
    "data_processing": "1.1",
    "identity": 3,
    "volume": 2
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'admin'
```

​[0] 
https://github.com/openstack/keystone/blob/master/keystone/common/authorization.py#L58​

--
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l



--
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Experimental API

2015-03-12 Thread Sampath, Lakshmi
We had a discussion with the API WG today about what it means to be an 
EXPERIMENTAL API, and here's the takeaway from that discussion.

- APIs can be experimental, but mark them clearly in the docs as such
- Experimental means a breaking change may be introduced
- Use /x1/ instead of /v1/ in the endpoint.

Thoughts/Suggestions?


Thanks
Lakshmi.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-03-12 10:04:57 -0700:
 
 On Thu, Mar 12, 2015, at 12:47 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-03-11 05:59:10 -0700:
   =
Additional Interesting Bits
   =
   
   Rabbit
   --
   
   There was a whole session on Rabbit -
   https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
   
   Rabbit is a top operational concern for most large sites. Almost all
   sites have a restart everything that talks to rabbit script because
   during rabbit ha operations queues tend to blackhole.
   
   All other queue systems OpenStack supports are worse than Rabbit (from
   experience in that room).
   
   oslo.messaging  1.6.0 was a significant regression in dependability
   from the incubator code. It now seems to be getting better but still a
   lot of issues. (L112)
   
   Operators *really* want the concept in
   https://review.openstack.org/#/c/146047/ landed. (I asked them to
   provide such feedback in gerrit).
   
  
  This reminded me that there are other options that need investigation.
  
  A few of us have been looking at what it might take to use something
  in between RabbitMQ and ZeroMQ for RPC and notifications. Some initial
  forays into inspecting Gearman (which infra has successfully used for
  quite some time as the backend of Zuul) look promising. A few notes:
  
  * The Gearman protocol is crazy simple. There are currently 4 known
  gearman
server implementations: Perl, Java, C, and Python (written and
maintained by our own infra team). http://gearman.org/download/ for
the others, and https://pypi.python.org/pypi/gear for the python one.
  
  * Gearman has no pub/sub capability built in for 1:N comms. However, it
is fairly straight forward to write workers that will rebroadcast
messages to subscribers.
  
  * Gearman's security model is not very rich. Mostly, if you have been
authenticated to the gearman server (only the C server actually even
supports any type of authentication, via SSL client certs), you can
do whatever you want including consuming all the messages in a queue
or filling up a queue with nonsense. This has been raised as a concern
in the past and might warrant extra work to add support to the python
server and/or add ACL support.
  
  Part of our motivation for this is that some of us are going to be
  deploying a cloud soon and none of us are excited about deploying and
  supporting RabbitMQ. So we may be proposing specs to add Gearman as a
  deployment option soon.
 
 That sounds really intriguing, and I look forward to reading it and
 learning more about gearman.
 
 Be forewarned that oslo.messaging is pretty badly understaffed right
 now. Most of the original contributors have moved on, either to other
 parts of OpenStack or out of the community entirely. We can use more
 messaging experts to help with reviews and improvements. 
 

Noted, and subscribed in gertty. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [metadata] metadata service when NOT using name space

2015-03-12 Thread Wanjing Xu
Assaf
Thanks for replying.  I have been playing around with the metadata service to make sure 
our product is not breaking it.  If namespaces are really required for the 
metadata service, then we need to know about it and document it in our product.
Thanks and Regards! Wanjing

 Date: Wed, 11 Mar 2015 22:38:09 -0400
 From: amul...@redhat.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] [metadata] metadata service when NOT 
 using name space
 
 Can you explain why are you not using namespaces? (I'm really curious).
 
 I've been thinking of proposing to deprecate that option only for the simple
 truth that it's not tested and we have no idea if it works anymore, or if 
 anyone
 actually uses it.
 
 - Original Message -
  When use_namespaces is True, there will be a namespace metadata proxy
  launched for either the dhcp or router namespace; this proxy will accept metadata
  service requests and then proxy them to the metadata server via the metadata
  agent. But when use_namespaces is False, there is no namespace metadata
  proxy running, so how is the request from the vm going to get to the metadata server
  then? I also checked that there is no NAT rule in the iptables. So do we
  support the metadata service with no namespaces? If we do, how is it supposed to
  work? This is Juno.
  
  Regards!
  
  Wanjing Xu
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][rally] What's new in Rally v0.0.2

2015-03-12 Thread Boris Pavlovic
Hi stackers,

For those who don't know, the Rally team has started making releases.

There are 3 major reasons why we started doing releases:

 * A lot of people started using Rally in their CI/CD.

Usually they don't like to depend on something built from master,
and would like to have smooth, testable upgrades between versions.

 * Rally is used in gates of many projects.

As you know, in Rally everything is pluggable. These plugins can be
put in a project's tree. This is nice flexibility for all projects, but it
blocks a lot of Rally development. To resolve this issue we are going to allow
projects to specify which version of Rally to run in their trees. This resolves 2
issues:
   1) project gates won't depend on Rally master
   2) projects have a smooth, no-downtime, testable way to switch to a newer
   version of Rally

 * Release notes - as a simple way to track project changes.



Release stats:
+--------------+-----------------+
| Commits      | **100**         |
+--------------+-----------------+
| Bug fixes    | **18**          |
+--------------+-----------------+
| Dev cycle    | **45 days**     |
+--------------+-----------------+
| Release date | **12/Mar/2015** |
+--------------+-----------------+


Release notes:

https://rally.readthedocs.org/en/latest/release_notes/v0.0.2.html


Pypi:

https://pypi.python.org/pypi/rally/0.0.2


Future goals:

Our goal is to cut releases every 2 weeks.  Since the project is quite
stable and nearly bug-free, we don't need a feature freeze at all, so I don't think
it will be hard to achieve this goal.


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Doug Hellmann


On Thu, Mar 12, 2015, at 12:47 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-03-11 05:59:10 -0700:
  =
   Additional Interesting Bits
  =
  
  Rabbit
  --
  
  There was a whole session on Rabbit -
  https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
  
  Rabbit is a top operational concern for most large sites. Almost all
  sites have a restart everything that talks to rabbit script because
  during rabbit ha operations queues tend to blackhole.
  
  All other queue systems OpenStack supports are worse than Rabbit (from
  experience in that room).
  
  oslo.messaging  1.6.0 was a significant regression in dependability
  from the incubator code. It now seems to be getting better but still a
  lot of issues. (L112)
  
  Operators *really* want the concept in
  https://review.openstack.org/#/c/146047/ landed. (I asked them to
  provide such feedback in gerrit).
  
 
 This reminded me that there are other options that need investigation.
 
 A few of us have been looking at what it might take to use something
 in between RabbitMQ and ZeroMQ for RPC and notifications. Some initial
 forays into inspecting Gearman (which infra has successfully used for
 quite some time as the backend of Zuul) look promising. A few notes:
 
 * The Gearman protocol is crazy simple. There are currently 4 known
 gearman
   server implementations: Perl, Java, C, and Python (written and
   maintained by our own infra team). http://gearman.org/download/ for
   the others, and https://pypi.python.org/pypi/gear for the python one.
 
 * Gearman has no pub/sub capability built in for 1:N comms. However, it
   is fairly straight forward to write workers that will rebroadcast
   messages to subscribers.
 
 * Gearman's security model is not very rich. Mostly, if you have been
   authenticated to the gearman server (only the C server actually even
   supports any type of authentication, via SSL client certs), you can
   do whatever you want including consuming all the messages in a queue
   or filling up a queue with nonsense. This has been raised as a concern
   in the past and might warrant extra work to add support to the python
   server and/or add ACL support.
 
 Part of our motivation for this is that some of us are going to be
 deploying a cloud soon and none of us are excited about deploying and
 supporting RabbitMQ. So we may be proposing specs to add Gearman as a
 deployment option soon.

That sounds really intriguing, and I look forward to reading it and
learning more about gearman.

Be forewarned that oslo.messaging is pretty badly understaffed right
now. Most of the original contributors have moved on, either to other
parts of OpenStack or out of the community entirely. We can use more
messaging experts to help with reviews and improvements. 

Doug

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Joshua Harlow

Also note that the following is supposed to be true (or should be?):

From: https://docs.python.org/2/library/warnings.html#warning-categories

 DeprecationWarning: Base category for warnings about deprecated
 features (ignored by default).
 Changed in version 2.7: DeprecationWarning is ignored by default.

So by default these should be off/ignored:

Now projects can change that as they wish (and maybe we should have it 
set to 'once' during development, and 'ignore' at release?); the available 
filter actions are described at:


https://docs.python.org/2/library/warnings.html#the-warnings-filter

I believe I put the 'once' into cinder code:

https://github.com/openstack/cinder/blob/master/cinder/cmd/all.py#L34

But feel free to change it (this seems like it should be something 
consistent across projects)...
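
For reference, the knob itself is just the standard library warnings filter;
one possible project-wide policy would be something like this (a sketch only,
and the DEVELOPMENT flag is hypothetical):

    import warnings

    DEVELOPMENT = False  # hypothetical flag; wire it to whatever the project uses

    if DEVELOPMENT:
        # print each distinct DeprecationWarning message once
        warnings.simplefilter('once', DeprecationWarning)
    else:
        # hide them from end users in released builds
        warnings.simplefilter('ignore', DeprecationWarning)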


-Josh

John Griffith wrote:



On Thu, Mar 12, 2015 at 10:48 AM, Doug Hellmann d...@doughellmann.com
mailto:d...@doughellmann.com wrote:



On Thu, Mar 12, 2015, at 05:24 AM, Duncan Thomas wrote:
  ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
  /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
  DeprecationWarning: The oslo namespace package is deprecated.
Please use
  oslo_i18n instead.
from oslo import i18n
  /opt/stack/cinder/cinder/openstack/common/policy.py:98:
  DeprecationWarning:
  The oslo namespace package is deprecated. Please use oslo_config
instead.
from oslo.config import cfg
  /opt/stack/cinder/cinder/openstack/common/policy.py:99:
  DeprecationWarning:
  The oslo namespace package is deprecated. Please use
oslo_serialization
  instead.
from oslo.serialization import jsonutils
  /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning:
The oslo
  namespace package is deprecated. Please use oslo_messaging instead.
from oslo import messaging


/usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22:
  DeprecationWarning: The oslo namespace package is deprecated.
Please use
  oslo_utils instead.
from oslo.utils import excutils


Yay, the system is working as designed!

Oslo froze early to prepare releases to integrate with the downstream
projects. You found an issue and reported it. Dims and others worked on
patches, and we're releasing new versions. All before your feature
freeze, so you can adopt them.


  What are normal, non-developer users supposed to do with such
warnings,
  other than:
  a) panic or b) Assume openstack is beta quality and then panic

Next time, please try to be less snide. It makes it difficult to take
you seriously.

Doug


  --
  Duncan Thomas
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Very cool!!  I think there may have been some misunderstanding here on
how this would all shake out, but yes as Doug and Dims pointed out this
worked great.  Thanks everyone!!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-12 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-03-12 09:59:35 -0700:
 On 03/12/2015 12:47 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-03-11 05:59:10 -0700:
  =
   Additional Interesting Bits
  =
 
  Rabbit
  --
 
  There was a whole session on Rabbit -
  https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
 
  Rabbit is a top operational concern for most large sites. Almost all
  sites have a restart everything that talks to rabbit script because
  during rabbit ha operations queues tend to blackhole.
 
  All other queue systems OpenStack supports are worse than Rabbit (from
  experience in that room).
 
  oslo.messaging  1.6.0 was a significant regression in dependability
  from the incubator code. It now seems to be getting better but still a
  lot of issues. (L112)
 
  Operators *really* want the concept in
  https://review.openstack.org/#/c/146047/ landed. (I asked them to
  provide such feedback in gerrit).
 
  
  This reminded me that there are other options that need investigation.
  
  A few of us have been looking at what it might take to use something
  in between RabbitMQ and ZeroMQ for RPC and notifications. Some initial
  forays into inspecting Gearman (which infra has successfully used for
  quite some time as the backend of Zuul) look promising. A few notes:
  
  * The Gearman protocol is crazy simple. There are currently 4 known gearman
server implementations: Perl, Java, C, and Python (written and
maintained by our own infra team). http://gearman.org/download/ for
the others, and https://pypi.python.org/pypi/gear for the python one.
  
  * Gearman has no pub/sub capability built in for 1:N comms. However, it
is fairly straight forward to write workers that will rebroadcast
messages to subscribers.
  
  * Gearman's security model is not very rich. Mostly, if you have been
authenticated to the gearman server (only the C server actually even
supports any type of authentication, via SSL client certs), you can
do whatever you want including consuming all the messages in a queue
or filling up a queue with nonsense. This has been raised as a concern
in the past and might warrant extra work to add support to the python
server and/or add ACL support.
  
  Part of our motivation for this is that some of us are going to be
  deploying a cloud soon and none of us are excited about deploying and
  supporting RabbitMQ. So we may be proposing specs to add Gearman as a
  deployment option soon.
 
 I think experimentation of other models is good. There was some
 conversation that maybe Kafka was a better model as well. However,
 realize that services are quite chatty at this point and push pretty
 large payloads through that bus. The HA story is also quite important,
 because the underlying message architecture assumes reliable delivery
 for some of the messages, and if they fall on the floor, you'll get
 either leaked resources, or broken resources. It's actually the HA
 recovery piece of Rabbit (and when it doesn't HA recover correctly)
 that's seemingly the sharp edge most people are hitting.

Kafka is definitely another one I'd like to keep an eye on, but have
zero experience using.

Chatty is good for gearman, I don't see a problem with that. It's
particularly good at the sort of send, wait for response, act model
that I see used often in RPC. Large payloads can be a little expensive
as Gearman will keep them all in memory, but I wonder what you mean by
large. 1MB/message is meh if the rate of sending is 50/s.

Reliable delivery is handled several ways:

* Synchronous senders that can hang around and wait for a reply will
  work well as gearman will simply retry those messages if receivers
  have problems. If the gearmand itself dies in this case, client
  libraries should re-send to the next one in the list of servers.

* Async messages can be stored and forwarded. This scales out really
  nicely, but does complicate things in similar ways to ZeroMQ by
  needing a store-and-forward worker on each box.

* Enable persistence in the C server. This one is really darn slow IMO,
  and gives back a lot of Gearman's advantage at being mostly in-memory
  and scaling out. It works a lot like RabbitMQ's shovel HA method where
  you recover from a down node by loading all of the jobs into another
  node's memory. There are multiple options for the store, including
  sqlite, redis, tokyocabinet, and good old fashioned MySQL. My personal
  experience was with tokyocabinet (which I co-wrote the driver for)
  on top of DRBD. We hit delivery rates in the thousands / second with
  small messages, and that was with 2006 CPU's and hard disks. I imagine
  modern hardware with battery backed write cache and  SSD's can deliver
  quite a bit more.
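
The first of those patterns looks roughly like this with the gear client
(a sketch only; the function name and payload are made up):

    import time
    import gear

    client = gear.Client('rpc-caller')
    client.addServer('127.0.0.1')     # the client library fails over across servers
    client.waitForServer()

    job = gear.Job(b'compute.ping', b'{"hello": "world"}')
    client.submitJob(job)
    while not job.complete:
        time.sleep(0.1)               # a real caller would also enforce a timeout

    if job.failure:
        # the worker failed or disconnected after the handoff; the caller
        # would typically just build a fresh Job and re-submit it
        pass
    else:
        reply = b''.join(job.data)    # work data chunks returned by the worker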

 
 So... experimentation is good, but also important to realize how much is
 provided for by the infrastructure that's there.
 

Indeed, I suspect 

Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-12 Thread Jim Rollenhagen
On Thu, Mar 12, 2015 at 09:07:30AM -0500, Flavio Percoco wrote:
 On 11/03/15 15:06 -1000, John Bresnahan wrote:
 FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy enough
 thing to get right and speaks to the quality standard to which the product
 is held even in commit messages and comments (consider the 'broken window
 theory').  Of course everyone makes mistakes (I am a terrible speller) but
 correcting a spelling error should be a trivial matter.  If a reviewer
 notices a spelling error I would expect them to point it.
 
 I'd agree depending on the status of the patch. If the patch has
 already 2 +2s and someone blocks it because of a spelling error, then
 the cost of fixing it, running the CI jobs and getting the reviews
 again is higher than living with a simple typo.
 
 Process and rules are good, but we must evaluate them on a case-by-case
 basis to make sure we're not blocking important work on things that
 are not that relevant after all.
 

In Ironic, we've set up rough guidelines for situations like these.

1) Reviewers should not -1 a patch solely for spelling/grammar errors.
2) If a reviewer finds one of these errors and feels strongly that it
   should be corrected, they should do one of the following:
   2a) If it won't slow down the patch (i.e. no +2's on it), fix it
   themselves and submit a new patchset. This is even easier for
   commit messages; they can be edited directly in Gerrit.
   2b) Make a note on the review and push a follow-up patch to fix it.
   2c) Ask the submitter for a follow-up patch that fixes it.

This has dramatically reduced our nitpicking on patches and has even
seemed to improve our general velocity.

// jim

 
 On 3/11/15 2:22 PM, Kuvaja, Erno wrote:
 Hi all,
 
 Following the code reviews lately I’ve noticed that we (the fan club
 seems to be growing on weekly basis) have been growing culture of
 nitpicking [1] and bikeshedding [2][3] over almost every single change.
 
 Seriously my dear friends, following things are not worth of “-1” vote
 if even a comment:
 
 1)Minor spelling errors on commit messages (as long as the message comes
 through and flags are not misspelled).
 
 2)Minor spelling errors on comments (docstrings and documentation is
 there and there, but comments, come-on).
 
 3)Used syntax that is functional, readable and does not break
 consistency but does not please your poem bowel.
 
 4)Other things you “just did not realize to check if they were there”.
 After you have gone through the whole change go and look your comments
 again and think twice if your concern/question/whatsoever was addressed
 somewhere else than where your first intuition would have dropped it.
 
 We have relatively high volume for glance at the moment and this
 nitpicking and bikeshedding does not help anyone. At best it just
 tightens nerves and breaks our group. Obviously if there is “you had ONE
 job” kind of situations or there is relatively high amount of errors
 combined with something serious it’s reasonable to ask fix the typos on
 the way as well. The reason being need to increase your statistics,
 personal perfectionist nature or actually I do not care what; just stop
 or go and do it somewhere else.
 
 
 Thanks for bringing all this up, Erno. I've been seeing the same
 pattern for all the points you've mentioned above. It's a good
 reminder for people to treat each patch individually so we avoid
 making our process and rules a pain for everyone.
 
 Flavio
 
 
 Love and pink ponies,
 
 -Erno
 
 [1] www.urbandictionary.com/define.php?term=nitpicking
 http://www.urbandictionary.com/define.php?term=nitpicking
 
 [2] http://bikeshed.com
 
 [3] http://en.wiktionary.org/wiki/bikeshedding
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -- 
 @flaper87
 Flavio Percoco



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] is it possible to microversion a static class method?

2015-03-12 Thread Chris Friesen

Hi,

I'm having an issue with microversions.

The api_version() code has a comment saying This decorator MUST appear first 
(the outermost decorator) on an API method for it to work correctly


I tried making a microversioned static class method like this:

@wsgi.Controller.api_version("2.4")  # noqa
@staticmethod
def _my_func(req, foo):

and pycharm highlighted the api_version decorator and complained that This 
decorator will not receive a callable it may expect; the built-in decorator 
returns a special object.


Is this a spurious warning from pycharm?  The pep8 checks don't complain.

If I don't make it static, then pycharm suggests that the method could be 
static.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is it possible to microversion a static class method?

2015-03-12 Thread Sean Dague
On 03/12/2015 02:03 PM, Chris Friesen wrote:
 Hi,
 
 I'm having an issue with microversions.
 
 The api_version() code has a comment saying This decorator MUST appear
 first (the outermost decorator) on an API method for it to work correctly
 
 I tried making a microversioned static class method like this:
 
 @wsgi.Controller.api_version("2.4")  # noqa
 @staticmethod
 def _my_func(req, foo):
 
 and pycharm highlighted the api_version decorator and complained that
 This decorator will not receive a callable it may expect; the built-in
 decorator returns a special object.
 
 Is this a spurious warning from pycharm?  The pep8 checks don't complain.
 
 If I don't make it static, then pycharm suggests that the method could
 be static.

*API method*

This is not intended for use by methods below the top controller level.
If you want conditionals lower down in your call stack pull the request
version out yourself and use that.
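
i.e., roughly the following inside the helper (a sketch; treat the exact import
path and the matches() signature as approximate):

    from nova.api.openstack import api_version_request

    def _my_func(req, foo):
        min_ver = api_version_request.APIVersionRequest('2.4')
        if req.api_version_request.matches(min_ver, None):
            # 2.4+ behaviour goes here
            pass
        else:
            # pre-2.4 behaviour goes here
            pass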

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Reviews.

2015-03-12 Thread Nikhil Komawar
Hi all (and specifically Glance core reviewers),


We are approaching Feature Freeze and there are reviews pending on multiple 
features [1]. I think, as Good Samaritans and members of the Glance community, we should 
focus our reviews on the code that needs our attention in due time. So, 
let's try to avoid reviewing anything that can wait until after the freeze and 
focus our energy on the reviews related to the features still in Needs Code 
Review status.


[1] https://launchpad.net/glance/+milestone/kilo-3


Thanks,
-Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Experimental API

2015-03-12 Thread Alexander Tivelkov
Thanks Lakshmi, that's useful.

So, we want to release the Artifacts API in Kilo as experimental. We do need
some early adopters to begin working with it (the initial interest was from
the Heat and Murano projects, and the OVA/OVF initiative for Images as well) in
the next cycle and provide some feedback on the API and its usefulness, so
we may take this feedback into account before releasing the stable version
of the API in L.
I've talked to Murano folks, they are ok with that plan, some feedback from
Heat and OVA teams would be great as well.

Anyways, we will not break the compatibility without serious reasons for
that, and we will collaborate with any early adopters about any such
breaking changes.

--
Regards,
Alexander Tivelkov

On Thu, Mar 12, 2015 at 8:19 PM, Sampath, Lakshmi lakshmi.samp...@hp.com
wrote:

 We had a discussion with API WG today about what it means to be an
 EXPERIMENTAL API and here's the takeway from that discussion.

 - API's can be experimental, but mark it clearly in the docs as such
 - Experimental means a breaking change may be introduced
 - Use /x1/ instead of /v1/  in the endpoint.

 Thoughts/Suggestions?


 Thanks
 Lakshmi.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-12 Thread Chris Friesen


So I've been playing with microversions and noticed that the information about
req.ver_obj.matches(start_version, end_version) in the spec doesn't actually 
match what's in the codebase.


1) The code has req.api_version_request instead of req.ver_obj

2) The spec says that end_version is optional, in which case it will match any
version greater than or equal to start_version.  The code requires both but 
allows either to be None.


If we're going to refer people to the spec, do we maybe want to update the spec 
to match the code?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Do we need release announcements for all the things?

2015-03-12 Thread Boris Pavlovic
Clint,

Personally I am quite interested in new releases of the python clients and oslo
stuff, but I believe
that oslo release announcements should be merged into a single email with a rate
limit of one email per week.
It will be much simpler, at least for me, to keep track of things that way.


Best regards,
Boris Pavlovic

On Thu, Mar 12, 2015 at 11:22 PM, Clint Byrum cl...@fewbar.com wrote:

 I spend a not-insignificant amount of time deciding which threads to
 read and which to fully ignore each day, so extra threads mean extra
 work, even with a streamlined workflow of single-key-press-per-thread.

 So I'm wondering what people are getting from these announcements being
 on the discussion list. I feel like they'd be better off in a weekly
 digest, on a web page somewhere, or perhaps with a tag that could be
 filtered out for those that don't benefit from them.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Do we need release announcements for all the things?

2015-03-12 Thread Gregory Haynes
Excerpts from Clint Byrum's message of 2015-03-12 20:22:04 +:
 I spend a not-insignificant amount of time deciding which threads to
 read and which to fully ignore each day, so extra threads mean extra
 work, even with a streamlined workflow of single-key-press-per-thread.
 
 So I'm wondering what people are getting from these announcements being
 on the discussion list. I feel like they'd be better off in a weekly
 digest, on a web page somewhere, or perhaps with a tag that could be
 filtered out for those that don't benefit from them.
 

++

Or maybe even just send them to the already existing openstack-announce
list?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Robert Collins
On 13 March 2015 at 08:09, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 03/12/2015 11:38 AM, Boris Bobrov wrote:
 On Thursday 12 March 2015 12:59:10 Duncan Thomas wrote:
 So, assuming that all of the oslo depreciations aren't going to
 be fixed before release

 What makes you think that?

 In my opinion it's just one component's problem. These particular
 deprecation warnings are a result of still on-going migration from
 oslo.package to oslo_package. Ironically, all components except
 oslo have already moved to the new naming scheme.

 It's actually wrong. For example, Trove decided to stay on using the
 old namespace for Kilo.

Why?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-12 Thread Jay Pipes

On 03/12/2015 05:09 PM, John Dickinson wrote:

On Mar 12, 2015, at 12:48 PM, Jay Pipes jaypi...@gmail.com wrote:

On 03/12/2015 03:08 PM, John Dickinson wrote:

I'd like a little more info here.

Is Horizon relying on the X-Timestamp header for reads (GET/HEAD)? If so, I 
think that's somewhat odd, but not hugely problematic. Swift has been returning 
an X-Timestamp header since patch b20264c9d3196 (which landed 3 years ago -- 
April 2012).


OK, so there is a documentation bug here that X-Timestamp should be part of the 
Swift REST API. It currently is not documented that X-Timestamp is a 
non-optional HTTP Header, and therefore the RadosGW folks did not send 
X-Timestamp headers back in the container response.


The X-Timestamp header is certainly part of the Swift API. It is required for 
container-sync functionality (implemented in early 2011) so that two clusters 
can communicate about the proper timestamp of the objects.


OK. Sounds like an implementation detail leaking out of the API to me. In other 
words, RadosGW (which is attempting to expose a Swift API in front of Ceph 
backend storage) needs to expose this X-Timestamp header even if it implements 
container-sync using an entirely different mechanism...


I'm not sure if this actually matters for Horizon in this specific case. But 
it's certainly true that Swift requires and used the X-Timestamp header for 
implementing core functionality. Anyone talking to a Swift endpoint can assume 
that there is an X-Timestamp header in the response and use it as they see fit.


Anyone talking to an upstream Swift *implementation* can assume that header 
will be there :) But, the header is not actually documented in the Swift *API* 
and therefore one cannot make this assumption.

Thus the confusion. :)



I don't particularly agree with the characterization of the API and implementation as 
separate, but that's a discussion that's as old as openstack itself. (and we 
don't need to belabor it here.)

:-)


:) Understood.


But yes, in my opinion, x-timestamp needs to be added to docs.


K, thx for the verification. I've added a doc bug:

https://bugs.launchpad.net/openstack-manuals/+bug/1431568


Anyway, sounds like X-Timestamp should be documented as part of the official 
Swift API. What about the X-Object-Meta-Mtime header in the related bug? That, 
too, is problematic for a similar reason. Is that header part of the Swift API 
as well?



Anything prefixed by X-Object-Meta- is user metadata, ie completely arbitrary 
and set by an end user (same with x-container-meta-* and x-account-meta-*). From the 
context of Swift, I have zero semantic understanding of any key or value in that 
namespace.


Excellent, thanks for the clarification.

All the best,
-jay


--John



Best,
-jay



--John





On Mar 9, 2015, at 12:53 PM, Anne Gentle annegen...@justwriteclick.com wrote:



On Mon, Mar 9, 2015 at 2:32 PM, Matthew Farina m...@mattfarina.com wrote:
David,

FYI, the last time I chatted with John Dickinson I learned there are numerous 
API elements not documented. Not meant to be private but the docs have not kept 
up. How should we handle that?


I've read through this thread and the bug comments and searched through the 
docs and I'd like more specifics: which docs have not kept up? Private API docs 
for swift internal workings? Or is this a header that could be in _some_ swift 
(not ceph) deployments?

Thanks,
Anne


Thanks,
Matt Farina

On Sat, Mar 7, 2015 at 5:25 PM, David Lyle dkly...@gmail.com wrote:
I agree that Horizon should not be requiring optional headers. Changing status 
of bug.

On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes jaypi...@gmail.com wrote:
Added [swift] to topic.

On 03/03/2015 07:41 AM, Matthew Farina wrote:
Radoslaw,

Unfortunately the documentation for OpenStack has some holes. What you
are calling a private API may be something missed in the documentation.
Is there a documentation bug on the issue? If not one should be created.

There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP headers 
are part of the public Swift API:

http://developer.openstack.org/api-ref-objectstorage-v1.html

I don't believe this is a bug in the Swift API documentation, either. John 
Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is required for the 
Swift implementation of container replication (John, please do correct me if 
wrong on that).

But that is the private implementation and not part of the public API.

In practice OpenStack isn't a specification and implementation. The
documentation has enough missing information you can't treat it this
way. If you want to contribute to improving the documentation I'm sure
the documentation team would appreciate it. The last time I looked there
were a number of undocumented public swift API details.

The bug here is not in the documentation. The bug is that Horizon is coded to rely on HTTP 
headers that are not in the Swift API. Horizon should be fixed to use 

Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-12 Thread John Dickinson

 On Mar 12, 2015, at 12:48 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 On 03/12/2015 03:08 PM, John Dickinson wrote:
 I'd like a little more info here.
 
 Is Horizon relying on the X-Timestamp header for reads (GET/HEAD)? If so, I 
 think that's somewhat odd, but not hugely problematic. Swift has been 
 returning an X-Timestamp header since patch b20264c9d3196 (which landed 3 
 years ago -- April 2012).
 
 OK, so there is a documentation bug here that X-Timestamp should be part of 
 the Swift REST API. It currently is not documented that X-Timestamp is a 
 non-optional HTTP Header, and therefore the RadosGW folks did not send 
 X-Timestamp headers back in the container response.
 
 The X-Timestamp header is certainly part of the Swift API. It is required 
 for container-sync functionality (implemented in early 2011) so that two 
 clusters can communicate about the proper timestamp of the objects.
 
 OK. Sounds like an implementation detail leaking out of the API to me. In 
 other words, RadosGW (which is attempting to expose a Swift API in front of 
 Ceph backend storage) needs to expose this X-Timestamp header even if it 
 implements container-sync using an entirely different mechanism...
 
 I'm not sure if this actually matters for Horizon in this specific case. But 
 it's certainly true that Swift requires and uses the X-Timestamp header for 
 implementing core functionality. Anyone talking to a Swift endpoint can 
 assume that there is an X-Timestamp header in the response and use it as 
 they see fit.
 
 Anyone talking to an upstream Swift *implementation* can assume that header 
 will be there :) But, the header is not actually documented in the Swift 
 *API* and therefore one cannot make this assumption.
 
 Thus the confusion. :)


I don't particularly agree with the characterization of the API and 
implementation as separate, but that's a discussion that's as old as 
openstack itself. (and we don't need to belabor it here.)

:-)


But yes, in my opinion, x-timestamp needs to be added to docs.


 
 Anyway, sounds like X-Timestamp should be documented as part of the official 
 Swift API. What about the X-Object-Meta-Mtime header in the related bug? 
 That, too, is problematic for a similar reason. Is that header part of the 
 Swift API as well?


Anything prefixed by X-Object-Meta- is user metadata, i.e. completely arbitrary 
and set by an end user (same with x-container-meta-* and x-account-meta-*). 
From the context of Swift, I have zero semantic understanding of any key or 
value in that namespace.


--John
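
As a practical aside, here is a rough sketch of what that means for a client
consuming the API. The python-swiftclient calls are real; the endpoint,
credentials and fallback choice are purely illustrative: treat X-Timestamp as
optional, since implementations such as RadosGW may omit it, and treat
X-Object-Meta-* values as present only if some client previously set them.

    from swiftclient.client import Connection

    # Illustrative endpoint and credentials only.
    conn = Connection(authurl='http://swift.example.com/auth/v1.0',
                      user='account:user', key='secret')

    # head_object() returns the response headers as a dict
    # (python-swiftclient lower-cases the header names).
    headers = conn.head_object('mycontainer', 'myobject')

    # X-Timestamp is set by Swift itself and is not in the documented API,
    # so treat it as optional rather than guaranteed.
    timestamp = headers.get('x-timestamp')
    if timestamp is None:
        # Last-Modified is part of the documented API, so it is a safe fallback.
        timestamp = headers.get('last-modified')

    # X-Object-Meta-* is arbitrary user metadata: it is only there if a client
    # previously set it, e.g. via put_object(..., headers={'X-Object-Meta-Mtime': ...}).
    mtime = headers.get('x-object-meta-mtime')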

 
 Best,
 -jay
 
 
 --John
 
 
 
 
 On Mar 9, 2015, at 12:53 PM, Anne Gentle annegen...@justwriteclick.com 
 wrote:
 
 
 
 On Mon, Mar 9, 2015 at 2:32 PM, Matthew Farina m...@mattfarina.com wrote:
 David,
 
 FYI, the last time I chatted with John Dickinson I learned there are 
 numerous API elements not documented. Not meant to be private but the docs 
 have not kept up. How should we handle that?
 
 
 I've read through this thread and the bug comments and searched through the 
 docs and I'd like more specifics: which docs have not kept up? Private API 
 docs for swift internal workings? Or is this a header that could be in 
 _some_ swift (not ceph) deployments?
 
 Thanks,
 Anne
 
 
 Thanks,
 Matt Farina
 
 On Sat, Mar 7, 2015 at 5:25 PM, David Lyle dkly...@gmail.com wrote:
 I agree that Horizon should not be requiring optional headers. Changing 
 status of bug.
 
 On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes jaypi...@gmail.com wrote:
 Added [swift] to topic.
 
 On 03/03/2015 07:41 AM, Matthew Farina wrote:
 Radoslaw,
 
 Unfortunately the documentation for OpenStack has some holes. What you
 are calling a private API may be something missed in the documentation.
 Is there a documentation bug on the issue? If not, one should be created.
 
 There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP 
 headers are part of the public Swift API:
 
 http://developer.openstack.org/api-ref-objectstorage-v1.html
 
 I don't believe this is a bug in the Swift API documentation, either. John 
 Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is required for 
 the Swift implementation of container replication (John, please do correct 
 me if wrong on that).
 
 But that is the private implementation and not part of the public API.
 
 In practice OpenStack isn't a specification and implementation. The
 documentation has enough missing information you can't treat it this
 way. If you want to contribute to improving the documentation I'm sure
 the documentation team would appreciate it. The last time I looked there
 were a number of undocumented public swift API details.
 
 The bug here is not in the documentation. The bug is that Horizon is coded 
 to rely on HTTP headers that are not in the Swift API. Horizon should be 
 fixed to use DICT.get('X-Timestamp') instead of doing 
 DICT['X-Timestamp'] in its view pages for container details. There are 
 already patches up that the Horizon developers have, IMO erroneously, rejected (stating this is a problem in Ceph RadosGW for not properly following the Swift API).

Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Ihar Hrachyshka

On 03/12/2015 09:35 PM, Robert Collins wrote:
 On 13 March 2015 at 08:09, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 On 03/12/2015 11:38 AM, Boris Bobrov wrote:
 On Thursday 12 March 2015 12:59:10 Duncan Thomas wrote:
 So, assuming that all of the oslo deprecations aren't going
 to be fixed before release
 
 What makes you think that?
 
 In my opinion it's just one component's problem. These
 particular deprecation warnings are a result of the still-ongoing
 migration from oslo.package to oslo_package. Ironically,
 all components except oslo have already moved to the new naming
 scheme.
 
 It's actually wrong. For example, Trove decided to keep using
 the old namespace for Kilo.
 
 Why?
 
 -Rob
 

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2015-02-11.log

starting from 2015-02-11T18:03:11. I guess the assumption was that
there is no immediate benefit, and they can just wait. Though I don't
think the fact that it means deprecation warnings in their logs was
appreciated at the time of the decision.

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Do we need release announcements for all the things?

2015-03-12 Thread Jeremy Stanley
On 2015-03-12 13:22:04 -0700 (-0700), Clint Byrum wrote:
[...]
 So I'm wondering what people are getting from these announcements
 being on the discussion list.
[...]

The main thing I get from them is that they're being recorded to a
(theoretically) immutable archive indexed by a lot of other systems.
Some day I'd love for them to include checksums of the release
artifacts and be OpenPGP-signed by a release delegate for whatever
project is releasing, and for those people to also try to get their
keys signed by one another and members of the community at large.

Sure, we could divert them to a different list (openstack-announce
was suggested in another reply), but I suspect that most people
subscribed to -dev are also subscribed to -announce and so it
wouldn't effectively decrease their E-mail volume. On the other
hand, a lot more people should be subscribed to -announce so that's
probably a good idea anyway?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is it possible to microversion a static class method?

2015-03-12 Thread Chen CH Ji
I know it's a little bit tricky, but looking at
doc/source/devref/api_microversions.rst (the example quoted below),
can we make _version_specific_func a static function even though it's not
required, or should we explicitly suggest not doing so...


    @api_version("2.1", "2.4")
    def _version_specific_func(self, req, arg1):
        pass

    @api_version(min_version="2.5")  # noqa
    def _version_specific_func(self, req, arg1):
        pass

    def show(self, req, id):
        ... common stuff ...
        self._version_specific_func(req, "foo")
        ... common stuff ...

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Sean Dague s...@dague.net
To: openstack-dev@lists.openstack.org
Date:   03/12/2015 07:16 PM
Subject:Re: [openstack-dev] [nova] is it possible to microversion a
static class method?



On 03/12/2015 02:03 PM, Chris Friesen wrote:
 Hi,

 I'm having an issue with microversions.

 The api_version() code has a comment saying "This decorator MUST appear
 first (the outermost decorator) on an API method for it to work
 correctly".

 I tried making a microversioned static class method like this:

 @wsgi.Controller.api_version("2.4")  # noqa
 @staticmethod
 def _my_func(req, foo):

 and pycharm highlighted the api_version decorator and complained that
 "This decorator will not receive a callable it may expect; the built-in
 decorator returns a special object."

 Is this a spurious warning from pycharm?  The pep8 checks don't complain.

 If I don't make it static, then pycharm suggests that the method could
 be static.

*API method*

This is not intended for use by methods below the top controller level.
If you want conditionals lower down in your call stack, pull the request
version out yourself and use that.

 -Sean

--
Sean Dague
http://dague.net
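
For what it's worth, here is a rough sketch of what "pull the request version
out yourself" can look like for a helper below the controller level. The
module, class and attribute names follow nova's api_version_request machinery
as I remember it, so treat them as illustrative rather than authoritative:

    from nova.api.openstack import api_version_request as avr
    from nova.api.openstack import wsgi


    class MyController(wsgi.Controller):

        def show(self, req, id):
            # ... common stuff ...
            result = self._version_specific_func(req, 'foo')
            # ... common stuff ...
            return result

        @staticmethod
        def _version_specific_func(req, arg1):
            # req.api_version_request is populated by the microversion
            # machinery; compare against it here instead of stacking
            # api_version() decorators on a non-controller helper.
            if req.api_version_request >= avr.APIVersionRequest('2.5'):
                return 'new behaviour'
            return 'pre-2.5 behaviour'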

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-12 Thread Ian Cordasco
On 3/12/15, 09:26, Daniel P. Berrange berra...@redhat.com wrote:

On Thu, Mar 12, 2015 at 09:07:30AM -0500, Flavio Percoco wrote:
 On 11/03/15 15:06 -1000, John Bresnahan wrote:
 FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy enough
 thing to get right and speaks to the quality standard to which the product
 is held even in commit messages and comments (consider the 'broken window
 theory').  Of course everyone makes mistakes (I am a terrible speller) but
 correcting a spelling error should be a trivial matter.  If a reviewer
 notices a spelling error I would expect them to point it out.
 
 I'd agree, depending on the status of the patch. If the patch already
 has 2 +2s and someone blocks it because of a spelling error, then
 the cost of fixing it, running the CI jobs and getting the reviews
 again is higher than living with a simple typo.

Also remember that submitting patches which fix typos is a great way
for new contributors to gain ATC status and thus qualify for a free design
summit pass, so it is good to leave plenty of typos around :-P

So we should file more bugs with low-hanging fruit for these? I kind of
like that idea. Those bugs can be immediately triaged unlike some of
Glance’s other bugs.

Cheers,

Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-12 Thread Nikhil Komawar
+2A :P (Daniel and Ian)

Thanks,
-Nikhil


From: Ian Cordasco ian.corda...@rackspace.com
Sent: Thursday, March 12, 2015 10:59 AM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Glance] Nitpicking in code reviews

On 3/12/15, 09:26, Daniel P. Berrange berra...@redhat.com wrote:

On Thu, Mar 12, 2015 at 09:07:30AM -0500, Flavio Percoco wrote:
 On 11/03/15 15:06 -1000, John Bresnahan wrote:
 FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy enough
 thing to get right and speaks to the quality standard to which the product
 is held even in commit messages and comments (consider the 'broken window
 theory').  Of course everyone makes mistakes (I am a terrible speller) but
 correcting a spelling error should be a trivial matter.  If a reviewer
 notices a spelling error I would expect them to point it out.

 I'd agree, depending on the status of the patch. If the patch already
 has 2 +2s and someone blocks it because of a spelling error, then
 the cost of fixing it, running the CI jobs and getting the reviews
 again is higher than living with a simple typo.

Also remember that submitting patches which fix typos is a great way
for new contributors to gain ATC status and thus qualify for a free design
summit pass, so it is good to leave plenty of typos around :-P

So we should file more bugs with low-hanging fruit for these? I kind of
like that idea. Those bugs can be immediately triaged unlike some of
Glance’s other bugs.

Cheers,

Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Jay S. Bryant

Duncan,

If you see any of these coming out of Cinder (not oslo), please file a bug 
and assign it to me.


I think I have them all removed but need to know if I have missed anything.

Thanks!
Jay

On 03/12/2015 04:24 AM, Duncan Thomas wrote:

ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
/usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19: 
DeprecationWarning: The oslo namespace package is deprecated. Please 
use oslo_i18n instead.

  from oslo import i18n
/opt/stack/cinder/cinder/openstack/common/policy.py:98: 
DeprecationWarning: The oslo namespace package is deprecated. Please 
use oslo_config instead.

  from oslo.config import cfg
/opt/stack/cinder/cinder/openstack/common/policy.py:99: 
DeprecationWarning: The oslo namespace package is deprecated. Please 
use oslo_serialization instead.

  from oslo.serialization import jsonutils
/opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The 
oslo namespace package is deprecated. Please use oslo_messaging instead.

  from oslo import messaging
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22: 
DeprecationWarning: The oslo namespace package is deprecated. Please 
use oslo_utils instead.

  from oslo.utils import excutils


What are normal, non-developer users supposed to do with such 
warnings, other than:

a) panic or b) Assume openstack is beta quality and then panic

--
Duncan Thomas
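
To make the two sides of this concrete (this is just standard Python, not an
official Cinder or oslo recommendation): the warning goes away at the source
once an import uses the new package namespace, and operators can filter it
out in the meantime.

    # In-tree fix: import from the new package namespace instead of the
    # deprecated 'oslo.' one; this removes the warning at the source.
    from oslo_config import cfg        # new-style, no DeprecationWarning
    # from oslo.config import cfg      # old-style, emits the warning above

    # Operator-side workaround until every import is converted: filter the
    # warning, e.g. in a wrapper script, or export the standard
    # PYTHONWARNINGS=ignore::DeprecationWarning environment variable.
    import warnings
    warnings.filterwarnings(
        'ignore',
        message=r'The oslo namespace package is deprecated.*',
        category=DeprecationWarning)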


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Do we need release announcements for all the things?

2015-03-12 Thread Clint Byrum
I spend a not-insignificant amount of time deciding which threads to
read and which to fully ignore each day, so extra threads mean extra
work, even with a streamlined workflow of single-key-press-per-thread.

So I'm wondering what people are getting from these announcements being
on the discussion list. I feel like they'd be better off in a weekly
digest, on a web page somewhere, or perhaps with a tag that could be
filtered out for those that don't benefit from them.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-12 Thread Jay Pipes

On 03/12/2015 03:08 PM, John Dickinson wrote:

I'd like a little more info here.

Is Horizon relying on the X-Timestamp header for reads (GET/HEAD)? If so, I 
think that's somewhat odd, but not hugely problematic. Swift has been returning 
an X-Timestamp header since patch b20264c9d3196 (which landed 3 years ago -- 
April 2012).


OK, so there is a documentation bug here that X-Timestamp should be part 
of the Swift REST API. It currently is not documented that X-Timestamp 
is a non-optional HTTP Header, and therefore the RadosGW folks did not 
send X-Timestamp headers back in the container response.



The X-Timestamp header is certainly part of the Swift API. It is required for 
container-sync functionality (implemented in early 2011) so that two clusters 
can communicate about the proper timestamp of the objects.


OK. Sounds like an implementation detail leaking out of the API to me. 
In other words, RadosGW (which is attempting to expose a Swift API in 
front of Ceph backend storage) needs to expose this X-Timestamp header 
even if it implements container-sync using an entirely different 
mechanism...



I'm not sure if this actually matters for Horizon in this specific case. But 
it's certainly true that Swift requires and uses the X-Timestamp header for 
implementing core functionality. Anyone talking to a Swift endpoint can assume 
that there is an X-Timestamp header in the response and use it as they see fit.


Anyone talking to an upstream Swift *implementation* can assume that 
header will be there :) But, the header is not actually documented in 
the Swift *API* and therefore one cannot make this assumption.


Thus the confusion. :)

Anyway, sounds like X-Timestamp should be documented as part of the 
official Swift API. What about the X-Object-Meta-Mtime header in the 
related bug? That, too, is problematic for a similar reason. Is that 
header part of the Swift API as well?


Best,
-jay



--John





On Mar 9, 2015, at 12:53 PM, Anne Gentle annegen...@justwriteclick.com wrote:



On Mon, Mar 9, 2015 at 2:32 PM, Matthew Farina m...@mattfarina.com wrote:
David,

FYI, the last time I chatted with John Dickinson I learned there are numerous 
API elements not documented. Not meant to be private but the docs have not kept 
up. How should we handle that?


I've read through this thread and the bug comments and searched through the 
docs and I'd like more specifics: which docs have not kept up? Private API docs 
for swift internal workings? Or is this a header that could be in _some_ swift 
(not ceph) deployments?

Thanks,
Anne


Thanks,
Matt Farina

On Sat, Mar 7, 2015 at 5:25 PM, David Lyle dkly...@gmail.com wrote:
I agree that Horizon should not be requiring optional headers. Changing status 
of bug.

On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes jaypi...@gmail.com wrote:
Added [swift] to topic.

On 03/03/2015 07:41 AM, Matthew Farina wrote:
Radoslaw,

Unfortunately the documentation for OpenStack has some holes. What you
are calling a private API may be something missed in the documentation.
Is there a documentation bug on the issue? If not, one should be created.

There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP headers 
are part of the public Swift API:

http://developer.openstack.org/api-ref-objectstorage-v1.html

I don't believe this is a bug in the Swift API documentation, either. John 
Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is required for the 
Swift implementation of container replication (John, please do correct me if 
wrong on that).

But that is the private implementation and not part of the public API.

In practice OpenStack isn't a specification and implementation. The
documentation has enough missing information you can't treat it this
way. If you want to contribute to improving the documentation I'm sure
the documentation team would appreciate it. The last time I looked there
were a number of undocumented public swift API details.

The bug here is not in the documentation. The bug is that Horizon is coded to rely on HTTP 
headers that are not in the Swift API. Horizon should be fixed to use 
DICT.get('X-Timestamp') instead of doing DICT['X-Timestamp'] in its view 
pages for container details. There are already patches up that the Horizon developers have, 
IMO erroneously, rejected (stating this is a problem in Ceph RadosGW for not properly 
following the Swift API).

Best,
-jay
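
To make the suggested fix concrete, a minimal sketch (illustrative only --
this is not Horizon's actual view code; the dict stands in for whatever
headers the container-detail view receives):

    # Headers as a RadosGW-backed endpoint might return them: note there is
    # no X-Timestamp, since that header is not in the documented API.
    radosgw_style_headers = {
        'Last-Modified': 'Thu, 12 Mar 2015 19:00:00 GMT',
    }

    # Fragile form from the bug report -- raises KeyError against RadosGW:
    # timestamp = radosgw_style_headers['X-Timestamp']

    # Defensive form suggested above -- degrades gracefully instead:
    timestamp = radosgw_style_headers.get('X-Timestamp')
    if timestamp is None:
        timestamp = radosgw_style_headers.get('Last-Modified')

    print(timestamp)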

Best of luck,
Matt Farina

On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski
rzarzyn...@mirantis.com mailto:rzarzyn...@mirantis.com wrote:

 Guys,

 I would like to discuss a problem which can be seen in Horizon: breaking
 the boundaries of the public, well-specified Object Storage API in favour
 of utilizing Swift-specific extensions. Ticket #1297173 [1] may serve
 as a good example of such a violation. It is about relying on a
 non-standard (in terms of the OpenStack Object Storage API v1) and
 undocumented HTTP header provided by Swift. In 
