Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend rally verify to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-11 Thread Andrey Kurilin
 $ rally verify https://github.com/openstack/nova start

As one of the end users of Rally, I dislike this construction because I don't
want to remember repo links; they are too long for me :)

On Wed, Mar 11, 2015 at 12:49 PM, Aleksandr Maretskiy 
amarets...@mirantis.com wrote:

 The idea is great, but IMHO we should move all project-specific code out of
 rally, so:

   * rally plugin should be a part of project (for example, located in
 functional tests directory)
   * use {project url} instead of {project name} in rally verify command,
 example:

 $ rally verify https://github.com/openstack/nova start


 On Tue, Mar 10, 2015 at 6:01 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi,

 I like this idea. We use Rally for OpenStack cloud verification at scale,
 and how to run all the functional tests from each project with one script is
 a real issue. If Rally does this, I will use it to run
 these tests.

 Thank you!

 On Mon, Mar 9, 2015 at 6:04 PM, Chris Dent chd...@redhat.com wrote:

 On Mon, 9 Mar 2015, Davanum Srinivas wrote:

  2. Is there a test project with Gabbi based tests that you know of?


 In addition to the ceilometer tests that Boris pointed out gnocchi
 is using it as well:

https://github.com/stackforge/gnocchi/tree/master/gnocchi/tests/gabbi

  3. What changes if any are needed in Gabbi to make this happen?


 I was unable to tell from the original what this is and how gabbi
 is involved but the above link ought to be able to show you how
 gabbi can be used. There's also the docs (which could do with some
 improvement, so suggestions or pull requests welcome):

http://gabbi.readthedocs.org/en/latest/
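To give a flavour of what those docs describe: a gabbi test file is declarative YAML, run in order against an HTTP API. A minimal sketch (the /resources endpoint is hypothetical, and the exact key set is best checked against the docs above):

```yaml
# Each entry is one HTTP request plus assertions on the response.
tests:
  - name: list resources
    url: /resources
    method: GET
    status: 200
    response_headers:
      content-type: application/json

  - name: create resource
    url: /resources
    method: POST
    request_headers:
      content-type: application/json
    data:
      name: example
    status: 201
```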

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc








-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Clark Boylan
On Wed, Mar 11, 2015, at 05:59 AM, Sean Dague wrote:
 Galera Upstream Testing
 ---
 
 The majority of deploys run with Galera MySQL. There was a question
 about whether or not we could get that into upstream testing pipeline
 as that's the common case.
 
It should be possible to run a two node galera cluster with garbd in the
current two node test environment. Or run an extra mysql on one of the
two nodes for a total of three database daemons (likely the compute node
as the controller already has a bunch of memory stealing daemons
running).

These aiopcpu (all in one plus compute node) tests run against
devstack-gate, devstack, tempest, nova, and neutron's experimental
queues and can be triggered by leaving a `check experimental` comment on
changes belonging to any of these projects. If we need to add the job to
the experimental queues of more projects we can do that too.

Clark



Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-11 Thread Chen CH Ji
I would think an extension is needed on v2.
For v2.1, a microversion is one way, but I'm not very sure it's needed.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Chris Friesen chris.frie...@windriver.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   03/11/2015 04:35 PM
Subject:[openstack-dev] [nova] need input on possible API change for
bug #1420848




Hi,

I'm working on bug #1420848 which addresses the issue that doing a
service-disable followed by a service-enable against a down compute node
will result in the compute node going up for a while, possibly causing
delays to operations that try to schedule on that compute node.

The proposed solution is to add a new reported_at field in the DB schema
to track when the service calls _report_state().

The backend is straightforward, but I'm trying to figure out the best way
to represent this via the REST API response.

Currently the response includes an updated_at property, which maps
directly to the auto-updated updated_at field in the database.

Would it be acceptable to just put the reported_at value (if it exists)
in the updated_at property?  Logically the reported_at value is just a
determination of when the service updated its own state, so an argument
could be made that this shouldn't break anything.

Otherwise, by my reading of
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
it seems like if I wanted to add a new reported_at property I would need
to do it via an API extension.

Anyone have opinions?

Chris



Re: [openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Jay S. Bryant

All,

I will sponsor this.

Patch just needs to be rebased.

Jay

On 03/11/2015 10:20 AM, Tom Barron wrote:


I hereby solicit a feature freeze exception for the NFS backup review [1].

Although only about 140 lines of non-test code, this review completes
the implementation of the NFS backup blueprint [2].  Most of the
actual work for this blueprint was a refactor of the Swift backup
driver to
abstract the backup/restore business logic from the use of the Swift
object store itself as the backup repository.  With the help of Xing
Yang, Jay Bryant, and Ivan Kolodyazhny, that review [3] merged
yesterday and made the K3 FFE deadline.

In evaluating this FFE request, please take into account the following
considerations:

* Without the second review, the goal of the blueprint remains
  unfulfilled.

* This code was upstream in January and was essentially complete
  with only superficial changes since then.

* As of March 5 this review had two core +2s.  Delay since then has
  been entirely due to wait time on the dependent review and need
  for rebase given upstream code churn.

* No risk is added outside NFS backup service itself since the
  changes to current code are all in the core refactor and that
  is already merged.

If this FFE is granted, I will give the required rebase my immediate
attention.

Thanks.

- --
Tom Barron
t...@dyncloud.net

[1] - https://review.openstack.org/#/c/149726
[2] - https://blueprints.launchpad.net/cinder/+spec/nfs-backup
[3] - https://review.openstack.org/#/c/149725



Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-11 Thread Ian Cordasco
I think part of the problem is that we’re still relying on distribution
packaged dependencies. If you look earlier in the logs for:
http://logs.openstack.org/51/146651/15/check/check-grenade-dsvm/2485c9e/logs/grenade.sh.txt.gz#_2015-03-11_00_58_45_129 you'll see that
“python-unittest2” is being installed and that it is installed in
dist-packages instead of site-packages. Haven’t there been multiple
problems with distribution based packages in the past? It would seem wiser
to stop relying on the system packages for installation if at all possible.
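One quick way to tell which mechanism owns a given package is its import path. A sketch (using the stdlib json module as a stand-in, since what is installed varies per system; substitute unittest2, testtools, etc.):

```shell
# Classify where a Python module was installed from by its import path:
#   dist-packages -> distribution package (apt/yum),
#   site-packages -> pip-installed,
#   anything else -> standard library.
loc=$(python3 -c "import json, os; print(os.path.dirname(json.__file__))")
echo "$loc"
case "$loc" in
  *dist-packages*) echo "distribution package" ;;
  *site-packages*) echo "pip-installed" ;;
  *)               echo "standard library" ;;
esac
```

For json this reports "standard library"; a distro-installed python-unittest2 would land in dist-packages.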

On 3/11/15, 09:43, Chris Dent chd...@redhat.com wrote:

On Wed, 11 Mar 2015, Alan Pevec wrote:

 So, we can work around this in devstack, but it seems like there is a
 more fundamental bug here that setup project isn't following
dependencies.

 Dep chain was: testtools (from
 zake=0.1-tooz=0.12,=0.3-ceilometer==2014.2.3.dev2)
 Unneeded _runtime_ dependency on testtools was removed in
 
https://github.com/yahoo/Zake/commit/215620ca51c3c883279ba62ccc860a274219ecc1

 Is this just another 'pip is drunk' issue in it not actually satisfying
 requirements?

 Seems that pip is drunk by design, clarkb explained that pip only
 updates deps if you pass the --upgrade flag.

That's why I did this for devstack:
https://review.openstack.org/#/c/161195/

Presumably it might be useful other places?

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Tom Barron

I hereby solicit a feature freeze exception for the NFS backup review [1].

Although only about 140 lines of non-test code, this review completes
the implementation of the NFS backup blueprint [2].  Most of the
actual work for this blueprint was a refactor of the Swift backup
driver to
abstract the backup/restore business logic from the use of the Swift
object store itself as the backup repository.  With the help of Xing
Yang, Jay Bryant, and Ivan Kolodyazhny, that review [3] merged
yesterday and made the K3 FFE deadline.

In evaluating this FFE request, please take into account the following
considerations:

   * Without the second review, the goal of the blueprint remains
 unfulfilled.

   * This code was upstream in January and was essentially complete
 with only superficial changes since then.

   * As of March 5 this review had two core +2s.  Delay since then has
 been entirely due to wait time on the dependent review and need
 for rebase given upstream code churn.

   * No risk is added outside NFS backup service itself since the
 changes to current code are all in the core refactor and that
 is already merged.

If this FFE is granted, I will give the required rebase my immediate
attention.

Thanks.

- --
Tom Barron
t...@dyncloud.net

[1] - https://review.openstack.org/#/c/149726
[2] - https://blueprints.launchpad.net/cinder/+spec/nfs-backup
[3] - https://review.openstack.org/#/c/149725



[openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-11 Thread Chris Friesen


Hi,

I'm working on bug #1420848 which addresses the issue that doing a 
service-disable followed by a service-enable against a down compute node 
will result in the compute node going up for a while, possibly causing delays 
to operations that try to schedule on that compute node.


The proposed solution is to add a new reported_at field in the DB schema to 
track when the service calls _report_state().


The backend is straightforward, but I'm trying to figure out the best way to 
represent this via the REST API response.


Currently the response includes an updated_at property, which maps directly to 
the auto-updated updated_at field in the database.


Would it be acceptable to just put the reported_at value (if it exists) in the 
updated_at property?  Logically the reported_at value is just a 
determination of when the service updated its own state, so an argument could be 
made that this shouldn't break anything.


Otherwise, by my reading of 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK, it 
seems like if I wanted to add a new reported_at property I would need to do it 
via an API extension.
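For illustration, an os-services entry with the proposed property might look roughly like this (abbreviated; reported_at is the hypothetical new field, and the exact surrounding fields may differ from the real response):

```json
{
  "binary": "nova-compute",
  "host": "compute-01",
  "status": "enabled",
  "state": "up",
  "updated_at": "2015-03-11T08:15:00.000000",
  "reported_at": "2015-03-11T08:14:30.000000"
}
```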


Anyone have opinions?

Chris



Re: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

2015-03-11 Thread Timothy Swanson (tiswanso)
I don't have a solution; just chiming in that I see the same error with 
devstack pulled from master on a new Ubuntu Trusty VM created last night.

'pip install --upgrade wheel' indicates:
Requirement already up-to-date: wheel in /usr/local/lib/python2.7/dist-packages

Haven't gotten it cleared up.

Thanks,

Tim

On Mar 2, 2015, at 2:11 AM, Smigiel, Dariusz 
dariusz.smig...@intel.com wrote:



From: yuntong [mailto:yuntong...@gmail.com]
Sent: Monday, March 2, 2015 7:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [devstack] pip wheel' requires the 'wheel' package

Hello,
I got an error when trying to run ./stack.sh:
2015-03-02 05:58:20.692 | net.ipv4.ip_local_reserved_ports = 35357,35358
2015-03-02 05:58:20.959 | New python executable in tmp-venv-NoMO/bin/python
2015-03-02 05:58:22.056 | Installing setuptools, pip...done.
2015-03-02 05:58:22.581 | ERROR: 'pip wheel' requires the 'wheel' package. To 
fix this, run: pip install wheel

After pip install wheel, got same error.
In [2]: wheel.__path__
Out[2]: ['/usr/local/lib/python2.7/dist-packages/wheel']
In [5]: pip.__path__
Out[5]: ['/usr/local/lib/python2.7/dist-packages/pip']

$ which python
/usr/bin/python

As shown above, wheel can be imported successfully.
Any hints?

Thanks.

Did you try pip install --upgrade wheel?
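One hedged reading of the log above: stack.sh creates its own virtualenv (the tmp-venv-NoMO in the output), and a venv's pip only sees packages installed into that venv, so a wheel sitting in the system dist-packages doesn't help. A sketch of the isolation (using python3 -m venv; devstack at the time used virtualenv, but the behaviour is the same):

```shell
# Create a throwaway venv (--without-pip keeps it offline-friendly) and
# show that its interpreter has a different prefix from the system
# python, i.e. system-wide packages are not on its path by default.
python3 -m venv --without-pip /tmp/demo-venv
system_prefix=$(python3 -c "import sys; print(sys.prefix)")
venv_prefix=$(/tmp/demo-venv/bin/python -c "import sys; print(sys.prefix)")
echo "system: $system_prefix"
echo "venv:   $venv_prefix"
# If this diagnosis is right, the fix is to install wheel *inside* the
# venv that devstack builds, e.g. (path is hypothetical):
#   /path/to/tmp-venv-XXXX/bin/pip install wheel
```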


Re: [openstack-dev] [nova] Shared storage support

2015-03-11 Thread Chris Friesen

On 03/02/2015 06:24 PM, Jay Pipes wrote:

On 02/25/2015 06:41 AM, Daniel P. Berrange wrote:

On Wed, Feb 25, 2015 at 02:08:32PM +, Gary Kotton wrote:

I understand that this is a high or critical bug but I think that
we need to discuss more on it and try have a more robust model.


What I'm not seeing from the bug description is just what part of
the scheduler needs the ability to have total summed disk across
every host in the cloud.


The scheduler does not need to know this information at all.


Actually, there's a valid reason for the scheduler to know this.

If I want to schedule 5 instances, each with 10GB disk, and I have 5 compute 
nodes each reporting 30 GB of space, it really does matter to the scheduler 
whether those compute nodes are all on a single shared storage device or if 
they've all got separate storage.


The scheduler is always operating off stale data (as reported by the compute 
nodes some time ago) plus its own predictions.  If its predictions don't match 
the actual behaviour then its decisions are going to be wrong.


Chris




Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread John Belamaric
Here is a compromise option. The pluggable IPAM will be optionally enabled
in Kilo. We could introduce the restriction, but only when pluggable IPAM
is enabled. Support for having a tenant with overlapping IP space, along
with pluggable IPAM would wait until Liberty, when we can fully implement
the address scope concept. This concept was discussed during the spec
reviews of pluggable IPAM, and is simply adding a first-class object that
represents a layer 3 address space. A subnet would belong to a specific
scope, and any IP within a scope would be unique. To support the tenant
with overlapping space, you would create multiple scopes for that tenant.

This option maintains backward compatibility for existing deployments,
while allowing us to improve the model moving forward.

John

On 3/11/15, 2:22 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Tue, Mar 10, 2015 at 11:34 AM, Gabriel Bezerra
gabri...@lsd.ufcg.edu.br wrote:
 Em 10.03.2015 14:24, Carl Baldwin escreveu:
 I'd vote against such a restriction, but for throwing an error in
 case of creating a router between the subnets.

 I can imagine a tenant running multiple instances of an application,
each
 one with its own network that uses the same address range, to minimize
 configuration differences between them.

I see your point but yuck!  This isn't the place to skimp on
configuration changes.

Carl



Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread John Belamaric
On 3/12/15, 12:46 AM, Carl Baldwin c...@ecbaldwin.net wrote:


When talking with external IPAM to get a subnet, Neutron will pass
both the cidr as the primary identifier and the subnet_id as an
alternate identifier.  External systems that do not allow overlap can


Recall that IPAM driver instances are associated with a specific subnet
pool. As long as we do not allow overlap within a *pool* this is not
necessary. The pool will imply the scope (currently implicit, with one per
tenant), which the driver/external system would use to differentiate the
CIDR. As I mentioned in an earlier email, this would introduce the
uniqueness constraint in Kilo, but only when pluggable IPAM is enabled.

John






Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Sylvain Bauza

Thanks Sean for writing up this report, greatly appreciated.
Comments inline.

Le 11/03/2015 13:59, Sean Dague a écrit :

The last couple of days I was at the Operators Meetup acting as Nova
rep for the meeting. All the sessions were quite nicely recorded to
etherpads here - https://etherpad.openstack.org/p/PHL-ops-meetup

There was both a specific Nova session -
https://etherpad.openstack.org/p/PHL-ops-nova-feedback as well as a
bunch of relevant pieces of information in other sessions.

This is an attempt at some summary; anyone else who was in
attendance, please feel free to correct me if I'm interpreting something
incorrectly. There was a lot of content there, so this is in no way a
comprehensive list, just the highlights that I think make the most
sense for the Nova team.

=
  Nova Network - Neutron
=

This remains listed as the #1 issue from the Operator Community on
their burning issues list
(https://etherpad.openstack.org/p/PHL-ops-burning-issues L18). During
the tags conversation we straw polled the audience
(https://etherpad.openstack.org/p/PHL-ops-tags L45) and about 75% of
attendees were over on neutron already. However, those on Nova Network
were disproportionately the largest clusters and longest-standing
OpenStack users.

Of those on nova-network about 1/2 had no interest in being on
Neutron (https://etherpad.openstack.org/p/PHL-ops-nova-feedback
L24). Some of the primary reasons were the following:

- Complexity concerns - neutron has a lot more moving parts
- Performance concerns - nova multihost means there is very little
   between guests and the fabric, which is really important for the HPC
   workload use case for OpenStack.
- Don't want OVS - ovs adds additional complexity, and performance
   concerns. Many large sites are moving off ovs back to linux bridge
   with neutron because they are hitting OVS scaling limits (especially
   if on UDP) - (https://etherpad.openstack.org/p/PHL-ops-OVS L142)

The biggest disconnect in the model seems to be that Neutron assumes
you want self service networking. Most of these deploys don't. Or even
more importantly, they live in an organization where that is never
going to be an option.

Neutron provider networks come close, except they don't provide for
floating IP / NAT.

Going forward: I think the gap analysis probably needs to be revisited
with some of the vocal large deployers. I think we assumed the
functional parity gap was closed with DVR, but it's not clear that in its
current format it actually meets the n-net multihost users' needs.

===
  EC2 going forward
===

Having a sustainable EC2 is of high interest to the operator
community. Many large deploys have some users that were using AWS
prior to using OpenStack, or currently are using both. They have
preexisting tooling for that.

There didn't seem to be any objection to the approach of an external
proxy service for this function -
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L111). Mostly
the question is timing, and the fact that no one has validated the
stackforge project. The fact that we landed everything people need to
run this in Kilo is good, as these production deploys will be able to
test it for their users when they upgrade.


  Burning Nova Features/Bugs


Hierarchical Projects Quotas


Hugely desired feature by the operator community
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L116). Missed
Kilo. This made everyone sad.

Action: we should queue this up as early Liberty priority item.

Out of sync Quotas
--

https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63

The quotas code is quite racy (this is kind of a known issue if you look at
the bug tracker). It was actually marked as a top soft spot during
last fall's bug triage -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

There is an operator proposed spec for an approach here -
https://review.openstack.org/#/c/161782/

Action: we should make a solution here a top priority for enhanced
testing and fixing in Liberty. Addressing this would remove a lot of
pain from ops.

Reporting on Scheduler Fails


Apparently, some time recently, we stopped logging scheduler fails
above DEBUG, and that behavior also snuck back into Juno as well
(https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This
has made tracking down root cause of failures far more difficult.

Action: this should hopefully be a quick fix we can get in for Kilo
and backport.
It's unfortunate that failed scheduling attempts produce only an 
INFO log. A quick fix could be at least to turn the verbosity up to WARN 
so it would be noticed more easily (including the whole filter stack 
with its results).
That said, I'm pretty against any proposal which would expose those 
specific details (ie. the number 

Re: [openstack-dev] [Fuel] Recent issues with our review workflow

2015-03-11 Thread Dmitry Borodaenko
It's really fine to create backports, even of other people's commits,
as long as you don't do it too early (and thus make it possible to
forget to update backports in line with newer patch sets of the master
commit and land inconsistent implementations to different release
series) and don't mangle Change-Id and Author (and turn git history
examination into an IT archaeology exercise).

Nothing personal: I've seen the same mistakes done by many people, so
I want to clarify why it's important.
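For what it's worth, `git cherry-pick -x` keeps both intact: the author is preserved, the Change-Id survives because it lives in the commit message, and -x records which master commit the backport came from. A self-contained sketch with made-up names and a made-up Change-Id:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Original Author"
git config user.email "author@example.com"
echo v1 > file.txt
git add file.txt
git commit -qm "Initial commit"
git branch stable/6.0                  # the stable series forks here
echo v2 > file.txt
git commit -qam "Fix bug

Change-Id: I0123456789abcdef0123456789abcdef01234567"
sha=$(git rev-parse HEAD)
# The backporter has a different identity...
git config user.name "Backporter"
git config user.email "backporter@example.com"
git checkout -q stable/6.0
git cherry-pick -x "$sha"
# ...but authorship and Change-Id survive the cherry-pick:
git log -1 --format='author: %an'
git log -1 --format=%B | grep Change-Id
```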

On Wed, Mar 11, 2015 at 5:38 AM, Bartlomiej Piotrowski
bpiotrow...@mirantis.com wrote:
 I'll keep it in mind not to create unnecessary backports, although I really
 find it more convenient to do them once I submit changes to master for
 review. I apologize for [4], it indeed was wrong and it won't happen again.

 Regards,
 Bartłomiej

 On Wed, Mar 11, 2015 at 12:36 AM, Ryan Moe r...@mirantis.com wrote:

 Here are some examples of proposing changes prior to being merged in
 master [0][1][2][3][4]. [0] is a perfect example of why this isn't a good
 process. A change was proposed to stable/6.0 before master was merged, and
 now the change to master needs to be reworked based on review feedback.
 Premature backporting just creates unnecessary additional work. I'd also
 like to give a friendly reminder to make sure we maintain the Change-Id and
 author of any change we backport.

 The wiki [5] has also been updated to make this explicit.

 [0]
 https://review.openstack.org/#/q/Ief8186006386af8ae7e40cffeeaeef5a5c0f3c70,n,z
 [1]
 https://review.openstack.org/#/q/I4c94bb03501f4238ead2378cf504485b7d67b236,n,z
 [2]
 https://review.openstack.org/#/q/Ic15a3bfb6238e4281b06aae0a3f9fe4abf96590d,n,z
 [3]
 https://review.openstack.org/#/q/I7ab6dc2341821c3b82ef3d3ac63b64a5a9958fa9,n,z
 [4]
 https://review.openstack.org/#/q/Iff947f0053577f19441c04101e5a35a7820e40a0,n,z
 [5]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

 Thanks,
 Ryan

 On Tue, Mar 10, 2015 at 4:20 AM, Tomasz Napierala
 tnapier...@mirantis.com wrote:


  On 09 Mar 2015, at 18:21, Ryan Moe r...@mirantis.com wrote:
 
  Hi All,
 
  I've noticed a few times recently where reviews have been abandoned by
  people who were not the original authors. These reviews were only days old
  and there was no prior notice or discussion. This is both rude and
   discouraging to contributors. Reasons for abandoning should be discussed on
   the review and/or in email before any action is taken.

 Hi Ryan,

 I was trying to find any examples, and the only one I see is:
 https://review.openstack.org/#/c/152674/

 I spoke to Bogdan and he agreed it was not proper way to do it, but they
 were in a rush - I know, it does not explain anything really.

 Do you have any other examples? I’d like to clarify them

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com


















-- 
Dmitry Borodaenko



[openstack-dev] [OSSN 0045] Vulnerable clients allow a TLS protocol downgrade (FREAK)

2015-03-11 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Vulnerable clients allow a TLS protocol downgrade (FREAK)
- ---

### Summary ###
Some client-side libraries, including un-patched versions of OpenSSL,
contain a vulnerability which can allow a man-in-the-middle (MITM) to
force a TLS version downgrade. Even though this vulnerability exists in
the client side, an attack known as FREAK is exploitable when TLS
servers offer weak cipher choices. This security note provides guidance
to mitigate the FREAK attack on the server side, so that TLS provides
reasonable security for even un-patched clients.

### Affected Services / Software ###
Any service using TLS. Depending on the backend TLS library, this
can include many components of an OpenStack cloud:

- - OpenStack services
- - OpenStack clients
- - Web servers (Apache, Nginx, etc)
- - SSL/TLS terminators (Stud, Pound, etc)
- - Proxy services (HAProxy, etc)
- - Miscellaneous services (eventlet, syslog, ldap, smtp, etc)

### Discussion ###
TLS connections are established by a process known as a TLS handshake.
During this process a client first sends a message to the server known
as HELLO, where among other things the client lists all of the TLS
encryption ciphers it supports. In the next step, the server responds
with its own HELLO packet, in which the server picks one of the cipher
options the client offered. After this the client and server continue on
to securely exchange a secret which becomes a master key.

The FREAK attack exploits a flaw in client logic in which vulnerable
clients don't actually check that the cipher which was selected by the
server was one they had offered in the first place. This creates the
possibility that an attacker on the network somewhere between the client
and server can alter the client's HELLO packet, removing all choices
except for very weak ciphers known as export-grade ciphers. If the
server supports these weak ciphers, it will (regretfully) choose one, as
it appears to be the only option that the client supports, and
send it back in the server HELLO message.

Export grade ciphers are a legacy of a time when the US government
prohibited the export of cryptographic ciphers which were stronger than
512 bits for asymmetric ciphers and 40 bits for symmetric ciphers. Today
these ciphers are easily and quickly crackable using commodity hardware
or cloud resources.

### Recommended Actions ###
Since this is a vulnerability in client logic, the best option is to
ensure that all clients update to the latest version of the affected
library. For more details about upgrading OpenSSL please see the link to
the OpenSSL advisory in the Further discussion section below. Since it
is infeasible to assume that all clients have updated, we should also
mitigate this on the TLS server side.

To mitigate the FREAK attack on the server side, we need to remove
support for any ciphers which are weak. This is to prevent a MITM from
forcing the negotiation of a weak cipher. In particular we need to
remove support for any export grade ciphers, which are especially weak.

The first step is to find which cipher suites your TLS server currently
supports. Two useful tools exist for this: the SSL Server Test at
Qualys SSL Labs can be used to scan any web-accessible endpoints, and
SSLScan is a command line tool which attempts a TLS connection to a
server with all possible cipher suites. Please see the Tools section
below for links to both.

The specific steps required to configure which cipher suites are
supported in a TLS deployment depend on the software and configuration,
and are beyond the scope of this note. Some good starting places are
provided below in the section: Resources for configuring TLS options.
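As a concrete illustration of cipher filtering, here is a hedged sketch using Python's ssl module. The cipher string shown is an assumption for illustration only, not a vetted recommendation; prefer the hardening resources referenced in this note for your actual configuration.

```python
import ssl

# Build a server-side context and exclude weak suites. "!EXPORT" removes
# export-grade ciphers; "!aNULL:!eNULL" removes unauthenticated and
# null-encryption suites; "HIGH" keeps only strong ones.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")

# Verify that no export-grade suite survived the filter.
names = [c["name"] for c in ctx.get_ciphers()]
assert not any("EXP" in name for name in names)
print("%d cipher suites enabled, none export-grade" % len(names))
```

Recent OpenSSL builds no longer ship export ciphers at all, so on an up-to-date system the "!EXPORT" entry is belt-and-braces; it still documents intent and protects older builds.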

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0045
Original LaunchPad Bug : N/A
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
CVE: CVE-2015-0204 (OpenSSL)
Further discussion of the issue:

http://blog.cryptographyengineering.com/2015/03/attack-of-week-freak-or-factoring-nsa.html
https://www.smacktls.com/
https://www.openssl.org/news/secadv_20150108.txt
Tools:
SSLScan: http://sourceforge.net/projects/sslscan/
SSL Server Test: https://www.ssllabs.com/ssltest/
Resources for configuring TLS options:
In OpenStack:
http://docs.openstack.org/security-guide/content/tls-proxies-and-http-services.html
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJVAKEbAAoJEJa+6E7Ri+EV1zsH/3xbCGhalwARdommJ6hWhoMz
3L3LnkixTCjpapOX+oiGvob1PRr5fRkc9T0k0pNXgNIa2WSPrTCLFn7yNbwBV6pz
ZCtiiL9eyp9YqAMf5RMtnKM4jLuyoQuZfQ9NtlJoiqOLzgkhD2oY6FU9wAcA8M2D
7VmLfQtMEH4MBbERMEPwG6Rq6rMeoL/kbjN1aB4b3WDxrJx/1iNX0gJxaMPqERz6
oYiQxgRsInwEtT66/slDOc9R8vD8u40pSbm7TBMQUU/orwVzakkwwM5XA/5Q2cIU
IIRbXmxrwtTKIEBL1PmDldxBrXEQsLnKyzfquu1gEu0m8FGvdfgcXxZaMNwixbI=
=qgzB
-END PGP SIGNATURE-


Re: [openstack-dev] [Solum] Should app names be unique?

2015-03-11 Thread Murali Allada
The only reason this came up yesterday is because we wanted Solum's 'app 
create' behavior to be consistent with other OpenStack services.


However, if heat has a unique stack name constraint and glance/nova don't, then 
the argument of consistency does not hold.


I'm still of the opinion that we should have a unique name constraint for apps 
and languagepacks within a tenant's namespace, as it can get very confusing if 
a user creates multiple apps with the same name.


Also, customer research done here at Rackspace has shown that users prefer 
using 'names' rather than 'UUIDs'.


-Murali




From: Devdatta Kulkarni devdatta.kulka...@rackspace.com
Sent: Wednesday, March 11, 2015 2:48 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum] Should app names be unique?


Hi Solum team,


In yesterday's team meeting the question of whether Solum should enforce unique 
app name constraint

within a tenant came up.


As a reminder, in Solum one can create an 'app' using:

solum app create --plan-file plan-file --name app-name


Currently Solum does support creating multiple apps with the same name.

However, in yesterday's meeting we were debating/discussing whether this should 
be the case.

The meeting log is available here:

http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-03-10-21.00.log.html



To set the context for discussion, consider the following:

- heroku does not allow creating another app with the same name as that of an 
already existing app

- github does not allow creating another repository with the same name as that 
of an already existing repo


Thinking about why this might be the case for heroku, one aspect that comes to 
mind is the setting of a 'remote' using
the app name. When we do a 'git push', it happens to this remote.
When we don't specify a remote in 'git push' command, git defaults to using the 
'origin' remote.
Even if multiple remotes with the same name were to be possible, when using an 
implicit command such as 'git push',
in which some of the input comes from the context, the system will not be able 
to disambiguate which remote to use.
So requiring unique names ensures that there is no ambiguity when using such 
implicit commands.
This might also be the reason why on github we cannot create a repository with 
an already existing name.

But this is just a guess for why unique names might be required. I could be 
totally off.

I think Solum's use case is similar.
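The ambiguity argument can be made concrete with a small sketch (hypothetical data; every name and ID below is invented): resolving an app by bare name, the way an implicit command would have to, fails as soon as a tenant owns two apps with the same name.

```python
# Hypothetical per-tenant app listing; IDs are invented placeholders.
apps = [
    {"id": "6f1a", "name": "web"},
    {"id": "9c2b", "name": "web"},
    {"id": "41d0", "name": "api"},
]

def resolve(name):
    # Name-based lookup, as an implicit CLI command would perform it.
    matches = [a for a in apps if a["name"] == name]
    if not matches:
        raise LookupError("no app named %r" % name)
    if len(matches) > 1:
        raise LookupError("ambiguous name %r: %d apps match" % (name, len(matches)))
    return matches[0]

print(resolve("api")["id"])  # unique name resolves cleanly -> 41d0
# resolve("web") raises LookupError: the CLI would have to fall back to
# asking the user for a UUID, defeating the point of name-based commands.
```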

Agreed that Solum currently does not host application repositories and so there 
is no question of
Solum-generated remotes. But allowing non-unique app names
might make it difficult to support this feature in the future.

As an aside, I checked what position other Openstack services take on this 
issue.
1) Heat enforces unique stack-name constraint.
2) Nova does not enforce this constraint.


So it is clear that within Openstack there is no consistency on this issue.


What should Solum do?


Thoughts?


Best regards,

Devdatta


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Sean Dague
On 03/11/2015 12:53 PM, Sylvain Bauza wrote:
 Reporting on Scheduler Fails
 

 Apparently, some time recently, we stopped logging scheduler fails
 above DEBUG, and that behavior also snuck back into Juno as well
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This
 has made tracking down root cause of failures far more difficult.

 Action: this should hopefully be a quick fix we can get in for Kilo
 and backport.
 It's unfortunate that failed scheduling attempts are providing only an
 INFO log. A quick fix could be at least to turn the verbosity up to WARN
 so it would be noticed more easily (including the whole filters stack
 with their results).
 That said, I'm pretty against any proposal which would expose those
 specific details (ie. the number of hosts which are succeeding per
 filter) in an API endpoint because it would also expose the underlying
 infrastructure capacity and would ease DoS discoveries. A workaround
 could be to include in the ERROR message only the name of the filter
 which has been denied so the operators could very easily match what the
 user is saying with what they're seeing in the scheduler logs.
 
 Does that work for people ? I can provide changes for both.

Right, it definitely used to be *WARN* level in the logs.

I'll definitely help land patches to restore informative WARN messages
in the logs on scheduler failure. I think that handles the regression.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Robert Collins
On 12 March 2015 at 10:09, Sean Dague s...@dague.net wrote:
 On 03/11/2015 11:26 AM, Clark Boylan wrote:
 On Wed, Mar 11, 2015, at 05:59 AM, Sean Dague wrote:
 Galera Upstream Testing
 ---

 The majority of deploys run with Galera MySQL. There was a question
 about whether or not we could get that into upstream testing pipeline
 as that's the common case.

 It should be possible to run a two node galera cluster with garbd in the
 current two node test environment. Or run an extra mysql on one of the
 two nodes for a total of three database daemons (likely the compute node
 as the controller already has a bunch of memory stealing daemons
 running).

 These aiopcpu (all in one plus compute node) tests run against
 devstack-gate, devstack, tempest, nova, and neutron's experimental
 queues and can be triggered by leaving a `check experimental` comment on
 changes belonging to any of these projects. If we need to add the job to
 the experimental queues of more projects we can do that too.

 Can we sanely run a 2 node galera in a single node devstack? It might
 make for a more interesting default there given that time and again this
 seems to be the deployment strategy for nearly everyone, and there are
 some edge case differences that Jay Pipes has highlighted that would be
 nice to be more in front of everyone.

Yes. It's just a matter of memory and applying the effort to set it up.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-11 Thread Nikhil Komawar
I'd prefer to go with 1400 UTC unless there's a majority for 1500 UTC.

P.S. It's my feeling that ML announcements and conversations are not effective 
when taking a poll from a wider audience, so we'd discuss this a bit more in 
the next meeting and merge the votes.

Thanks,
-Nikhil


From: Louis Taylor lo...@kragniz.eu
Sent: Wednesday, March 11, 2015 10:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

On Wed, Mar 11, 2015 at 02:25:26PM +, Ian Cordasco wrote:
 I have no opinions on the matter. Either 1400 or 1500 work for me. I think
 there are a lot of people asking for it to be at 1500 instead though.
 Would anyone object to changing it to 1500 instead (as long as it is one
 consistent time for the meeting)?

I have no problem with that. I'm +1 on a consistent time, but don't mind when
it is.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Should app names be unique?

2015-03-11 Thread Keith Bray
Dev, thanks for bringing up the item about Heat enforcing unique stack names. 
My mistake on thinking it supported non-unique stack names. I remember it 
working early on, but probably got changed/fixed somewhere along the way.

My argument in IRC was one based on consistency with related/similar 
projects... So, as Murali pointed out, if things aren't consistent within 
OpenStack, then that certainly leaves much more leeway in my opinion for Solum 
to determine its own path without concern for falling in line with what the 
other projects have done (since a precedent can't be established).

To be honest, I don't agree with the argument about github, however. Github 
(and also Heroku) are using URLs, which are unique IDs. I caution against 
conflating a URL with a name; a URL in the case of github serves both 
purposes, but each (both a name and an ID) has merit as a standalone 
representation.

I am happy to give my support to enforcing unique names as the Solum default, 
but I continue to highly encourage things be architected in a way that 
non-unique names could be supported in the future on at least a per-tenant 
basis, should that need become validated by customer asks.

Kind regards,
-Keith

From: Murali Allada murali.all...@rackspace.com
Reply-To: openstack-dev@lists.openstack.org
Date: Wednesday, March 11, 2015 2:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Should app names be unique?


The only reason this came up yesterday is because we wanted Solum's 'app 
create' behavior to be consistent with other OpenStack services.


However, if heat has a unique stack name constraint and glance/nova don't, then 
the argument of consistency does not hold.


I'm still of the opinion that we should have a unique name constraint for apps 
and languagepacks within a tenant's namespace, as it can get very confusing if 
a user creates multiple apps with the same name.


Also, customer research done here at Rackspace has shown that users prefer 
using 'names' rather than 'UUIDs'.


-Murali




From: Devdatta Kulkarni devdatta.kulka...@rackspace.com
Sent: Wednesday, March 11, 2015 2:48 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum] Should app names be unique?


Hi Solum team,


In yesterday's team meeting the question of whether Solum should enforce unique 
app name constraint

within a tenant came up.


As a reminder, in Solum one can create an 'app' using:

solum app create --plan-file plan-file --name app-name


Currently Solum does support creating multiple apps with the same name.

However, in yesterday's meeting we were debating/discussing whether this should 
be the case.

The meeting log is available here:

http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-03-10-21.00.log.html



To set the context for discussion, consider the following:

- heroku does not allow creating another app with the same name as that of an 
already existing app

- github does not allow creating another repository with the same name as that 
of an already existing repo


Thinking about why this might be the case for heroku, one aspect that comes to 
mind is the setting of a 'remote' using
the app name. When we do a 'git push', it happens to this remote.
When we don't specify a remote in 'git push' command, git defaults to using the 
'origin' remote.
Even if multiple remotes with the same name were to be possible, when using an 
implicit command such as 'git push',
in which some of the input comes from the context, the system will not be able 
to disambiguate which remote to use.
So requiring unique names ensures that there is no ambiguity when using such 
implicit commands.
This might also be the reason why on github we cannot create a repository with 
an already existing name.

But this is just a guess for why unique names might be required. I could be 
totally off.

I think Solum's use case is similar.

Agreed that Solum currently does not host application repositories and so there 
is no question of
Solum-generated remotes. But allowing non-unique app names
might make it difficult to support this feature in the future.

As an aside, I checked what position other Openstack services take on this 
issue.
1) Heat enforces unique stack-name constraint.
2) Nova does not enforce this constraint.


So it is clear that within Openstack there is no consistency on this issue.


What should Solum do?


Thoughts?


Best regards,

Devdatta


__
OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread John Belamaric


On 3/12/15, 2:33 AM, Carl Baldwin c...@ecbaldwin.net wrote:

John,

I think our proposals fit together nicely.  This thread is about
allowing overlap within a pool.  I think it is fine for an external
IPAM driver to disallow such overlap for now.  However, the reference
implementation must support it for backward compatibility and so my
proposal will account for that.

I was proposing that the reference driver not support it either, and we
only handle that use case via the non-pluggable implementation in Kilo,
waiting until Liberty to handle it in the pluggable implementation.
However, I don't think it's particularly harmful to do it either way so I
am OK with this.

Thanks,
John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Sean Dague
On 03/11/2015 01:21 PM, Tim Bell wrote:
 Reporting on Scheduler Fails 
 
 Apparently, some time recently, we stopped logging scheduler
 fails above DEBUG, and that behavior also snuck back into
 Juno as well 
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78).
 This has made tracking down root cause of failures far more
 difficult.
 
 Action: this should hopefully be a quick fix we can get in
 for Kilo and backport.
 It's unfortunate that failed scheduling attempts are providing
 only an INFO log. A quick fix could be at least to turn the
 verbosity up to WARN so it would be noticed more easily
 (including the whole filters stack with their results). That
 said, I'm pretty against any proposal which would expose those
 specific details (ie. the number of hosts which are succeeding
 per filter) in an API endpoint because it would also expose the
 underlying infrastructure capacity and would ease DoS
 discoveries. A workaround could be to include in the ERROR
 message only the name of the filter which has been denied so the
 operators could very easily match what the user is saying with 
 what they're seeing in the scheduler logs.
 
 Does that work for people ? I can provide changes for both.
 
 -Sylvain
 
 In the CERN use case, we'd be OK providing more details to the end
 user. This would save on followup helpdesk tickets which could
 instead be documented (e.g. try a different availability zone or a
 smaller flavour). However, I fully understand that this is a private
 cloud oriented answer so it should be configurable.
 
 At minimum, providing the information as standard in the logs is
 needed. These scenarios are automatic helpdesk cases so giving the
 operator the information needed in the logs with the instance IDs
 saves the "I've turned on DEBUG, can you try again?" exchange.
 
 Tim

I think there is an interesting follow discussion (maybe in Vancouver)
about self service debugging. There are a lot of assumptions of data
hiding which assume your users are untrustable. In some environments,
that's appropriate. However in many of the private cloud use cases the
users are quite trusted, and exposing more information on errors would
actually be good for everyone. It would close the loops on problem
determination.

Anyway, something to ponder longer term. Handling that is beyond the
scope of the regression, but it's an interesting idea that has at least
one big operator in favor of it.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-11 Thread Clay Gerrard
On Wed, Mar 11, 2015 at 1:16 PM, Weidong Shao weidongs...@gmail.com wrote:


 the url is encoded in the object hash! This somehow entangles the data
 storage/validity with its account and makes it difficult to migrate the
 data. I guess it is too late to debate on the design of this. Do you know
 the technical reasons for doing this?




Well, yeah - can't see much good coming of trying to debate the design :)

The history may well be an aside from the issue at hand, but...

Not having a lookup/indirection layer was a design principle for achieving
the desired scaling properties of Swift.  Before Swift some of the
developers that worked on it had built another system that had a lookup
layer and it was a huge pain in the ass after a half billion objects or so
- but as with anything it's not the only way to do it, just trying
something and it seemed to work out.

I'd guess at least some of the justification came from: uri's don't change
- people change them [1].

Without a lookup layer that you can update (i.e. name -> resource, then
new_name -> resource), you can either create a new resource that happens
to have the same content as the other and delete the old, OR add some
custom namespace redirection to make the resource accessible from another
name (a vanity url middleware comes up from time to time - reseller
prefix rewrite may be as good a use-case as any).
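A rough sketch of that entanglement (simplified; real Swift also mixes a cluster-wide hash path prefix/suffix into the digest, omitted here): the object's storage location derives from a hash of its full /account/container/object path, so renaming the account changes where the data lives, not just what it is called.

```python
import hashlib

def object_hash(account, container, obj):
    # Simplified stand-in for Swift's path hashing: location is a pure
    # function of the full object path, with no lookup table in between.
    path = "/%s/%s/%s" % (account, container, obj)
    return hashlib.md5(path.encode("utf-8")).hexdigest()

before = object_hash("AUTH_old", "photos", "cat.jpg")
after = object_hash("AUTH_new", "photos", "cat.jpg")
# Same container/object name, but a renamed account hashes elsewhere:
print(before != after)  # -> True
```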

I made sure I was watching the swauth repo [2] - if you open any issues
there I'll try to keep an eye on them.  Thanks!

-Clay

1. http://www.w3.org/Provider/Style/URI.html
2. https://github.com/gholt/swauth/issues
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Sean Dague
On 03/11/2015 11:26 AM, Clark Boylan wrote:
 On Wed, Mar 11, 2015, at 05:59 AM, Sean Dague wrote:
 Galera Upstream Testing
 ---

 The majority of deploys run with Galera MySQL. There was a question
 about whether or not we could get that into upstream testing pipeline
 as that's the common case.

 It should be possible to run a two node galera cluster with garbd in the
 current two node test environment. Or run an extra mysql on one of the
 two nodes for a total of three database daemons (likely the compute node
 as the controller already has a bunch of memory stealing daemons
 running).
 
 These aiopcpu (all in one plus compute node) tests run against
 devstack-gate, devstack, tempest, nova, and neutron's experimental
 queues and can be triggered by leaving a `check experimental` comment on
 changes belonging to any of these projects. If we need to add the job to
 the experimental queues of more projects we can do that too.

Can we sanely run a 2 node galera in a single node devstack? It might
make for a more interesting default there given that time and again this
seems to be the deployment strategy for nearly everyone, and there are
some edge case differences that Jay Pipes has highlighted that would be
nice to be more in front of everyone.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Tim Bell


 -Original Message-
 From: Jeremy Stanley [mailto:fu...@yuggoth.org]
 Sent: 11 March 2015 20:40
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Avoiding regression in project governance
 
 On 2015-03-11 19:06:21 + (+), Tim Bell wrote:
 [...]
  I think we can make this work. Assuming more than N (to my mind 
  5 or so) deployments report they are using project X, we can say that
  this is used in production/POC/... and the number of
  nodes/hypervisors/etc.
 
  This makes it concrete and anonymous to avoid the fishing queries.
  It also allows our community to enter what they are doing in one place
  rather than answering multiple surveys. I am keen to avoid generic
  queries such as How many hypervisors are installed for public clouds
  using Xen but if we have an agreement that 5 avoids company
  identification, I feel this is feasible.
 [...]
 
 I'm mildly concerned that this adds a strong incentive to start gaming 
 responses
 to/participation in the user survey going forward, once it gets around that 
 you
 just need N people to take the survey and claim to be using this project in
 production so that it can get the coveted production-ready tag. I'm 
 probably a
 little paranoid and certainly would prefer to assume good faith on the part of
 everyone in our community, but as the community continues to grow that faith
 gets spread thinner and thinner.

Agreed on the worry... I'd hope that gaming the user survey would be relatively 
difficult and is already a risk.

However, there are lots of other motivations for influencing the user survey 
and we need to address these anyway.

I don't think it is perfect or universal but it seems better than a doodle for 
each project which is easier to influence.

Tim 
 --
 Jeremy Stanley
 
 _
 _
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread Carl Baldwin
On Wed, Mar 11, 2015 at 2:54 PM, John Belamaric jbelama...@infoblox.com wrote:
 I was proposing that the reference driver not support it either, and we
 only handle that use case via the non-pluggable implementation in Kilo,
 waiting until Liberty to handle it in the pluggable implementation.
 However, I don't think it's particularly harmful to do it either way so I
 am OK with this.

I see.  That is a bit different than what I had understood.  After the
responses on this thread and some other private responses, I'm not
sure we'll ever be able to not support it for the reference driver.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Doug Hellmann


On Wed, Mar 11, 2015, at 01:47 PM, Mike Bayer wrote:
 Hello Cinder -
 
 I'd like to note that for issue
 https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that
 actually solves the problem for Cinder is scheduled to be committed
 anywhere. The patch I proposed for oslo.db is on hold, and the patch
 proposed for oslo.incubator in the service code will not fix this issue
 for Cinder; it will only make it fail harder and faster.
 
 I've taken myself off as the assignee on this issue, as someone on the
 Cinder team should really propose the best fix of all, which is to call
 engine.dispose() when first entering a new child fork. Related issues
 are already being reported, such as
 https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
 unreliable on startup and this should be considered a critical issue.

That service code is actually from the Oslo incubator, so we should fix
it there. If we can always call engine.dispose(), then let's put that
into the incubated module and see how it works in cinder (submit the
patch to the incubator, then update cinder's copy and submit a patch
with Depends-On set to point to the patch to the incubator so the copy
in cinder won't be updated until we land the fix in the incubator).
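The pattern under discussion can be sketched as follows. This is a hedged illustration: Engine here is a minimal stand-in, not SQLAlchemy; with the real library the child process would call engine.dispose() immediately after forking so it never reuses pooled connections inherited from the parent.

```python
import os

class Engine(object):
    # Minimal stand-in for a pooled DB engine.
    def __init__(self):
        self.pool = []                  # stands in for pooled connections

    def connect(self):
        conn = "conn-%d" % os.getpid()  # tag connections with the creating pid
        self.pool.append(conn)
        return conn

    def dispose(self):
        self.pool = []                  # drop inherited pool state; fresh
                                        # connections are made lazily afterwards

engine = Engine()
engine.connect()  # parent warms the pool before any fork happens

def child_main(engine):
    # First thing in a freshly forked child: discard the parent's pool so
    # parent and child never share the same connection/file descriptor.
    engine.dispose()
    return engine.connect()

print(child_main(engine))  # a connection created in (and owned by) the child
```

The key invariant is that no connection object ever crosses a process boundary; everything before the dispose() call belongs to the parent, everything after it to the child.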

Doug

 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Duncan Thomas
Discussed in the weekly meeting, plenty of sponsors, agreed to try to get
it through today

On 11 March 2015 at 18:48, Jay S. Bryant jsbry...@electronicjungle.net
wrote:

 All,

 I will sponsor this.

 Patch just needs to be rebased.

 Jay


 On 03/11/2015 10:20 AM, Tom Barron wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 I hereby solicit a feature freeze exception for the NFS backup review [1].

 Although only about 140 lines of non-test code, this review completes
 the implementation of the NFS backup blueprint [2].  Most of the
 actual work for this blueprint was a refactor of the Swift backup
 driver to
 abstract the backup/restore business logic from the use of the Swift
 object store itself as the backup repository.  With the help of Xing
 Yang, Jay Bryant, and Ivan Kolodyazhny, that review [3] merged
 yesterday and made the K3 FFE deadline.

 In evaluating this FFE request, please take into account the following
 considerations:

 * Without the second review, the goal of the blueprint remains
   unfulfilled.

 * This code was upstream in January and was essentially complete
   with only superficial changes since then.

 * As of March 5 this review had two core +2s.  Delay since then has
   been entirely due to wait time on the dependent review and need
   for rebase given upstream code churn.

 * No risk is added outside NFS backup service itself since the
   changes to current code are all in the core refactor and that
   is already merged.

 If this FFE is granted, I will give the required rebase my immediate
 attention.

 Thanks.

 - --
 Tom Barron
 t...@dyncloud.net

 [1] - https://review.openstack.org/#/c/149726
 [2] - https://blueprints.launchpad.net/cinder/+spec/nfs-backup
 [3] - https://review.openstack.org/#/c/149725
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

 iQEcBAEBAgAGBQJVAF1YAAoJEGeKBzAeUxEHayUH/2iWuOiKBnRauX40fwcR7+js
 lIM+qRIHlg2iJ+cnqap6HHUhBSxwHnuAV41zQmFBKnfhc3sIqS98ZSVlUaJQtct/
 YjjInKOpxFOEw1FgoFMsrg0qm76zFMXXVIKNegy2iXgXsKzDTWed5n57N8FAP2+6
 q/uASOZNHgxbeZLV7LSKS21/3WUoQpIQiW0+1GtkVtO1C9t8Io+TrjlZj7T60kHJ
 UEH5HShKE0U40SKhgwRyEK7HqbMDGv8w5SsUgyUntdgDlQycgyI/erKm5WJqcZsF
 F6om6HY3oxtulcjbrWmA6+ENnOYsLchXFT8fZeLj7JWOarv5SF2fBQFTqzc/36U=
 =/FVr
 -END PGP SIGNATURE-

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Eric Harney
On 03/11/2015 03:37 PM, Mike Bayer wrote:
 
 
 Mike Perez thin...@gmail.com wrote:
 
 On 11:49 Wed 11 Mar , Walter A. Boring IV wrote:
 We have this patch in review currently.   I think this one should
 'fix' it no?

 Please review.

 https://review.openstack.org/#/c/163551/

 Looks like it to me. Would appreciate a +1 from Mike Bayer before we push 
 this
 through. Thanks for all your time on this Mike.
 
 I have a question there, since I don't know the scope of "Base": is this
 "Base" constructor generally called once per Python process? It's OK if it's
 called a little more than that, but if it's called on like every service
 request or something, then those engine.dispose() calls are not the right
 approach; you'd instead just turn off pooling altogether, because otherwise
 you're spending tons of time creating and destroying connection pools that
 aren't even used as pools. You want the "engine" to be re-used across
 requests and everything else as much as possible, *except* across process
 boundaries.
 

I don't see it used anywhere that isn't a long-standing service; it's
only used by service and API managers, and BackupDrivers. So it should be
OK in this regard.





[openstack-dev] [Solum] Should app names be unique?

2015-03-11 Thread Devdatta Kulkarni
Hi Solum team,


In yesterday's team meeting, the question came up of whether Solum should
enforce a unique app name constraint within a tenant.


As a reminder, in Solum one can create an 'app' using:

solum app create --plan-file plan-file --name app-name


Currently Solum does support creating multiple apps with the same name.

However, in yesterday's meeting we were debating/discussing whether this should 
be the case.

The meeting log is available here:

http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-03-10-21.00.log.html



To set the context for discussion, consider the following:

- heroku does not allow creating another app with the same name as that of an 
already existing app

- github does not allow creating another repository with the same name as that 
of an already existing repo


Thinking about why this might be the case for Heroku, one aspect that comes
to mind is the setting of a 'remote' using the app name. When we do a 'git
push', it happens to this remote.
When we don't specify a remote in 'git push' command, git defaults to using the 
'origin' remote.
Even if multiple remotes with the same name were possible, with an implicit
command such as 'git push', where some of the input comes from the context,
the system would not be able to disambiguate which remote to use.
So requiring unique names ensures that there is no ambiguity when using such
implicit commands.
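The uniqueness constraint git itself imposes on remote names makes this concrete: any tool that maps app names to git remotes inherits that constraint. A quick illustration (the app name 'myapp' and the URLs are hypothetical, and this assumes git is installed):

```shell
# git refuses a second remote with an existing name, so a deployment tool
# that derives remote names from app names needs unique app names.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git remote add myapp https://example.com/apps/myapp.git
if git remote add myapp https://example.com/apps/other.git 2>/dev/null; then
    echo "duplicate remote allowed"
else
    echo "duplicate remote rejected"
fi
```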
This might also be the reason why on GitHub we cannot create a repository
with an already existing name.

But this is just a guess for why unique names might be required. I could be 
totally off.

I think Solum's use case is similar.

Admittedly, Solum currently does not host application repositories, so there
is no question of Solum-generated remotes. But allowing non-unique app names
might make it difficult to support such a feature in the future.

As an aside, I checked what position other OpenStack services take on this
issue.
1) Heat enforces a unique stack-name constraint.
2) Nova does not enforce this constraint.


So it is clear that within OpenStack there is no consistency on this issue.


What should Solum do?


Thoughts?


Best regards,

Devdatta




Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Tim Bell
 -Original Message-
 From: Sylvain Bauza [mailto:sba...@redhat.com]
 Sent: 11 March 2015 17:53
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] readout from Philly Operators Meetup
 
 Thanks Sean for writing up this report, greatly appreciated.
 Comments inline.
 
 Le 11/03/2015 13:59, Sean Dague a écrit :
  The last couple of days I was at the Operators Meetup acting as Nova
  rep for the meeting. All the sessions were quite nicely recorded to
  etherpads here - https://etherpad.openstack.org/p/PHL-ops-meetup
 
  There was both a specific Nova session -
  https://etherpad.openstack.org/p/PHL-ops-nova-feedback as well as a
  bunch of relevant pieces of information in other sessions.
 
  This is an attempt for some summary here, anyone else that was in
  attendance please feel free to correct if I'm interpreting something
  incorrectly. There was a lot of content there, so this is in no way a
  comprehensive list, just the highlights that I think make the most
  sense for the Nova team.
 
  =
Nova Network -> Neutron
  =
 
  This remains listed as the #1 issue from the Operator Community on
  their burning issues list
  (https://etherpad.openstack.org/p/PHL-ops-burning-issues L18). During
  the tags conversation we straw polled the audience
  (https://etherpad.openstack.org/p/PHL-ops-tags L45) and about 75% of
  attendees were over on neutron already. However those on Nova Network
  were disproportionately the largest clusters and longest-standing
  OpenStack users.
 
  Of those on nova-network about 1/2 had no interest in being on Neutron
  (https://etherpad.openstack.org/p/PHL-ops-nova-feedback
  L24). Some of the primary reasons were the following:
 
  - Complexity concerns - neutron has a lot more moving parts
  - Performance concerns - nova multihost means there is very little
 between guests and the fabric, which is really important for the HPC
 workload use case for OpenStack.
  - Don't want OVS - ovs adds additional complexity, and performance
 concerns. Many large sites are moving off ovs back to linux bridge
 with neutron because they are hitting OVS scaling limits (especially
 if on UDP) - (https://etherpad.openstack.org/p/PHL-ops-OVS L142)
 
  The biggest disconnect in the model seems to be that Neutron assumes
  you want self service networking. Most of these deploys don't. Or even
  more importantly, they live in an organization where that is never
  going to be an option.
 
  Neutron provider networks are close, except they don't provide for
  floating IP / NAT.
 
  Going forward: I think the gap analysis probably needs to be revisited
  with some of the vocal large deployers. I think we assumed the
  functional parity gap was closed with DVR, but it's not clear that in its
  current format it actually meets the n-net multihost users' needs.
 
  ===
EC2 going forward
  ===
 
  Having a sustainable EC2 is of high interest to the operator
  community. Many large deploys have some users that were using AWS
  prior to using OpenStack, or currently are using both. They have
  preexisting tooling for that.
 
  There didn't seem to be any objection to the approach of an external
  proxy service for this function -
  (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L111). Mostly
  the question is timing, and the fact that no one has validated the
  stackforge project. The fact that we landed everything people need to
  run this in Kilo is good, as these production deploys will be able to
  test it for their users when they upgrade.
 
  
Burning Nova Features/Bugs
  
 
  Hierarchical Projects Quotas
  
 
  Hugely desired feature by the operator community
  (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L116). Missed
  Kilo. This made everyone sad.
 
  Action: we should queue this up as early Liberty priority item.
 
  Out of sync Quotas
  --
 
  https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
 
  The quotas code is quite racy (this is kind of a known issue if you look at
  the bug tracker). It was actually marked as a top soft spot during
  last fall's bug triage -
  http://lists.openstack.org/pipermail/openstack-dev/2014-September/0465
  17.html
 
  There is an operator proposed spec for an approach here -
  https://review.openstack.org/#/c/161782/
 
  Action: we should make a solution here a top priority for enhanced
  testing and fixing in Liberty. Addressing this would remove a lot of
  pain from ops.
 
  Reporting on Scheduler Fails
  
 
  Apparently, some time recently, we stopped logging scheduler fails
  above DEBUG, and that behavior also snuck back into Juno as well
  (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This has
  made tracking down root cause of failures far 

Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-11 Thread Clay Gerrard
On Mon, Mar 9, 2015 at 12:27 PM, Weidong Shao weidongs...@gmail.com wrote:


 I noticed swauth project is not actively maintained. In my local testing,
 swauth did not work after I upgraded swift to latest.


Hrm... I think gholt would be open to patches/support. I know of a number
of deployers of Swauth, so if there are issues we should try to
enumerate them.


 I want to migrate off swauth. What are the auth alternative beside
 tempauth?



Keystone.  The only other systems I know about are proprietary - what are
your needs?


 On account-to-account server-side copy, is there an operation that is
 similar to mv? i.e., I want the data associated with an account to assume
 ownership of  a new account, but I do not want to copy the actual data on
 the disks.



The account URL is encoded in the object hash, so the only realistic way to
change the location (account/container/object) of an entity in Swift is to
read from its current location, write it to the new location, and then
delete the old object.
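Clay's point about the account URL being baked into the object hash can be sketched with a simplified stand-in for Swift's hash_path (the real function lives in swift.common.utils and mixes in per-cluster secret prefix/suffix values from swift.conf; the suffix below is just a placeholder):

```python
import hashlib

HASH_PATH_SUFFIX = b'changeme'  # placeholder for the per-cluster secret


def hash_path(account, container, obj):
    # Simplified sketch of swift.common.utils.hash_path: the account name
    # is part of the hashed path, so the same container/object under a
    # different account hashes to a completely different on-disk location.
    path = '/' + '/'.join((account, container, obj))
    return hashlib.md5(path.encode('utf-8') + HASH_PATH_SUFFIX).hexdigest()


old_loc = hash_path('AUTH_alice', 'photos', 'cat.jpg')
new_loc = hash_path('AUTH_bob', 'photos', 'cat.jpg')
# old_loc != new_loc: "moving" data to a new account changes every object
# hash, which is why a read-and-rewrite copy (then delete) is unavoidable.
```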

-Clay


Re: [openstack-dev] Mellanox request for permission for Nova CI

2015-03-11 Thread Joe Gordon
On Wed, Mar 11, 2015 at 12:49 AM, Nurit Vilosny nur...@mellanox.com wrote:

  Hi ,

 I would like to ask for permission for our CI to start commenting on the Nova branch.

 Mellanox has been engaged in PCI pass-through features for quite some time now.

 We have had an operating Neutron CI for ~2 years, and since the PCI
 pass-through features are part of Nova as well, we would like to start
 monitoring Nova’s patches.

 Our CI has been silently running locally over the past couple of weeks,
 and I would like to step ahead and start commenting in a *non-voting
 mode*.

 During this period we will closely monitor our systems and be ready to
 solve any problem that might occur.


Do you have a link to the output of your testing system, so we can check
what it's testing, etc.?




 Thanks,

 Nurit Vilosny

 SW Cloud Solutions Manager



 Mellanox Technologies

 13 Zarchin St. Raanana, Israel

 Office: 972-74-712-9410

 Cell: 972-54-4713000

 Fax: 972-74-712-9111









Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-11 Thread Kyle Mestery
On Wed, Mar 4, 2015 at 1:42 PM, Kyle Mestery mest...@mestery.com wrote:

 I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron; he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.

 I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.

 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to
 the core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90


It's been a week, and Ihar has received 11 +1 votes. I'd like to welcome
Ihar as the newest Neutron Core Reviewer!

Thanks,
Kyle


Re: [openstack-dev] new failures running Barbican functional tests

2015-03-11 Thread Doug Hellmann


On Tue, Mar 10, 2015, at 05:50 PM, Douglas Mendizabal wrote:
 Thanks for the insight, other Doug. :)  It appears that this is in part
 due to the fact that Tempest has not yet updated to oslo_log and is still
 using incubator oslo.log.  Can someone from the Tempest team chime in on
 what the status of migrating to oslo_log is?

I am working on a patch to update tempest to use all of the Oslo
libraries instead of the older incubated modules:
https://review.openstack.org/#/c/163549/1

 It’s imperative for us to fix our gate, since we’re blocked from landing
 any code, which just over a week away from a milestone release is a major
 hindrance.
 
 Thanks,
 -Doug Mendizábal
 
 
 Douglas Mendizábal
 IRC: redrobot
 PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C
 
  On Mar 9, 2015, at 8:58 PM, Doug Hellmann d...@doughellmann.com wrote:
  
  
  
  On Mon, Mar 9, 2015, at 05:39 PM, Steve Heyman wrote:
  We are getting issues running barbican functional tests - seems to have
  started sometime between Thursday last week (3/5) and today (3/9)
  
  Seems that oslo config giving us DuplicateOptErrors now.  Our functional
  tests use oslo config (via tempest) as well as barbican server code.
  Looking into it...seems that oslo_config is 1.9.1 and oslo_log is 1.0.0
  and a system I have working has oslo_config 1.9 and oslo_log at 0.4 (both
  with same barbican code).
  
  We released oslo.log today with an updated setting for the default log
  levels for third-party libraries. The new option in the library
  conflicts with the old definition of the option in the incubated code,
  so if you have some dependencies using oslo.log and some using the
  incubated library you'll see this error.
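The conflict described above can be sketched without oslo.config itself: a config registry typically tolerates re-registering an identical option but rejects a conflicting definition, which is what happens when oslo.log and the incubated module each register default_log_levels with different defaults. MiniConf below is a simplified stand-in, not the real oslo_config.cfg API:

```python
class DuplicateOptError(Exception):
    pass


class MiniConf:
    # Simplified stand-in for a config registry such as oslo_config's
    # ConfigOpts: identical re-registration is a no-op, but a conflicting
    # definition raises DuplicateOptError.
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default):
        if name in self._opts and self._opts[name] != default:
            raise DuplicateOptError(name)
        self._opts[name] = default


conf = MiniConf()
# The incubated log module registers its default levels first...
conf.register_opt('default_log_levels', ['amqp=WARN', 'qpid=WARN'])
# ...then the oslo.log library registers the same option with an updated
# default, and the mismatch surfaces as DuplicateOptError.
try:
    conf.register_opt('default_log_levels', ['amqp=WARN', 'stevedore=WARN'])
except DuplicateOptError as exc:
    print('conflicting re-registration rejected:', exc)
```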
  
  Updating from the incubated version to the library is not complex, but
  it's not just a matter of changing a few imports. There are some
  migration notes in this review: https://review.openstack.org/#/c/147312/
  
  Let me know if you run into issues or need a hand with a review.
  
  Also getting Failure: ArgsAlreadyParsedError (arguments already parsed:
  cannot register CLI option)which seems to be related.
  
  This is probably an import order issue. After a ConfigOpts object has
  been called to parse the command line you cannot register new command
  line options. It's possible the problem is actually caused by the same
  module conflict, since both log modules register command line options. I
  would need a full traceback to know for sure.
  
  Doug
  
  Is this a known issue? Is there a launchpad bug yet?
  
  Thanks!
  
  
 



Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-11 Thread Edgar Magana
Congratulations Ihar!

Edgar

From: Kyle Mestery mest...@mestery.commailto:mest...@mestery.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Wednesday, March 11, 2015 at 1:54 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a 
Neutron Core Reviewer

On Wed, Mar 4, 2015 at 1:42 PM, Kyle Mestery 
mest...@mestery.commailto:mest...@mestery.com wrote:
I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer 
team. Ihar has been doing a great job reviewing in Neutron as evidenced by his
stats [1]. Ihar is the Oslo liaison for Neutron; he's been doing a great job
keeping Neutron current there. He's already a critical reviewer for all the 
Neutron repositories. In addition, he's a stable maintainer. Ihar makes himself 
available in IRC, and has done a great job working with the entire Neutron 
team. His reviews are thoughtful and he really takes time to work with code 
submitters to ensure his feedback is addressed.

I'd also like to again remind everyone that reviewing code is a responsibility, 
in Neutron the same as other projects. And core reviewers are especially 
beholden to this responsibility. I'd also like to point out and reinforce that 
+1/-1 reviews are super useful, and I encourage everyone to continue reviewing 
code across Neutron as well as the other OpenStack projects, regardless of your 
status as a core reviewer on these projects.

Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the 
core reviewer team.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/90

It's been a week, and Ihar has received 11 +1 votes. I'd like to welcome Ihar 
as the newest Neutron Core Reviewer!

Thanks,
Kyle


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Mike Bayer


Joshua Harlow harlo...@outlook.com wrote:

 I've also got the following up:
 
 https://review.openstack.org/#/c/162781/
 
 Which tries to force file descriptors to be closed, although I'm unsure
 where the tempest results went (and am rechecking that); maybe it breaks so
 badly that tempest doesn't even run?

OK, I guess if the descriptors are closed then the pool, because we have a
lot of reconnection stuff set up in oslo.db, will figure it out, catch the
error, and try again, without actually propagating the error. So it would
sort of “fix” the problem rather than fail hard. But Cinder should really
at least prevent this whole condition from occurring in the first place.
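Mike's suggested fix, calling engine.dispose() when first entering a new child fork so a worker never reuses pooled connections inherited from its parent, can be sketched like this. FakeEngine is a stand-in for a SQLAlchemy Engine; a real service would dispose the engine created by oslo.db in each worker's post-fork setup:

```python
import os


class FakeConnection:
    def __init__(self):
        self.owner_pid = os.getpid()  # process that actually opened the socket


class FakeEngine:
    # Stand-in for a SQLAlchemy Engine with a connection pool. After
    # os.fork(), pooled connections in the child still wrap sockets the
    # parent opened, so both processes would write to the same descriptors.
    def __init__(self):
        self._pool = [FakeConnection()]

    def dispose(self):
        # Drop every pooled connection; the next checkout opens a fresh
        # one owned by the current process.
        self._pool = []

    def connect(self):
        if not self._pool:
            self._pool.append(FakeConnection())
        return self._pool[-1]


engine = FakeEngine()  # created once in the parent, before any forking


def post_fork_init(engine):
    """First thing each forked worker should do, before touching the DB."""
    engine.dispose()


post_fork_init(engine)
conn = engine.connect()  # freshly opened, not inherited from the parent
```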


 -Josh
 
 Mike Bayer wrote:
 Hello Cinder -
 
 I’d like to note that for issue
 https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that actually
 solves the problem for Cinder is scheduled to be committed anywhere. The
 patch I proposed for oslo.db is on hold, and the patch proposed for
 oslo.incubator in the service code will not fix this issue for Cinder, it
 will only make it fail harder and faster.
 
 I’ve taken myself off as the assignee on this issue, as someone on the
 Cinder team should really propose the best fix of all which is to call
 engine.dispose() when first entering a new child fork. Related issues are
 already being reported, such as
 https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
 unreliable on startup and this should be considered a critical issue.
 
 
 



Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Jeremy Stanley
On 2015-03-11 19:06:21 + (+), Tim Bell wrote:
[...]
 I think we can make this work. Assuming more than N (to my mind 
 5 or so) deployments report they are using project X, we can say
 that this is used in production/POC/... and the number of
 nodes/hypervisors/etc.
 
 This makes it concrete and anonymous to avoid the fishing queries.
 It also allows our community to enter what they are doing in one
 place rather than answering multiple surveys. I am keen to avoid
 generic queries such as "How many hypervisors are installed for
 public clouds using Xen", but if we have an agreement that 5
 avoids company identification, I feel this is feasible.
[...]

I'm mildly concerned that this adds a strong incentive to start
gaming responses to/participation in the user survey going forward,
once it gets around that you just need N people to take the survey
and claim to be using this project in production so that it can get
the coveted production-ready tag. I'm probably a little paranoid
and certainly would prefer to assume good faith on the part of
everyone in our community, but as the community continues to grow
that faith gets spread thinner and thinner.
-- 
Jeremy Stanley



Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Stefano Maffulli
On Tue, 2015-03-10 at 15:23 -0700, James E. Blair wrote:
 The holy grail of this system would be the suitable for production
 deployment tag, but no one has figured out how to define it yet.

Are crazy ideas welcome in this phase?

I start with 2 below: 

Preface: an idea circulates about visually displaying in a web page the
projects.yaml file and the tags in there. Visitors would be able to
browse the list of projects and sort, pick, search and find what they
need from a nice representation of the 'big tent'. 

1) how about we pull the popularity of OpenStack projects as reported in
the User Survey and display such number on the page where we list the
projects? What if, together with the objective tags managed by TC and
community at large, we show also the number of known deployment as
guidance?

2) there are some 'fairly objective' indicators of quality of open
source code, developed in a handful of academic projects that I'm aware
of (Calipso and sos-opensource.org come to mind, but there are others).
Maybe we can build a tool that pulls those metrics from each of our
repositories and provides more guidance to visitors so they can form
their own mind?

Nobody can really vouch for 'production ready', but we can probably provide
data for someone to get a more informed opinion. Too crazy?

.stef




Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Joe Gordon
On Wed, Mar 11, 2015 at 5:59 AM, Sean Dague s...@dague.net wrote:

 The last couple of days I was at the Operators Meetup acting as Nova
 rep for the meeting. All the sessions were quite nicely recorded to
 etherpads here - https://etherpad.openstack.org/p/PHL-ops-meetup

 There was both a specific Nova session -
 https://etherpad.openstack.org/p/PHL-ops-nova-feedback as well as a
 bunch of relevant pieces of information in other sessions.

 This is an attempt for some summary here, anyone else that was in
 attendance please feel free to correct if I'm interpreting something
 incorrectly. There was a lot of content there, so this is in no way a
 comprehensive list, just the highlights that I think make the most
 sense for the Nova team.

 =
  Nova Network -> Neutron
 =

 This remains listed as the #1 issue from the Operator Community on
 their burning issues list
 (https://etherpad.openstack.org/p/PHL-ops-burning-issues L18). During
 the tags conversation we straw polled the audience
 (https://etherpad.openstack.org/p/PHL-ops-tags L45) and about 75% of
 attendees were over on neutron already. However those on Nova Network
 were disproportionately the largest clusters and longest-standing
 OpenStack users.

 Of those on nova-network about 1/2 had no interest in being on
 Neutron (https://etherpad.openstack.org/p/PHL-ops-nova-feedback
 L24). Some of the primary reasons were the following:

 - Complexity concerns - neutron has a lot more moving parts
 - Performance concerns - nova multihost means there is very little
   between guests and the fabric, which is really important for the HPC
   workload use case for OpenStack.
 - Don't want OVS - ovs adds additional complexity, and performance
   concerns. Many large sites are moving off ovs back to linux bridge
   with neutron because they are hitting OVS scaling limits (especially
   if on UDP) - (https://etherpad.openstack.org/p/PHL-ops-OVS L142)

 The biggest disconnect in the model seems to be that Neutron assumes
 you want self service networking. Most of these deploys don't. Or even
 more importantly, they live in an organization where that is never
 going to be an option.

 Neutron provider networks are close, except they don't provide for
 floating IP / NAT.

 Going forward: I think the gap analysis probably needs to be revisited
 with some of the vocal large deployers. I think we assumed the
 functional parity gap was closed with DVR, but it's not clear that in its
 current format it actually meets the n-net multihost users' needs.

 ===
  EC2 going forward
 ===

 Having a sustainable EC2 is of high interest to the operator
 community. Many large deploys have some users that were using AWS
 prior to using OpenStack, or currently are using both. They have
 preexisting tooling for that.

 There didn't seem to be any objection to the approach of an external
 proxy service for this function -
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L111). Mostly
 the question is timing, and the fact that no one has validated the
 stackforge project. The fact that we landed everything people need to
 run this in Kilo is good, as these production deploys will be able to
 test it for their users when they upgrade.

 
  Burning Nova Features/Bugs
 

 Hierarchical Projects Quotas
 

 Hugely desired feature by the operator community
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L116). Missed
 Kilo. This made everyone sad.

 Action: we should queue this up as early Liberty priority item.

 Out of sync Quotas
 --

 https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63

 The quotas code is quite racy (this is kind of a known issue if you look at
 the bug tracker). It was actually marked as a top soft spot during
 last fall's bug triage -

 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

 There is an operator proposed spec for an approach here -
 https://review.openstack.org/#/c/161782/

 Action: we should make a solution here a top priority for enhanced
 testing and fixing in Liberty. Addressing this would remove a lot of
 pain from ops.


To help us better track quota bugs I created a quotas tag:

https://bugs.launchpad.net/nova/+bugs?field.tag=quotas

Next step is re-triage those bugs: mark fixed bugs as fixed, deduplicate
bugs etc.


 Reporting on Scheduler Fails
 

 Apparently, some time recently, we stopped logging scheduler fails
 above DEBUG, and that behavior also snuck back into Juno as well
 (https://etherpad.openstack.org/p/PHL-ops-nova-feedback L78). This
 has made tracking down root cause of failures far more difficult.

 Action: this should hopefully be a quick fix we can get in for Kilo
 and backport.

 =
  Additional Interesting Bits
 =

 Rabbit
 --

 

Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Walter A. Boring IV
We have this patch in review currently. I think this one should 'fix'
it, no?


Please review.

https://review.openstack.org/#/c/163551/

Walt

On 03/11/2015 10:47 AM, Mike Bayer wrote:

Hello Cinder -

I’d like to note that for issue
https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that actually
solves the problem for Cinder is scheduled to be committed anywhere. The
patch I proposed for oslo.db is on hold, and the patch proposed for
oslo.incubator in the service code will not fix this issue for Cinder, it
will only make it fail harder and faster.

I’ve taken myself off as the assignee on this issue, as someone on the
Cinder team should really propose the best fix of all which is to call
engine.dispose() when first entering a new child fork. Related issues are
already being reported, such as
https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
unreliable on startup and this should be considered a critical issue.







Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Mike Perez
On 11:49 Wed 11 Mar , Walter A. Boring IV wrote:
 We have this patch in review currently.   I think this one should
 'fix' it no?
 
 Please review.
 
 https://review.openstack.org/#/c/163551/

Looks like it to me. Would appreciate a +1 from Mike Bayer before we push this
through. Thanks for all your time on this Mike.

-- 
Mike Perez



[openstack-dev] [Keystone] keystonemiddleware 1.5.0 released

2015-03-11 Thread Morgan Fainberg
The Keystone development community would like to announce the release of 
keystonemiddleware 1.5.0.

This release can be installed from the following locations: 
* http://tarballs.openstack.org/keystonemiddleware 
* https://pypi.python.org/pypi/keystonemiddleware 

Detailed changes in this release:
https://launchpad.net/keystonemiddleware/+milestone/1.5.0


-- 
Morgan Fainberg


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Joshua Harlow

I've also got the following up:

https://review.openstack.org/#/c/162781/

Which tries to force file descriptors to be closed, although I'm unsure
where the tempest results went (and am rechecking that); maybe it breaks
so badly that tempest doesn't even run?


-Josh

Mike Bayer wrote:

Hello Cinder -

I’d like to note that for issue
https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that actually
solves the problem for Cinder is scheduled to be committed anywhere. The
patch I proposed for oslo.db is on hold, and the patch proposed for
oslo.incubator in the service code will not fix this issue for Cinder, it
will only make it fail harder and faster.

I’ve taken myself off as the assignee on this issue, as someone on the
Cinder team should really propose the best fix of all which is to call
engine.dispose() when first entering a new child fork. Related issues are
already being reported, such as
https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
unreliable on startup and this should be considered a critical issue.
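For illustration, the dispose-on-fork pattern Mike describes looks roughly like this. The `Engine` class below is a stand-in for illustration only; real code would call `dispose()` on the SQLAlchemy engine the parent process created, as the first thing the child does:

```python
import os

class Engine:
    """Stand-in for a SQLAlchemy Engine (illustrative only)."""
    def __init__(self):
        # Connections opened in the parent; sharing their sockets across
        # a fork corrupts the database protocol stream.
        self.pool = ["conn-from-pid-%d" % os.getpid()]

    def dispose(self):
        # Drop every pooled connection so this process lazily opens
        # fresh connections of its own.
        self.pool = []

engine = Engine()

pid = os.fork()
if pid == 0:
    engine.dispose()  # first thing on entering the child fork
    os._exit(0 if engine.pool == [] else 1)

_, status = os.waitpid(pid, 0)
print("child started clean:", os.WEXITSTATUS(status) == 0)
```

The parent keeps its own pool untouched; only the child discards the inherited connections and reconnects on demand.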






Re: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-11 Thread Ian Wells
On 11 March 2015 at 04:27, Fredy Neeser fredy.nee...@solnet.ch wrote:

 7: br-ex.1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state
 UNKNOWN group default
 link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
valid_lft forever preferred_lft forever

 8: br-ex.12: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1554 qdisc noqueue
 state UNKNOWN group default
 link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.12
valid_lft forever preferred_lft forever


I find it hard to believe that you want the same address configured on
*both* of these interfaces - which one do you think will be sending packets?

You may find that configuring a VLAN interface for eth1.12 (not in a
bridge, with a local address suitable for communication with compute nodes,
for VXLAN traffic) and eth1.1 (in br-ex, for external traffic to use) does
better for you.

I'm also not clear what your Openstack API endpoint address or MTU is -
maybe that's why the eth1.1 interface is addressed?  I can tell you that if
you want your API to be on the same address 192.168.1.14 as the VXLAN
tunnel endpoints then it has to be one address on one interface and the two
functions will share the same MTU - almost certainly not what you're
looking for.  If you source VXLAN packets from a different IP address then
you can put it on a different interface and give it a different MTU - which
appears to fit what you want much better.
-- 
Ian.
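For concreteness, Ian's suggestion might look something like the following. This is a sketch only: the interface names, VLAN IDs, and the VXLAN endpoint address are made up for illustration and would need to match the actual deployment:

```shell
# VLAN 12 carries VXLAN traffic and gets the extra 54 bytes of headroom;
# its local endpoint address is separate from the external address.
ip link add link eth1 name eth1.12 type vlan id 12
ip link set dev eth1.12 mtu 1554
ip addr add 192.168.12.14/24 dev eth1.12
ip link set dev eth1.12 up

# VLAN 1 carries external traffic and is plugged into br-ex;
# the external address then lives on br-ex, not on eth1.1.
ip link add link eth1 name eth1.1 type vlan id 1
ip link set dev eth1.1 up
ovs-vsctl add-port br-ex eth1.1
```

With the two functions on separate interfaces, each can carry its own address and MTU, which is the separation Ian describes.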


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread Carl Baldwin
John,

I think our proposals fit together nicely.  This thread is about
allowing overlap within a pool.  I think it is fine for an external
IPAM driver to disallow such overlap for now.  However, the reference
implementation must support it for backward compatibility and so my
proposal will account for that.

The consequence is that your driver will receive the subnet_id as a
parameter to get_subnet.  It is free to ignore this parameter.
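As a sketch of that interface shape (class and method names here are illustrative, not the actual Neutron IPAM driver API): the reference implementation keys on both identifiers so overlapping CIDRs can coexist, while an external driver that forbids overlap can resolve by CIDR alone and ignore subnet_id:

```python
class RefIpamDriver:
    """Illustrative reference driver allowing overlapping CIDRs."""

    def __init__(self):
        # Keyed on (cidr, subnet_id) so the same CIDR may appear twice.
        self._subnets = {}

    def allocate_subnet(self, cidr, subnet_id):
        self._subnets[(cidr, subnet_id)] = {"cidr": cidr, "id": subnet_id}

    def get_subnet(self, cidr, subnet_id=None):
        # cidr is the primary identifier; subnet_id is the alternate one.
        if subnet_id is not None:
            return self._subnets[(cidr, subnet_id)]
        # A driver that disallows overlap can ignore subnet_id entirely
        # and resolve by CIDR alone.
        matches = [s for (c, _sid), s in self._subnets.items() if c == cidr]
        assert len(matches) == 1, "overlapping CIDRs need subnet_id"
        return matches[0]

driver = RefIpamDriver()
driver.allocate_subnet("10.0.0.0/24", "subnet-a")
driver.allocate_subnet("10.0.0.0/24", "subnet-b")   # overlap allowed
print(driver.get_subnet("10.0.0.0/24", "subnet-b")["id"])
```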

I'd still like to hear from Salvatore on this before I update the interface.

Carl

On Wed, Mar 11, 2015 at 11:08 AM, John Belamaric
jbelama...@infoblox.com wrote:
 On 3/12/15, 12:46 AM, Carl Baldwin c...@ecbaldwin.net wrote:


When talking with external IPAM to get a subnet, Neutron will pass
both the cidr as the primary identifier and the subnet_id as an
alternate identifier.  External systems that do not allow overlap can


 Recall that IPAM driver instances are associated with a specific subnet
 pool. As long as we do not allow overlap within a *pool* this is not
 necessary. The pool will imply the scope (currently implicit, with one per
 tenant), which the driver/external system would use to differentiate the
 CIDR. As I mentioned in an earlier email, this would introduce the
 uniqueness constraint in Kilo, but only when pluggable IPAM is enabled.

 John







Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Tim Bell
 -Original Message-
 From: Stefano Maffulli [mailto:stef...@openstack.org]
 Sent: 11 March 2015 03:16
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Avoiding regression in project governance
 
 On Tue, 2015-03-10 at 15:23 -0700, James E. Blair wrote:
  The holy grail of this system would be the "suitable for production
  deployment" tag, but no one has figured out how to define it yet.
 
 Are crazy ideas welcome in this phase?
 
 I start with 2 below:
 
 Preface: an idea circulates about visually displaying in a web page the
 projects.yaml file and the tags in there. Visitors would be able to browse
 the list of projects and sort, pick, search and find what they need from a
 nice representation of the 'big tent'.
 
 1) how about we pull the popularity of OpenStack projects as reported in the
 User Survey and display that number on the page where we list the projects?
 What if, together with the objective tags managed by the TC and community at
 large, we also show the number of known deployments as guidance?
 

I think we can make this work. Assuming more than N (to my mind, 5 or so) 
deployments report they are using project X, we can say that it is used in 
production/POC/... along with the number of nodes/hypervisors/etc.

This makes it concrete and anonymous to avoid fishing queries. It also 
allows our community to enter what they are doing in one place rather than 
answering multiple surveys. I am keen to avoid generic queries such as "How 
many hypervisors are installed for public clouds using Xen?", but if we have 
an agreement that N > 5 avoids company identification, I feel this is feasible.

It does help address the maturity question concretely. If it's in prod in 200 
deployments, I would consider this to be reasonably mature. If there is only 1, 
I would worry.

 2) there are some 'fairly objective' indicators of quality of open source
 code, developed in a handful of academic projects that I'm aware of (Calipso
 and sos-opensource.org come to mind, but there are others). Maybe we can
 build a tool that pulls those metrics from each of our repositories and
 provides more guidance to visitors so they can form their own opinions?
 
 Nobody can really vouch for 'production ready', but probably we can provide
 data for someone to get a more informed opinion. Too crazy?
 

If an operator says that they are using this for their production cloud and 
there is a reasonable profile of scalability, this is a strong (but not 
guaranteed) endorsement for me. There could be influence but given the survey 
results can be scrutinised in more detail by the people with NDA access, it 
would discourage this behaviour.

 .stef
 
 



Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Doug Hellmann


On Tue, Mar 10, 2015, at 10:16 PM, Stefano Maffulli wrote:
 On Tue, 2015-03-10 at 15:23 -0700, James E. Blair wrote:
   The holy grail of this system would be the "suitable for production
   deployment" tag, but no one has figured out how to define it yet.
 
 Are crazy ideas welcome in this phase?
 
 I start with 2 below: 
 
 Preface: an idea circulates about visually displaying in a web page the
 projects.yaml file and the tags in there. Visitors would be able to
 browse the list of projects and sort, pick, search and find what they
 need from a nice representation of the 'big tent'. 
 
 1) how about we pull the popularity of OpenStack projects as reported in
 the User Survey and display that number on the page where we list the
 projects? What if, together with the objective tags managed by the TC and
 community at large, we also show the number of known deployments as
 guidance?
 
 2) there are some 'fairly objective' indicators of quality of open
 source code, developed in a handful of academic projects that I'm aware
 of (Calipso and sos-opensource.org come to mind, but there are others).
 Maybe we can build a tool that pulls those metrics from each of our
 repositories and provides more guidance to visitors so they can form
 their own opinions?
 
 Nobody can really vouch for 'production ready', but probably we can provide
 data for someone to get a more informed opinion. Too crazy?

This sort of information would be useful, but I don't think we want to
turn our project governance documentation into a product guide. We
should *have* a product guide, but it should live somewhere else.

Doug

 
 .stef
 
 



Re: [openstack-dev] [kolla] about the image size

2015-03-11 Thread Steven Dake (stdake)


On 3/10/15, 12:22 AM, Bohai (ricky) bo...@huawei.com wrote:

Hi, stackers

I tried to use the Kolla images, pulling them down from Docker Hub.
I found that the images are bigger than I expected (for example, the
conductor service image is about 1.4 GB).

Is it possible to get smaller images? Is there a plan to minimize them?

Best regards to you.
Ricky


The images are quite large mainly because the base Fedora 20 image is very
large.  There are CentOS images available for download which average about
750 MB each.

Regards
-steve






Re: [openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Mike Perez
On 11:20 Wed 11 Mar , Tom Barron wrote:
 
 I hereby solicit a feature freeze exception for the NFS backup review [1].

This was granted by the community in the Cinder IRC meeting this morning (sorry,
only the raw log is available, due to IRC bot issues):

http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-11-16.00.log.txt

-- 
Mike Perez



Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-11 Thread Assaf Muller
I've filed a bug here:
https://bugs.launchpad.net/neutron/+bug/1430984

I've outlined the path I'd like to take in the bug description.

- Original Message -
 +1 on avoiding changes that break rolling upgrade.
 
 Rolling upgrade has been working so far (at least from my perspective), and
 as openstack adoption spreads, it will be important for more and more users.
 
 How do we make rolling upgrade a supported part of Neutron?

Finding a sane way to test it would be a start. I'm still looking...

 
 - Jack
 
  -Original Message-
  From: Assaf Muller [mailto:amul...@redhat.com]
  Sent: Thursday, March 05, 2015 11:59 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to
  Kilo due
  to agent report_state RPC namespace patch
  
  
  
  - Original Message -
   To turn this stuff off, you don't need to revert.  I'd suggest just
   setting the namespace contants to None, and that will result in the same
   thing.
  
  
   http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
  
   It's definitely a non-backwards compatible change.  That was a conscious
   choice as the interfaces are a bit of a tangled mess, IMO.  The
   non-backwards compatible changes were simpler so I went that route,
   because as far as I could tell, rolling upgrades were not supported.  If
    they do work, it's due to luck.  There are multiple things, from the
    lack of testing of this scenario to the lack of data versioning, that
    make it a pretty shaky area.
  
   However, if it worked for some people, I totally get the argument
   against breaking it intentionally.  As mentioned before, a quick fix if
   needed is to just set the namespace constants to None.  If someone wants
   to do something to make it backwards compatible, that's even better.
  
  
  I sent out an email to the operators list to get some feedback:
  http://lists.openstack.org/pipermail/openstack-operators/2015-March/006429.html
  
  And at least one operator reported that he performed a rolling Neutron
  upgrade
  from I to J successfully. So, I'm agreeing with you agreeing with me that
  we
  probably don't want to mess this up knowingly, even though there is no
  testing
  to make sure that it keeps working.
  
  I'll follow up on IRC with you to figure out who's doing what.
  
   --
   Russell Bryant
  
   On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
To put in another way I think we might say that change 154670 broke
backward compatibility on the RPC interface.
To be fair this probably happened because RPC interfaces were organised
in a way such that this kind of breakage was unavoidable.
   
I think the strategy proposed by Assaf is a viable one. The point about
being able to do rolling upgrades only from version N to N+1 is a
sensible one, but it has more to do with general backward compability
rules for RPC interfaces.
   
In the meanwhile this is breaking a typical upgrade scenario. If a fix
allowing agent state updates both namespaced and not is available today
or tomorrow, that's fine. Otherwise I'd revert just to be safe.
   
By the way, we were supposed to have already removed all server rpc
callbacks in the appropriate package... did we forget about this one, or
is there a reason it's still in neutron.db?
   
Salvatore
   
On 4 March 2015 at 17:23, Miguel Ángel Ajo majop...@redhat.com wrote:
   
I agree with Assaf, this is an issue across updates, and
we may want (if that’s technically possible) to provide
access to those functions with/without namespace.
   
Or otherwise think about reverting for now until we find a
migration strategy
   
   
  https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
   
   
Best regards,
Miguel Ángel Ajo
   
On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:
   
Hello everyone,
   
I'd like to highlight an issue with:
https://review.openstack.org/#/c/154670/
   
According to my understanding, most deployments upgrade the
controllers first
and compute/network nodes later. During that time period, all
agents will
fail to report state as they're sending the report_state message
outside
of any namespace while the server is expecting that message in a
namespace.
This is a show stopper as the Neutron server will think all of its
agents are dead.
   
I think the obvious solution is to modify the Neutron server code
so that
it accepts the report_state method both in and outside of the
report_state
RPC namespace and chuck that code away in L (Assuming we support
rolling upgrades
   

[openstack-dev] [nova][neutron][nfv] is there any reason neutron.allow_duplicate_networks should not be True by default?

2015-03-11 Thread Matt Riedemann
While looking at some other problems yesterday [1][2] I stumbled across 
this feature change in Juno [3] which adds a config option 
allow_duplicate_networks to the [neutron] group in nova. The default 
value is False, but according to the spec [4] neutron allows this and 
it's just exposing a feature available in neutron via nova when creating 
an instance (create the instance with 2 ports from the same network).


My question then is: why do we have a config option to toggle a feature 
that is already supported in neutron and that really just turns a 
failure case into a success case, which is generally considered OK by 
our API change guidelines [5]?


I'm wondering if there is anything about this use case that breaks other 
NFV use cases, maybe something with SR-IOV / PCI?  If not, I plan on 
pushing a change to deprecate the option in Kilo and remove it in 
Liberty with the default being to allow the operation.


That's barring getting some tests in Tempest to cover the change which 
were promised in the spec but never showed up in Juno, or Kilo, so you 
know, shame shame shame.


[1] https://bugs.launchpad.net/nova/+bug/1430481
[2] https://bugs.launchpad.net/nova/+bug/1430512
[3] https://review.openstack.org/#/c/98488/
[4] https://review.openstack.org/#/c/97716/
[5] 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
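For reference, the use case in question is requesting the same Neutron network twice at boot, roughly as below (the UUID is a placeholder; on Juno this needs allow_duplicate_networks = True in the [neutron] section of nova.conf):

```shell
NET=6a2b63f2-0000-4c2b-9a7e-000000000000   # placeholder network UUID

nova boot --flavor m1.small --image cirros \
    --nic net-id=$NET --nic net-id=$NET twin-nic-vm
```

With the option at its False default, the same request fails instead of creating an instance with two ports on the network.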


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Solum] Should app names be unique?

2015-03-11 Thread Adrian Otto
I agree that we should pursue an implementation that allows for duplicate 
names, as I think that use case makes sense. I manage several application 
deployments that run different concurrent versions of the same application 
during blue/green and canary deployment, and I routinely use duplicate names 
with unique tags to represent them while using unique identifiers to act on 
them individually.

Our agreement in our 2015-03-10 team meeting allows operators to select on a 
per-tenant basis whether unique name constraints are imposed at resource 
creation time. It also allows tenants to change the setting if they prefer the 
alternative to the selected setting. What we set the system default to should 
be informed first by our best guess, and revisited later as-needed based on 
user feedback. Like Keith, I’m happy to try the unique name constraint as on by 
default to begin with.

Adrian

On Mar 11, 2015, at 1:48 PM, Keith Bray keith.b...@rackspace.com wrote:

Dev, thanks for bringing up the item about Heat enforcing unique stack names.
My mistake on thinking it supported non-unique stack names. I remember it
working early on, but it probably got changed/fixed somewhere along the way.

My argument in IRC was one based on consistency with related/similar 
projects... So, as Murali pointed out, if things aren't consistent within 
OpenStack, then that certainly leaves much more leeway in my opinion for Solum 
to determine its own path without concern for falling in line with what the 
other projects have done (since a precedent can't be established).

To be honest, I don't agree with the argument about github, however.  Github 
(and also Heroku) are using URLs, which are Unique IDs.  I caution against 
conflating a URL with a name, where a URL in the case of github serves both 
purposes, but each (both a name and an ID) have merit as standalone 
representations.

I am happy to give my support to enforcing unique names as the Solum default, 
but I continue to highly encourage things be architected in a way that 
non-unique names could be supported in the future on at least a per-tenant 
basis, should that need become validated by customer asks.

Kind regards,
-Keith

From: Murali Allada murali.all...@rackspace.com
Reply-To: openstack-dev@lists.openstack.org
Date: Wednesday, March 11, 2015 2:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Should app names be unique?

The only reason this came up yesterday is that we wanted Solum's 'app create'
behavior to be consistent with other OpenStack services.

However, if Heat has a unique stack name constraint and glance/nova don't, then
the argument of consistency does not hold.

I'm still of the opinion that we should have a unique name constraint for apps 
and languagepacks within a tenants namespace, as it can get very confusing if a 
user creates multiple apps with the same name.

Also, customer research done here at Rackspace has shown that users prefer 
using 'names' rather than 'UUIDs'.

-Murali



From: Devdatta Kulkarni devdatta.kulka...@rackspace.com
Sent: Wednesday, March 11, 2015 2:48 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum] Should app names be unique?

Hi Solum team,

In yesterday's team meeting, the question of whether Solum should enforce a
unique app name constraint within a tenant came up.

As a reminder, in Solum one can create an 'app' using:
solum app create --plan-file plan-file --name app-name

Currently Solum does support creating multiple apps with the same name.
However, in yesterday's meeting we were debating/discussing whether this should 
be the case.
The meeting log is available here:
http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-03-10-21.00.log.html


To set the context for discussion, consider the following:
- heroku does not allow creating another app with the same name as that of an 
already existing app
- github does not allow creating another repository with the same name as that 
of an already existing repo

Thinking about why this might be the case for Heroku, one aspect that comes to
mind is the setting of a 'remote' using the app name. When we do a 'git push',
it happens to this remote.
When we don't specify a remote in 'git push' command, git defaults to using the 
'origin' remote.
Even if multiple remotes with the same name were possible, when using an
implicit command such as 'git push', in which some of the input comes from
the context, the system would not be able to disambiguate which remote to
use.

Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-11 Thread Ihar Hrachyshka

I also agree the wildcards do not look solid, and without clear use cases
with reasonable demand from operators, it's better to avoid putting
ourselves in the position of supporting the feature for eternity.

We had cases of immature API design rushed into the tree before
(particularly, ipv6 ra|addr modes for subnets) resulting in long fire
fighting and confusing behaviour that we're stuck in, and it's better
to avoid it here if possible.

/Ihar

On 03/10/2015 09:03 PM, Tidwell, Ryan wrote:
 I agree with dropping support for the wildcards.  It can always be
 revisited later. I agree that being locked into backward
 compatibility with a design that we really haven't thought through
 is a good thing to avoid.  Most importantly (to me anyway) is that
 this will help in getting subnet allocation completed for Kilo. We
 can iterate on it later, but at least the base functionality will
 be there.
 
 -Ryan
 
 -Original Message- From: Carl Baldwin
 [mailto:c...@ecbaldwin.net] Sent: Tuesday, March 10, 2015 11:44 AM 
 To: OpenStack Development Mailing List (not for usage questions) 
 Subject: Re: [openstack-dev] [api][neutron] Best API for generating
 subnets from pool
 
 On Tue, Mar 10, 2015 at 12:24 PM, Salvatore Orlando
 sorla...@nicira.com wrote:
 I guess that frustration has now become part of the norm for
 Openstack. It is not the first time I frustrate people because I
 ask to reconsider decisions approved in specifications.
 
 I'm okay revisiting decisions.  It is just the timing that is
 difficult.
 
 This is probably bad behaviour on my side. Anyway, I'm not
 suggesting to go back to the drawing board, merely trying to get
 larger feedback, especially since that patch should always have
 had the ApiImpact flag.
 
 It did have the ApiImpact flag since PS1 [1].
 
 Needless to say, I'm happy to proceed with things as they've been
 agreed.
 
 I'm happy to discuss and I value your input very highly.  I was
 just hoping that it had come at a better time to react.
 
 There is nothing intrinsically wrong with it - in the sense that
 it does not impact the functional behaviour of the system. My
 comment is about RESTful API guidelines. What we pass to/from the
 API endpoint is a resource, in this case the subnet being
 created. You expect gateway_ip to be always one thing - a gateway
 address, whereas with the wildcarded design it could be an
 address or an incremental counter within a range, but with the
 counter being valid only in request objects. Differences in
 entities between requests and responses are however fairly common
 in RESTful APIs, so if the wildcards satisfy a concrete and
 valid use case I will stop complaining, but I'm not sure I see
 any use case for wildcarded gateways and allocation pools.
 
 Let's drop the use case and the wildcards as we've discussed.
 
 Also, there might also be backward-compatible ways of switching
 from one approach to another, in which case I'm happy to keep
 things as they are and relieve Ryan from yet another worry.
 
 I think dropping the use case for now allows us the most freedom
 and doesn't commit us to supporting backward compatibility for a
 decision that may end up proving to be a mistake in API design.
 
 Carl
 
 
 



Re: [openstack-dev] [Ironic] Thinking about our python client UX

2015-03-11 Thread Ruby Loo
On 11 March 2015 at 18:21, Robert Collins robe...@robertcollins.net wrote:
...

 Since there was no debate on the compat thing, I've thrown up an
 etherpad to start the discussion.

 https://etherpad.openstack.org/p/ironic-client-ux


Thanks Rob. Michael Davies has a spec [1] that discusses how a client
interacts with Ironic (pre-microversion and post-microversion). Maybe
we can move our discussion to that spec? I was hoping we'd be able to
decide and implement *something* for the kilo release, but maybe I am
being overly optimistic.

--ruby

[1] https://review.openstack.org/#/c/161110/



Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Joe Gordon
On Wed, Mar 11, 2015 at 4:07 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 On 03/11/2015 07:48 PM, Joe Gordon wrote:
  Out of sync Quotas --
 
  https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
 
  The quotas code is quite racy (this is kind of a known issue if you look
  at the bug tracker). It was actually marked as a top soft spot
  during last fall's bug triage -
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html
 
   There is an operator proposed spec for an approach here -
  https://review.openstack.org/#/c/161782/
 
  Action: we should make a solution here a top priority for enhanced
  testing and fixing in Liberty. Addressing this would remove a lot
  of pain from ops.
 
 
  To help us better track quota bugs I created a quotas tag:
 
  https://bugs.launchpad.net/nova/+bugs?field.tag=quotas
 
  Next step is re-triage those bugs: mark fixed bugs as fixed,
  deduplicate bugs etc.

 (Being quite far from nova code, so ignore if not applicable)

 I would like to note that other services experience races in quota
 management too. Neutron has a spec approved to land in Kilo-3 that is
 designed to introduce a new quota enforcement mechanism that is
 expected to avoid (some of those) races:


 https://github.com/openstack/neutron-specs/blob/master/specs/kilo/better-quotas.rst

 I thought you may be interested in looking into it to apply similar
 ideas to nova.


Working on a library for this hasn't been ruled out yet. But right now I am
simply trying to figure out how to reproduce the issue, and nothing else.
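For context on the race itself: the classic failure mode is a read-then-write quota check, and one common remedy is a conditional UPDATE that enforces the limit atomically. A generic compare-and-swap sketch (not nova's quota code, and the schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quota (project TEXT PRIMARY KEY, in_use INT, hard_limit INT)")
con.execute("INSERT INTO quota VALUES ('p1', 9, 10)")

def reserve(con, project, amount):
    # Atomic conditional UPDATE: succeeds only if the limit still holds,
    # so two concurrent reservations cannot both slip past the check.
    cur = con.execute(
        "UPDATE quota SET in_use = in_use + ? "
        "WHERE project = ? AND in_use + ? <= hard_limit",
        (amount, project, amount))
    return cur.rowcount == 1

print(reserve(con, "p1", 1))  # True: 9 + 1 <= 10
print(reserve(con, "p1", 1))  # False: limit reached, nothing updated
```

The check-then-update version of the same logic, done as two separate statements, is where the over-quota races creep in.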



 /Ihar




Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-11 Thread Ihar Hrachyshka

On 03/11/2015 06:54 PM, Kyle Mestery wrote:
 On Wed, Mar 4, 2015 at 1:42 PM, Kyle Mestery mest...@mestery.com wrote:
 
 I'd like to propose that we add Ihar Hrachyshka to the Neutron
 core reviewer team. Ihar has been doing a great job reviewing in
 Neutron as evidenced by his stats [1]. Ihar is the Oslo liaison for
 Neutron, he's been doing a great job keeping Neutron current there.
 He's already a critical reviewer for all the Neutron repositories.
 In addition, he's a stable maintainer. Ihar makes himself available
 in IRC, and has done a great job working with the entire Neutron
 team. His reviews are thoughtful and he really takes time to work
 with code submitters to ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a 
 responsibility, in Neutron the same as other projects. And core 
 reviewers are especially beholden to this responsibility. I'd also 
 like to point out and reinforce that +1/-1 reviews are super
 useful, and I encourage everyone to continue reviewing code across
 Neutron as well as the other OpenStack projects, regardless of your
 status as a core reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add 
 Ihar to the core reviewer team.
 
 Thanks! Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90
 
 
 It's been a week, and Ihar has received 11 +1 votes. I'd like to
 welcome Ihar as the newest Neutron Core Reviewer!
 

Thanks, everyone, for considering me for the position.

I am really happy to join the team, and I will try to do my best
serving the community with reviews and what not.

Thanks again,
/Ihar



Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread Kevin Benton
My concern is that we are introducing new objects in Neutron that are
scoped to a tenant and we don't have anything else like that right now. For
example, I can create 100 3-tier topologies (router + 3 subnets/networks)
with duplicated names, CIDRs, etc. between all of them and it doesn't matter
if I do it on 100 different tenants or all under the same tenant.

Put a different way, tenants don't mean anything in the Neutron data model.
They are just a tag to use to enforce quotas, policies, and filters on
incoming API requests. Uniqueness shouldn't come from 'x' + tenant_id.

It seems like what is being introduced here is going to fundamentally
change that by forcing the creation of separate tenants to have overlapping
IPs. I think the barrier should be very high to introduce anything that
does that.

Can you elaborate why tenants are used for uniqueness for IPAM instead of
having a separate grouping mechanism?
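Carl's point about the CIDR no longer being a usable handle can be sketched with Python's stdlib `ipaddress` module. This is not Neutron code, just an illustration: once a tenant may reuse a CIDR, the CIDR maps to more than one subnet, so IPAM needs a separate unique identifier (e.g. the subnet id), and overlap has to be detected explicitly.

```python
# Sketch (not Neutron code) of the problem under discussion: if a
# tenant may reuse a CIDR, the CIDR alone cannot be the handle an IPAM
# backend uses to identify a subnet, and overlap must be detected
# explicitly.
import ipaddress
import uuid

subnets = {}  # subnet_id -> (tenant_id, network)

def create_subnet(tenant_id, cidr):
    net = ipaddress.ip_network(cidr)
    sid = str(uuid.uuid4())
    subnets[sid] = (tenant_id, net)
    return sid

def overlapping(tenant_id, cidr):
    """Return ids of the tenant's subnets whose CIDRs overlap `cidr`."""
    net = ipaddress.ip_network(cidr)
    return [sid for sid, (t, other) in subnets.items()
            if t == tenant_id and net.overlaps(other)]

a = create_subnet('tenant-1', '10.0.0.0/24')
b = create_subnet('tenant-1', '10.0.0.0/24')  # same CIDR, same tenant
# Both subnets exist, so '10.0.0.0/24' maps to two distinct ids:
print(len(overlapping('tenant-1', '10.0.0.128/25')))  # 2
```

The same `overlaps()` check is what a router-attach validation would use if overlap were allowed generally but rejected on a shared router, as suggested later in the thread.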

On Wed, Mar 11, 2015 at 3:24 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 On 03/10/2015 06:34 PM, Gabriel Bezerra wrote:
  On 10.03.2015 14:24, Carl Baldwin wrote:
  Neutron currently does not enforce the uniqueness, or
  non-overlap, of subnet cidrs within the address scope for a
  single tenant.  For example, if a tenant chooses to use
  10.0.0.0/24 on more than one subnet, he or she is free to do so.
  Problems will arise when trying to connect a router between these
  subnets but that is left up to the tenant to work out.
 
  In the current IPAM rework, we had decided to allow this overlap
  in the reference implementation for backward compatibility.
  However, we've hit a snag.  It would be convenient to use the
  subnet cidr as the handle with which to refer to a previously
  allocated subnet when talking to IPAM.  If overlap is allowed,
  this is not possible and we need to come up with another
  identifier such as Neutron's subnet_id or another unique IPAM
  specific ID.  It could be a burden on an external IPAM system --
  which does not allow overlap -- to work with a completely
  separate identifier for a subnet.
 
  I do not know of anyone using this capability (or mis-feature)
  of Neutron.  I would hope that tenants are aware of the issues
  with trying to route between subnets with overlapping address
  spaces and would avoid it.  Is this potential overlap something
  that we should really be worried about?  Could we just add the
  assumption that subnets do not overlap within a tenant's scope?
 
  An important thing to note is that this topic is different than
  allowing overlap of cidrs between tenants.  Neutron will continue
  to allow overlap of addresses between tenants and support the
  isolation of these address spaces.  The IPAM rework will support
  this.
 
  Carl Baldwin
 
 
  I'd vote for allowing the overlap rather than such a restriction, but
  throwing an error when creating a router between the subnets.
 
  I can imagine a tenant running multiple instances of an
  application, each one with its own network that uses the same
  address range, to minimize configuration differences between them.
 

 I agree with Gabriel on the matter. There is nothing inherently wrong
 about a tenant running multiple isolated network setups that use
 overlapping addresses (as there is nothing wrong about multiple
 tenants doing the same).

 There seems to be a value in disallowing overlapping subnets attached
 to the same router though.

 All in all, there seems to be no need to limit neutron API just
 because most external IPAM implementations do not seem to care about
 supporting the use case.

 It's also an issue that the change proposed is backwards incompatible.

 /Ihar





-- 
Kevin Benton


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread John Belamaric
This has been settled and we're not moving forward with it for Kilo. I agree 
tenants are an administrative concept, not a networking one, so using them for 
uniqueness doesn't really make sense.

In Liberty we are proposing a new grouping mechanism, as you call it, 
specifically for the purpose of defining uniqueness - address scopes. This 
would be owned by a tenant but could be shared across tenants. It's still in 
the early stages of definition though, and more discussion is needed, but that 
should probably wait until after Kilo is out!


Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-11 Thread Ihar Hrachyshka

Should we target it for Kilo? It does not seem right to let it
slip into the next release while we know there are operators
relying on the feature.

On 03/11/2015 08:42 PM, Assaf Muller wrote:
 I've filed a bug here: 
 https://bugs.launchpad.net/neutron/+bug/1430984
 
 I've outlined the path I'd like to take in the bug description.
 
 - Original Message -
 +1 on avoiding changes that break rolling upgrade.
 
 Rolling upgrade has been working so far (at least from my
 perspective), and as openstack adoption spreads, it will be
 important for more and more users.
 
 How do we make rolling upgrade a supported part of Neutron?
 
 Finding a sane way to test it would be a start. I'm still
 looking...
 
 
 - Jack
 
 -Original Message- From: Assaf Muller
 [mailto:amul...@redhat.com] Sent: Thursday, March 05, 2015
 11:59 AM To: OpenStack Development Mailing List (not for usage
 questions) Subject: Re: [openstack-dev] [Neutron] Issue when
 upgrading from Juno to Kilo due to agent report_state RPC
 namespace patch
 
 
 
 - Original Message -
 To turn this stuff off, you don't need to revert.  I'd
 suggest just setting the namespace contants to None, and that
 will result in the same thing.
 
 
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
 
 It's definitely a non-backwards compatible change.  That was
 a conscious choice as the interfaces are a bit of a tangled
 mess, IMO.  The non-backwards compatible changes were simpler
 so I went that route, because as far as I could tell, rolling
 upgrades were not supported.  If they do work, it's due to
 luck.  There are multiple things, from the lack of testing for
 this scenario to the lack of data versioning, that make it a
 pretty shaky area.
 
 However, if it worked for some people, I totally get the
 argument against breaking it intentionally.  As mentioned
 before, a quick fix if needed is to just set the namespace
 constants to None.  If someone wants to do something to make
 it backwards compatible, that's even better.
 
 
 I sent out an email to the operators list to get some
 feedback: 
 http://lists.openstack.org/pipermail/openstack-operators/2015-March/006429.html


 
And at least one operator reported that he performed a rolling Neutron
 upgrade from I to J successfully. So, I'm agreeing with you
 agreeing with me that we probably don't want to mess this up
 knowingly, even though there is no testing to make sure that it
 keeps working.
 
 I'll follow up on IRC with you to figure out who's doing what.
 
 -- Russell Bryant
 
 On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
 To put it another way, I think we might say that change
 154670 broke backward compatibility on the RPC interface. 
 To be fair this probably happened because RPC interfaces
 were organised in a way such that this kind of breakage was
 unavoidable.
 
 I think the strategy proposed by Assaf is a viable one. The
 point about being able to do rolling upgrades only from
 version N to N+1 is a sensible one, but it has more to do
 with general backward compability rules for RPC
 interfaces.
 
 In the meanwhile this is breaking a typical upgrade
 scenario. If a fix allowing agent state updates both
 namespaced and not is available today or tomorrow, that's
 fine. Otherwise I'd revert just to be safe.
 
 By the way, we were supposed to have already removed all
 server rpc callbacks in the appropriate package... did we
 forget about this one, or is there a reason it's
 still in neutron.db?
 
 Salvatore
 
 On 4 March 2015 at 17:23, Miguel Ángel Ajo
 majop...@redhat.com wrote:
 
 I agree with Assaf, this is an issue across updates, and we
 may want (if that’s technically possible) to provide access
 to those functions with/without namespace.
 
 Or otherwise think about reverting for now until we find a 
 migration strategy
 
 
  https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
 
 
 Best regards, Miguel Ángel Ajo
 
  On Wednesday, 4 March 2015 at 17:00, Assaf Muller
 wrote:
 
 Hello everyone,
 
 I'd like to highlight an issue with: 
 https://review.openstack.org/#/c/154670/
 
 According to my understanding, most deployments upgrade
 the controllers first and compute/network nodes later.
 During that time period, all agents will fail to report
 state as they're sending the report_state message 
 outside of any namespace while the server is expecting
 that message in a namespace. This is a show stopper as
 the Neutron server will think all of its agents are
 dead.
 
 I think the obvious solution is to modify the Neutron
 server code so that it accepts the report_state method
 both in and outside of the report_state RPC namespace and
 chuck that code away in L (Assuming we support rolling
 upgrades only from version N to N+1, which while is
 unfortunate, is the behavior I've seen 
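The compatibility shim Assaf proposes can be modelled with a toy dispatcher. This is deliberately not oslo.messaging code (and the namespace name below is hypothetical); it only shows the idea of accepting `report_state` both inside and outside the new RPC namespace so old agents keep working against an upgraded server.

```python
# Minimal model (not actual oslo.messaging code) of the compatibility
# fix discussed here: the server dispatches report_state whether or
# not the caller sent the new RPC namespace, so old Juno agents keep
# reporting state against a Kilo server during a rolling upgrade.

REPORT_NAMESPACE = 'state_report'  # hypothetical namespace name

class StateReportServer:
    def report_state(self, agent_state):
        return {'agent': agent_state, 'status': 'alive'}

    def dispatch(self, method, namespace, **kwargs):
        # Accept both the namespaced (new agents) and un-namespaced
        # (old agents) form of the call during the transition period.
        if namespace not in (None, REPORT_NAMESPACE):
            raise ValueError('unknown namespace: %s' % namespace)
        return getattr(self, method)(**kwargs)

server = StateReportServer()
new = server.dispatch('report_state', REPORT_NAMESPACE, agent_state='l3-agent')
old = server.dispatch('report_state', None, agent_state='l3-agent')
print(new == old)  # True -- both call paths reach the same handler
```

The server-side shim can then be dropped in L, once no supported upgrade path still contains agents that send the un-namespaced form.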

Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-11 Thread Adam Gandelman
Big +2 on both of those, but I will leave it up to Alan to make the final call.
Cheers

On Wed, Mar 11, 2015 at 3:42 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 On 03/11/2015 12:21 PM, Alan Pevec wrote:
  Hi,
 
  next Icehouse stable point release 2014.1.4 has been slipping last
  few weeks due to various gate issues, see Recently closed section
  in https://etherpad.openstack.org/p/stable-tracker for details.
  Branch looks good enough now to push the release tomorrow
  (Thursdays are traditional release days) and I've put freeze -2s on
  the open reviews. I'm sorry about the short freeze period but
  branch was effectively frozen last two weeks due to gate issues and
  further delay doesn't make sense. Attached is the output from the
  stable_freeze script for thawing after tags are pushed.
 
  At the same time I'd like to propose following freeze exceptions
  for the review by stable-maint-core:
 
  * https://review.openstack.org/144714 - Eventlet green threads not
  released back to pool Justification: while not OSSA fix, it does
  have SecurityImpact tag
 
  * https://review.openstack.org/163035 - [OSSA 2015-005] Websocket
  Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259)
  Justification: pending merge on master and juno
 
 

 There seems to be a regression [1] introduced into Icehouse and Juno
 trees by backporting [2] that may leave instances in the db that are not
 deletable (and we have no force-delete in Icehouse). The fix is merged
 in master and backported to both stable branches [3]. I suggest we
 consider it as an exception for the upcoming release to avoid the
 regression in it.

 [1]: https://bugs.launchpad.net/nova/+bug/1423952
 [2]:

 https://review.openstack.org/#/q/Ife712c43c5a61424bc68b2f5ab47cefdb46ac168,n,z
 [3]:

 https://review.openstack.org/#/q/I70f464120c798422f9a3d601b7cdf3b0a8320690,n,z

 Comments?

 Thanks,
 /Ihar



Re: [openstack-dev] [nova][neutron][nfv] is there any reason neutron.allow_duplicate_networks should not be True by default?

2015-03-11 Thread Ian Wells
On 11 March 2015 at 10:56, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

 While looking at some other problems yesterday [1][2] I stumbled across
 this feature change in Juno [3] which adds a config option
 allow_duplicate_networks to the [neutron] group in nova. The default
 value is False, but according to the spec [4] neutron allows this and it's
 just exposing a feature available in neutron via nova when creating an
 instance (create the instance with 2 ports from the same network).

 My question then is why do we have a config option to toggle a feature
 that is already supported in neutron and is really just turning a failure
 case into a success case, which is generally considered OK by our API
 change guidelines [5].

 I'm wondering if there is anything about this use case that breaks other
 NFV use cases, maybe something with SR-IOV / PCI?  If not, I plan on
 pushing a change to deprecate the option in Kilo and remove it in Liberty
 with the default being to allow the operation.


This was all down to backward compatibility.

Nova didn't allow two interfaces on the same Neutron network.  We tried to
change this by filing a bug, and the patches got rejected because the
original behaviour was claimed to be intentional and desirable.  (It's not
clear that it was intentional behaviour because it was never documented,
but the same lack of documented intent meant it's also not clear it was a
bug, so the situation was ambiguous.)

Eventually it was fixed as new functionality using a spec [1] so that the
change and reasoning could be clearly described, and because of the
previous concerns, Racha, who implemented the spec, additionally chose to
use a config item to preserve the original behaviour unless the new one was
explicitly requested.
-- 
Ian.

[1]
https://review.openstack.org/#/c/97716/5/specs/juno/nfv-multiple-if-1-net.rst
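The toggle Ian describes amounts to a duplicate-network check that is skipped when the option is enabled. The sketch below is not nova's actual code path; it just illustrates how a config flag like `allow_duplicate_networks` preserves the old rejection behaviour by default while allowing two ports on the same network when explicitly requested.

```python
# Toy sketch (not nova's actual code path) of the backward-compat
# toggle described above: requests for two ports on the same network
# fail unless allow_duplicate_networks is set, preserving the old
# behaviour by default.

def validate_networks(requested_network_ids, allow_duplicate_networks=False):
    """Mimic the Juno-era check: reject duplicate network ids unless
    the operator opted in to the new behaviour."""
    if not allow_duplicate_networks:
        seen = set()
        for net_id in requested_network_ids:
            if net_id in seen:
                raise ValueError('duplicate network %s not allowed' % net_id)
            seen.add(net_id)
    return requested_network_ids

# Old default: boot with two NICs on net-a is rejected.
try:
    validate_networks(['net-a', 'net-a'])
except ValueError as exc:
    print(exc)  # duplicate network net-a not allowed

# With the option enabled, the same request succeeds.
print(validate_networks(['net-a', 'net-a'], allow_duplicate_networks=True))
```

Deprecating the option, as Matt suggests, would mean flipping the default to the permissive branch and later deleting the check entirely.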


Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-11 Thread Flavio Percoco

On 11/03/15 12:46 +0100, Ihar Hrachyshka wrote:



+2 to both exceptions, especially the OSSA one. All distributions will
be interested in including the patch in their packages, so it's better
to do it once in upstream.


+2 from me too!










--
@flaper87
Flavio Percoco




[openstack-dev] [Group-Based-Policy] How to deal with improvements

2015-03-11 Thread Ivar Lazzaro
Hello Folks,

As a follow up of [0] I was working on a proposal for adding the same
sharing capabilities to servicechain constructs. While thinking about the
use cases for doing this, a question came to my mind: How should I deal
with this improvement from a process perspective?

I'm not sure adding sharing capabilities to 2 more objects is exactly a new
feature... It is more of a follow up of an existing one! What is the
expected process in this case? Should I create a new spec? Modify the
existing one? Create a detailed launchpad blueprint without a spec?

Please provide guidance, thanks,

Ivar.

[0]
https://github.com/stackforge/group-based-policy-specs/blob/master/specs/juno/introduce-shared-attribute.rst


Re: [openstack-dev] [Glance] Nitpicking in code reviews

2015-03-11 Thread John Bresnahan
FWIW I agree with #3 and #4 but not #1 and #2.  Spelling is an easy 
enough thing to get right and speaks to the quality standard to which 
the product is held even in commit messages and comments (consider the 
'broken window theory').  Of course everyone makes mistakes (I am a 
terrible speller) but correcting a spelling error should be a trivial 
matter.  If a reviewer notices a spelling error I would expect them to 
point it out.


On 3/11/15 2:22 PM, Kuvaja, Erno wrote:

Hi all,

Following the code reviews lately I’ve noticed that we (the fan club
seems to be growing on a weekly basis) have been growing a culture of
nitpicking [1] and bikeshedding [2][3] over almost every single change.

Seriously, my dear friends, the following things are not worth a “-1”
vote, or even a comment:

1) Minor spelling errors on commit messages (as long as the message comes
through and flags are not misspelled).

2) Minor spelling errors on comments (docstrings and documentation is
there and there, but comments, come on).

3) Used syntax that is functional, readable and does not break
consistency but does not please your poem bowel.

4) Other things you “just did not realize to check if they were there”.
After you have gone through the whole change go and look your comments
again and think twice if your concern/question/whatsoever was addressed
somewhere else than where your first intuition would have dropped it.

We have a relatively high volume for glance at the moment and this
nitpicking and bikeshedding does not help anyone. At best it just
tightens nerves and breaks our group. Obviously if there are “you had ONE
job” kind of situations, or there is a relatively high amount of errors
combined with something serious, it’s reasonable to ask to fix the typos on
the way as well. Whether the reason is a need to increase your statistics, a
personal perfectionist nature, or actually I do not care what; just stop,
or go and do it somewhere else.

Love and pink ponies,

-Erno

[1] www.urbandictionary.com/define.php?term=nitpicking
http://www.urbandictionary.com/define.php?term=nitpicking

[2] http://bikeshed.com

[3] http://en.wiktionary.org/wiki/bikeshedding





Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-11 Thread Flavio Percoco

On 11/03/15 23:49 +, Kuvaja, Erno wrote:

I'd prefer 1400 as well. But "A Foolish Consistency is the Hobgoblin of Little 
Minds".


Either work for me, as long as someone pings me :P

Flavio



- Erno


-Original Message-
From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 11 March 2015 20:40
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
time.

I'd prefer to go with 1400 UTC unless there's a majority for 1500UTC.

P.S. It's my feeling that ML announcements and conversations are not
effective when taking a poll from a wider audience, so we'll discuss this a bit
more in the next meeting and merge the votes.

Thanks,
-Nikhil


From: Louis Taylor lo...@kragniz.eu
Sent: Wednesday, March 11, 2015 10:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
time.

On Wed, Mar 11, 2015 at 02:25:26PM +, Ian Cordasco wrote:
 I have no opinions on the matter. Either 1400 or 1500 work for me. I
 think there are a lot of people asking for it to be at 1500 instead though.
 Would anyone object to changing it to 1500 instead (as long as it is
 one consistent time for the meeting)?

I have no problem with that. I'm +1 on a consistent time, but don't mind
when it is.





--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-11 Thread Adrian Otto
In summary, we have decided to use the usage notification events emitted by 
heat (http://tinyurl.com/oultjm5). This should allow us to detect stack action 
completions, and errors (with reasons) which should be enough for a Bay 
implementation that does not need to poll. Stack timeouts can be passed to heat 
as parameters. This is not as elegant as what Angus and Zane suggested because 
it requires Magnum to access all RPC messages. With their proposed approach we 
could use a Zaqar queue that only emits the stack events relevant to that 
user/tenant. This conforms better to the principle of least privilege. Our 
preference for the decided approach is that it allows us tight integration with 
Heat using functionality that exists today. We admit that this implementation 
will be harder to debug than other options.

More remarks in-line.
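The notification-driven approach in the summary could look roughly like the following. The event-type strings and payload shape are assumptions for illustration, not Heat's exact notification schema: the idea is simply to map terminal stack events onto a Bay status instead of polling.

```python
# Rough sketch (assumed event shapes, not Heat's exact payloads) of
# deriving a Bay status from Heat usage notifications instead of
# polling: watch for terminal stack events and map them to a
# Magnum-side state.

TERMINAL = {
    'orchestration.stack.create.end': 'CREATE_COMPLETE',
    'orchestration.stack.create.error': 'CREATE_FAILED',
    'orchestration.stack.delete.end': 'DELETE_COMPLETE',
}

def bay_status(stack_id, events):
    """Scan a notification stream and return the bay's last known
    terminal status for the given stack, or 'IN_PROGRESS'."""
    status = 'IN_PROGRESS'
    for event in events:
        if event.get('payload', {}).get('stack_identity') != stack_id:
            continue  # notification for some other stack
        if event['event_type'] in TERMINAL:
            status = TERMINAL[event['event_type']]
    return status

events = [
    {'event_type': 'orchestration.stack.create.start',
     'payload': {'stack_identity': 'bay-1'}},
    {'event_type': 'orchestration.stack.create.end',
     'payload': {'stack_identity': 'bay-1'}},
]
print(bay_status('bay-1', events))  # CREATE_COMPLETE
```

The filtering step is where the least-privilege concern raised below bites: consuming the raw RPC notification stream means Magnum sees every tenant's stack events, whereas a per-tenant queue would deliver only the relevant ones.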

On Mar 11, 2015, at 2:27 AM, Jay Lau jay.lau@gmail.com wrote:

 
 
 2015-03-10 23:21 GMT+08:00 Hongbin Lu hongbin...@gmail.com:
 Hi Adrian,
 
 On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto adrian.o...@rackspace.com wrote:
 Magnum Team,
 
 In the following review, we have the start of a discussion about how to 
 tackle bay status:
 
 https://review.openstack.org/159546
 
 I think a key issue here is that we are not subscribing to an event feed from 
 Heat to tell us about each state transition, so we have a low degree of 
 confidence that our state will match the actual state of the stack in 
 real-time. At best, we have an eventually consistent state for Bay following 
 a bay creation.
 
 Here are some options for us to consider to solve this:
 
 1) Propose enhancements to Heat (or learn about existing features) to emit a 
 set of notifications upon state changes to stack resources so the state can 
 be mirrored in the Bay resource.
  
 A drawback of this option is that it increases the difficulty of
 trouble-shooting. In my experience using Heat (SoftwareDeployments in
 particular), Ironic and Trove, one of the most frequent errors I encountered
 is that provisioning resources stayed in the "deploying" state (never reached
 "completed"). The reason is that they were waiting for a callback signal from
 the provisioned resource to indicate completion, but the signal was blocked
 for various reasons (e.g. incorrect firewall rules, incorrect configs, etc.).
 Trouble-shooting such problems is generally harder.
 I think that the Heat convergence work is addressing the issues behind your
 concern: https://wiki.openstack.org/wiki/Heat/Blueprints/Convergence

Yes, that would address part of the concern, but not all of it. Depending on the
implementation of state exchange, and the configuration of the network, it may 
be possible to create an installation of the system that does not function at 
all due to network ACLs, and almost no clue to the implementer about why it 
does not work. To mitigate this concern, we would want an implementation that 
does not require a bi-directional network trust relationship between Heat and 
Magnum. Instead, implement it in a way that connections are only made from 
Magnum to Heat, so if the stack creation call succeeds, so can the status 
updates. Perhaps we can revisit this design discussion in a future iteration of 
our Bay status code.
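For concreteness, the one-directional polling approach (option 2) could be sketched roughly as below. This is only an illustration: the `stacks.get(...).stack_status` shape mirrors python-heatclient, but the helper name, the `bay` object, and the status set are assumptions, not Magnum code.

```python
import time

# Rough sketch of option 2: a task that polls Heat and mirrors the stack
# status into the Bay. All connections go from Magnum to Heat only.
# Assumptions: heat_client.stacks.get(id) returns an object with a
# stack_status attribute (as python-heatclient does); "bay" is any object
# with a mutable .status attribute. Names are illustrative.
TERMINAL_STATES = {"CREATE_COMPLETE", "CREATE_FAILED",
                   "UPDATE_COMPLETE", "UPDATE_FAILED",
                   "DELETE_COMPLETE", "DELETE_FAILED"}

def poll_bay_status(heat_client, stack_id, bay, interval=10):
    """Poll until the stack reaches a terminal state; return that state."""
    while True:
        status = heat_client.stacks.get(stack_id).stack_status
        if bay.status != status:
            bay.status = status  # in real code, persist the Bay here
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)
```

The task exits once a terminal state is reached, so the poller's lifetime is bounded by the stack operation it watches.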

Adrian

 
 2) Spawn a task to poll the Heat stack resource for state changes, and 
 express them in the Bay status, and allow that task to exit once the stack 
 reaches its terminal (completed) state.
 
 3) Don’t store any state in the Bay object, and simply query the heat stack 
 for status as needed. 
 
 Are each of these options viable? Are there other options to consider? What 
 are the pro/con arguments for each?
 
 Thanks,
 
 Adrian
 
 
 
 
 
 
 
 
 
 -- 
 Thanks,
 
 Jay Lau (Guangya Liu)




Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Stefano Maffulli
On Wed, 2015-03-11 at 17:59 -0500, Ed Leafe wrote:
 The longer we try to be both sides of this process, the longer we will
 continue to have these back-and-forths about stability vs. innovation.

If I understand your model correctly, it works only for users/operators
who decide to rely on a vendor to consume OpenStack. There are quite
large enterprises out there that consume the code directly as it's
shipped from git.openstack.org, some from trunk, others from the stable
release .tgz: these folks can't count on companies A, B, C or D to put
resources into fixing their problems, because they don't talk to those
companies.

One thing I like about your proposal, though, is when you say:

 So what is production-ready? And how would you trust any such
 designation? I think that it should be the responsibility of groups
 outside of OpenStack development to make that call. 

This problem has been bugging the European authorities for a long time
and they've invested quite a lot of money to find tools that would help
IT managers of the public (and private) sector estimate the quality of
open source code. It's a big deal, in fact, when on one hand you have
Microsoft and IBM sales folks selling your IT managers overpriced stuff
that "just works", and on the other hand you have this Linux thing that
nobody has heard of, that's gratis, that you can find on the web, and that
many say "just works" too... crazy, right? Well, at the time it was, and
to some extent it still is. So the EU has funded lots of research in
this area.

One group of researchers that I happen to be familiar with recently
received another bag of Euros and released code/methodologies to
evaluate and compare open source projects[1]. The principles they use to
evaluate software are not that hard to find and are quite objective. For
example: is there a book published about this project? If there is,
chances are this project is popular enough for a publisher to sell
copies. Is the project's documentation translated into multiple languages?
Then we can assume the project is popular. How long has the code been
around? How large is the pool of contributors? Are there training
programs offered? You get the gist.

Following up on my previous crazy ideas (did I hear someone yell "keep
'em coming"?), probably a set of tags like:

   book-exists (or book-chapter-exists)
   specific-training-offered
   translated-in-1-language (and its bigger brothers translated-in-5,
translated-in-10+languages)
   contributor-size-high (or low, and we can set a rule as we do for the
diversity metric used in incubation/graduation)
   codebase-age-baby, -young and -mature (in classes, like less than
1, 1-3, 3+ years old)

would help a user understand that Nova or Neutron are different from
(say) Barbican or Zaqar. These are just statements of facts, not a
qualitative assessment of any of the projects mentioned. At the same
time, I have the impression these facts would help our users make up
their mind.

Thoughts?

[1]
http://www.ict-prose.eu/2014/12/09/osseval-prose-open-source-evaluation-methodology-and-tool/




[openstack-dev] [QA] Meeting Thursday March 12th at 17:00 UTC

2015-03-11 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 12th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
03:30 ACDT
18:00 CET
12:00 CDT
10:00 PDT

-Matt Treinish




Re: [openstack-dev] [Ironic] Thinking about our python client UX

2015-03-11 Thread Robert Collins
On 8 March 2015 at 13:12, Devananda van der Veen
devananda@gmail.com wrote:
 Hi folks,

 Recently, I've been thinking more of how users of our python client
 will interact with the service, and in particular, how they might
 expect different instances of Ironic to behave.

 We added several extensions to the API this cycle, and along with
 that, also landed microversion support (I'll say more on that in
 another thread). However, I don't feel like we've collectively given
 nearly enough thought to the python client. It seems to work well
 enough for our CI testing, but is that really enough? What about user
 experience?

 In my own testing of the client versioning patch that landed on
 Friday, I noticed some pretty appalling errors (some unrelated to that
 patch) when pointing the current client at a server running the
 stable/juno code...

 http://paste.openstack.org/show/u91DtCf0fwRyv0auQWpx/


 I haven't filed specific bugs from this yet because I think the issue
 is large enough that we should talk about a plan first. I think that
 starts by agreeing on who the intended audience is and what level of
 forward-and-backward compatibility we are going to commit to [*],
 documenting that agreement, and then coming up with a plan to deliver
 it during the L cycle. I'd like to start the discussion now, so I
 have put it on the agenda for Monday, but I also expect it will be a
 topic at the Vancouver summit.

Since there was no debate on the compat thing, I've thrown up an
etherpad to start the discussion.

https://etherpad.openstack.org/p/ironic-client-ux

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-11 Thread Ihar Hrachyshka

On 03/11/2015 12:21 PM, Alan Pevec wrote:
 Hi,
 
 next Icehouse stable point release 2014.1.4 has been slipping last
 few weeks due to various gate issues, see Recently closed section
 in https://etherpad.openstack.org/p/stable-tracker for details. 
 Branch looks good enough now to push the release tomorrow
 (Thursdays are traditional release days) and I've put freeze -2s on
 the open reviews. I'm sorry about the short freeze period, but the
 branch was effectively frozen for the last two weeks due to gate issues, and
 further delay doesn't make sense. Attached is the output from the
 stable_freeze script for thawing after tags are pushed.
 
 At the same time I'd like to propose following freeze exceptions
 for the review by stable-maint-core:
 
 * https://review.openstack.org/144714 - Eventlet green threads not 
 released back to pool Justification: while not OSSA fix, it does
 have SecurityImpact tag
 
 * https://review.openstack.org/163035 - [OSSA 2015-005] Websocket 
 Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259) 
 Justification: pending merge on master and juno
 
 

There seems to be a regression [1] introduced into the Icehouse and Juno
trees by backporting [2] that may leave instances in the DB that are not
deletable (and we have no force-delete in Icehouse). The fix is merged
in master and backported to both stable branches [3]. I suggest we
consider it as an exception for the upcoming release to avoid the
regression in it.

[1]: https://bugs.launchpad.net/nova/+bug/1423952
[2]:
https://review.openstack.org/#/q/Ife712c43c5a61424bc68b2f5ab47cefdb46ac168,n,z
[3]:
https://review.openstack.org/#/q/I70f464120c798422f9a3d601b7cdf3b0a8320690,n,z

Comments?

Thanks,
/Ihar



Re: [openstack-dev] Avoiding regression in project governance

2015-03-11 Thread Ed Leafe

On 03/11/2015 02:40 PM, Jeremy Stanley wrote:
 I think we can make this work. Assuming more than N (to my mind,
  5 or so) deployments report they are using project X, we can say
  that this is used in production/POC/... and the number of
  nodes/hypervisors/etc.
  
  This makes it concrete and anonymous to avoid the fishing queries.
  It also allows our community to enter what they are doing in one
  place rather than answering multiple surveys. I am keen to avoid
  generic queries such as How many hypervisors are installed for
  public clouds using Xen but if we have an agreement that 5
  avoids company identification, I feel this is feasible.
 [...]
 
 I'm mildly concerned that this adds a strong incentive to start
 gaming responses to/participation in the user survey going forward,
 once it gets around that you just need N people to take the survey
 and claim to be using this project in production so that it can get
 the coveted "production-ready" tag. I'm probably a little paranoid
 and certainly would prefer to assume good faith on the part of
 everyone in our community, but as the community continues to grow
 that faith gets spread thinner and thinner.

Allow me to re-propose the idea that we are dealing with two separate
entities here, and need two separate entities to make these calls.

There is the development side of things, where people work hard to
get their ideas for OpenStack incorporated into the codebase. There is
also the distribution side, where people work hard to get a single
deployable package that others can take and make clouds with.

So what is production-ready? And how would you trust any such
designation? I think that it should be the responsibility of groups
outside of OpenStack development to make that call. What would that look
like? Well, let me give you an example.

I have Company A that wants to be known as the simplest OpenStack
distribution. I invest time and money in packaging the co-gated core
along with a few helpful other projects, and make that available to my
customers. There is also Company B, who wants to create the most
powerful, flexible packaging of OpenStack, and takes the time to not
only include the basics, but develops tools to handle the more complex
situations, such as cells or HA designs. This package is not for the
faint of heart, and for most businesses it would require contracting for
the services of Company B in order to get their installation up and
running, as well as fine-tuning it and upgrading it. There are also
Company C and Company D who target other end-user needs. They all draw
from the codebase that the OpenStack developers create, and, of course,
give their feedback as to what changes are needed to make their
particular customers happy. If they're smart, they'll supply developer
cycles to help make them happen.

The longer we try to be both sides of this process, the longer we will
continue to have these back-and-forths about stability vs. innovation.


-- Ed Leafe



Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Ihar Hrachyshka

On 03/11/2015 07:48 PM, Joe Gordon wrote:
 Out of sync Quotas --
 
 https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63
 
 The quotas code is quite racey (this is kind of a known if you look
 at the bug tracker). It was actually marked as a top soft spot
 during last fall's bug triage - 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

  There is an operator proposed spec for an approach here - 
 https://review.openstack.org/#/c/161782/
 
 Action: we should make a solution here a top priority for enhanced 
 testing and fixing in Liberty. Addressing this would remove a lot
 of pain from ops.
 
 
 To help us better track quota bugs I created a quotas tag:
 
 https://bugs.launchpad.net/nova/+bugs?field.tag=quotas
 
 Next step is re-triage those bugs: mark fixed bugs as fixed,
 deduplicate bugs etc.

(Being quite far from nova code, so ignore if not applicable)

I would like to note that other services experience races in quota
management too. Neutron has a spec approved to land in Kilo-3 that is
designed to introduce a new quota enforcement mechanism that is
expected to avoid (some of those) races:

https://github.com/openstack/neutron-specs/blob/master/specs/kilo/better-quotas.rst

I thought you may be interested in looking into it to apply similar
ideas to nova.

/Ihar



[openstack-dev] [neutron] [dhcp] dnsmasq scale problem

2015-03-11 Thread Wanjing Xu
One of my peers mentioned that people talked about dnsmasq scale problems at
the Paris summit.  So what were the scale problems?  Currently, there is one
dnsmasq instance for each subnet, and pool management and allocation seem to
be done in neutron; thus dnsmasq is light-weight.  So if dnsmasq does have
scale issues, what is the suggested solution, and what is the common practice
for the DHCP server?
Regards!
Wanjing Xu


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-11 Thread Ian Wienand
On 03/11/2015 08:10 AM, Robert Collins wrote:
 The wheel has been removed from PyPI and anyone installing testtools
 1.7.0 now will install from source which works fine.

I noticed the centos7 job failed with the source version.

The failing job was [1] where the back-trace looks like ~45 songs on
Python's greatest-hits album (pasted below).  The next run [2] it got
the 1.7.1 wheel and just worked.

Maybe this jumps out at someone as a known issue ...

-i

[1] 
http://logs.openstack.org/49/163249/1/check/check-tempest-dsvm-centos7/8dceac8/logs/devstacklog.txt.gz
[2] 
http://logs.openstack.org/49/163249/1/check/check-tempest-dsvm-centos7/f3b86d5/logs/devstacklog.txt.gz

---
Collecting testtools>=0.9.22 (from
fixtures>=0.3.14->oslo.concurrency>=1.4.1->keystone==2015.1.dev395)
Downloading 
http://pypi.IAD.openstack.org/packages/source/t/testtools/testtools-1.7.0.tar.gz
 (202kB)

 Installed 
/tmp/easy_install-mV2rSm/unittest2-1.0.0/.eggs/traceback2-1.4.0-py2.7.egg

 Installed 
/tmp/easy_install-mV2rSm/unittest2-1.0.0/.eggs/linecache2-1.0.0-py2.7.egg
 /usr/lib/python2.7/site-packages/setuptools/dist.py:291: UserWarning: The 
version specified (__main__.late_version instance at 0x34654d0) is an invalid 
version, this may not work as expected with newer versions of setuptools, pip, 
and PyPI. Please see PEP 440 for more details.
   details. % self.metadata.version
 Traceback (most recent call last):
   File string, line 20, in module
   File /tmp/pip-build-aGC1zC/testtools/setup.py, line 92, in module
 setup_requires=deps,
   File /usr/lib64/python2.7/distutils/core.py, line 112, in setup
 _setup_distribution = dist = klass(attrs)
   File /usr/lib/python2.7/site-packages/setuptools/dist.py, line 265, in 
__init__
 self.fetch_build_eggs(attrs['setup_requires'])
   File /usr/lib/python2.7/site-packages/setuptools/dist.py, line 310, in 
fetch_build_eggs
 replace_conflicting=True,
   File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 799, 
in resolve
 dist = best[req.key] = env.best_match(req, ws, installer)
   File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
1049, in best_match
 return self.obtain(req, installer)
   File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
1061, in obtain
 return installer(requirement)
   File /usr/lib/python2.7/site-packages/setuptools/dist.py, line 377, in 
fetch_build_egg
 return cmd.easy_install(req)
   File /usr/lib/python2.7/site-packages/setuptools/command/easy_install.py, 
line 620, in easy_install
 return self.install_item(spec, dist.location, tmpdir, deps)
   File /usr/lib/python2.7/site-packages/setuptools/command/easy_install.py, 
line 650, in install_item
 dists = self.install_eggs(spec, download, tmpdir)
   File /usr/lib/python2.7/site-packages/setuptools/command/easy_install.py, 
line 835, in install_eggs
 return self.build_and_install(setup_script, setup_base)
   File /usr/lib/python2.7/site-packages/setuptools/command/easy_install.py, 
line 1063, in build_and_install
 self.run_setup(setup_script, setup_base, args)
   File /usr/lib/python2.7/site-packages/setuptools/command/easy_install.py, 
line 1049, in run_setup
 run_setup(setup_script, args)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 240, in 
run_setup
 raise
   File /usr/lib64/python2.7/contextlib.py, line 35, in __exit__
 self.gen.throw(type, value, traceback)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 193, in 
setup_context
 yield
   File /usr/lib64/python2.7/contextlib.py, line 35, in __exit__
 self.gen.throw(type, value, traceback)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 164, in 
save_modules
 saved_exc.resume()
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 139, in 
resume
 compat.reraise(type, exc, self._tb)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 152, in 
save_modules
 yield saved
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 193, in 
setup_context
 yield
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 237, in 
run_setup
 DirectorySandbox(setup_dir).run(runner)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 267, in 
run
 return func()
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 236, in 
runner
 _execfile(setup_script, ns)
   File /usr/lib/python2.7/site-packages/setuptools/sandbox.py, line 46, in 
_execfile
 exec(code, globals, locals)
   File /tmp/easy_install-mV2rSm/unittest2-1.0.0/setup.py, line 87, in 
module
 'testtools.tests.matchers',
   File /usr/lib64/python2.7/distutils/core.py, line 152, in setup
 dist.run_commands()
   File /usr/lib64/python2.7/distutils/dist.py, line 953, in run_commands
 self.run_command(cmd)
   File /usr/lib64/python2.7/distutils/dist.py, line 971, in run_command
 cmd_obj.ensure_finalized()
  

Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Robert Collins
On 12 March 2015 at 12:07, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 03/11/2015 07:48 PM, Joe Gordon wrote:
 Out of sync Quotas --

 https://etherpad.openstack.org/p/PHL-ops-nova-feedback L63

 The quotas code is quite racey (this is kind of a known if you look
 at the bug tracker). It was actually marked as a top soft spot
 during last fall's bug triage -
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html

  There is an operator proposed spec for an approach here -
 https://review.openstack.org/#/c/161782/

 Action: we should make a solution here a top priority for enhanced
 testing and fixing in Liberty. Addressing this would remove a lot
 of pain from ops.


 To help us better track quota bugs I created a quotas tag:

 https://bugs.launchpad.net/nova/+bugs?field.tag=quotas

 Next step is re-triage those bugs: mark fixed bugs as fixed,
 deduplicate bugs etc.

 (Being quite far from nova code, so ignore if not applicable)

 I would like to note that other services experience races in quota
 management too. Neutron has a spec approved to land in Kilo-3 that is
 designed to introduce a new quota enforcement mechanism that is
 expected to avoid (some of those) races:

 https://github.com/openstack/neutron-specs/blob/master/specs/kilo/better-quotas.rst

 I thought you may be interested in looking into it to apply similar
 ideas to nova.

Seems to me that there is enough complexity around quotas that a
dedicated quota REST service could be a good way to abstract that out
- then in consuming code you can reserve, consume and free
appropriately, and the service can take care of being super diligent
about races, correctness, and we'd have one place both in code and
deployments to tune for speed.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Glance] Proposal to change Glance meeting time.

2015-03-11 Thread Kuvaja, Erno
I'd prefer 1400 as well. But "A Foolish Consistency is the Hobgoblin of Little
Minds"...

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
 Sent: 11 March 2015 20:40
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
 time.
 
 I'd prefer to go with 1400 UTC unless there's a majority for 1500UTC.
 
 P.S. It's my feeling that ML announcements and conversations are not
 effective when taking poll from wider audience so we'd discuss this a bit
 more in the next meeting and merge the votes.
 
 Thanks,
 -Nikhil
 
 
 From: Louis Taylor lo...@kragniz.eu
 Sent: Wednesday, March 11, 2015 10:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] Proposal to change Glance meeting
 time.
 
 On Wed, Mar 11, 2015 at 02:25:26PM +, Ian Cordasco wrote:
  I have no opinions on the matter. Either 1400 or 1500 work for me. I
  think there are a lot of people asking for it to be at 1500 instead though.
  Would anyone object to changing it to 1500 instead (as long as it is
  one consistent time for the meeting)?
 
 I have no problem with that. I'm +1 on a consistent time, but don't mind
 when it is.
 



Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-11 Thread Robert Collins
On 12 March 2015 at 12:37, Ian Wienand iwien...@redhat.com wrote:
 On 03/11/2015 08:10 AM, Robert Collins wrote:
 The wheel has been removed from PyPI and anyone installing testtools
 1.7.0 now will install from source which works fine.

 I noticed the centos7 job failed with the source version.

 The failing job was [1] where the back-trace looks like ~45 songs on
 Python's greatest-hits album (pasted below).  The next run [2] it got
 the 1.7.1 wheel and just worked.

 Maybe this jumps out at someone as a known issue ...

 -i

 [1] 
 http://logs.openstack.org/49/163249/1/check/check-tempest-dsvm-centos7/8dceac8/logs/devstacklog.txt.gz
...
File /tmp/easy_install-mV2rSm/unittest2-1.0.0/unittest2/case.py, line 
 16, in module
  ImportError: cannot import name range
  Complete output from command python setup.py egg_info:

That's a missing symbol in six, which means unittest2 needs a versioned
dep on six - please file a unittest2 bug and, if you have the time,
figure out when range was added to six.
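As a throwaway illustration of what the versioned dep buys: the fix is a requirement like six>=N in unittest2's setup metadata (I haven't checked which six release first shipped moves.range, so the numbers below are placeholders), and the comparison such a requirement implies is just tuple ordering on the version components:

```python
# Toy version comparison behind a requirement such as "six>=1.4.0".
# The minimum version here is a placeholder, not a verified fact about six.
def parse_version(version):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed, minimum):
    """True if the installed version meets the minimum requirement."""
    return parse_version(installed) >= parse_version(minimum)

print(satisfies("1.3.0", "1.4.0"))  # False: too old, symbol may be missing
print(satisfies("1.9.0", "1.4.0"))  # True
```

Real packaging tools do this with proper PEP 440 version parsing, but the principle is the same: refuse to build against a six too old to carry the symbol you import.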

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] [dhcp] dnsmasq scale problem

2015-03-11 Thread Kevin Benton
The easy way to scale it is to just launch more dhcp agents. The scaling
issue arises when a single dhcp agent is managing thousands of dnsmasq
instances and interfaces.

On Wed, Mar 11, 2015 at 4:07 PM, Wanjing Xu wanjing...@hotmail.com wrote:

 One of my peers mentioned that people talked about dnsmasq scale problems
 at paris summit.  So what was the scale problems?  Currently, there is one
 dnsmasq instance for each subnet.  And pool management and allocation
 seemed to be done at neutron.  Thus dnsmasq is light-weighted.  So if
 dnsmasq does have scale issues, what is the suggested solution and what is
 the common practice in using dhcp server?

 Regards!

 Wanjing Xu





-- 
Kevin Benton


Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-11 Thread Ben Swartzlander


On 03/04/2015 09:33 AM, Danny Al-Gaaf wrote:

Am 04.03.2015 um 15:18 schrieb Csaba Henk:

Hi Danny,

- Original Message -

From: Danny Al-Gaaf danny.al-g...@bisect.de
To: Deepak Shetty dpkshe...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
ceph-de...@vger.kernel.org
Sent: Wednesday, March 4, 2015 3:05:46 PM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila


...

Another level of indirection. I really like the approach of filesystem
passthrough ... the only critical question is if virtfs/p9 is still
supported in some way (and the question if not: why?).

That only seems to be a biggie, isn't it?

Yes, it is.


We -- Red Hat -- considered a similar, virtfs based driver for GlusterFS
but we dropped that plan exactly for virtfs being abandonware.

As far as I know it was meant to be a research project, and providing
a fairly well working POC it was concluded -- but Deepak knows more of
the story.

Would like to understand why it was abandoned. I see the need of
filesystem passthrough in the area of virtualization. Is there another
solution available?


Danny, I read through this thread and I wasn't sure I had anything to 
add, but now that it's gone quiet, I'm wondering what your plan is.


I wasn't aware that VirtFS is being considered abandonware but it did 
seem to me that it wasn't being actively maintained. After looking at 
the alternatives I considered VirtFS to be the best option for doing 
what it does, but its applicability is so narrow that it's hard to find 
it appealing. I have the following problems with VirtFS:
* It requires a QEMU/KVM or Xen hypervisor. VMware and Hyper-V have zero 
support and no plans to support it.
* It requires a Linux or BSD guest. Windows guests can't use it at all. Some 
googling turned up various projects that might give you a headstart 
writing a Windows VirtFS client, but we're a long way from having 
something usable.
* VirtFS boils the filesystem down to the bare minimum, thanks to its P9 
heritage. Interesting features like caching, locking, security 
(authentication/authorization/privacy), name mapping, and multipath I/O 
are either not implemented or delegated to the hypervisor which may or 
may not meet the needs of the guest application.
* Applications designed to run on multiple nodes with shared filesystem 
storage tend to be tested and supported on NFS or CIFS because those 
have been around forever. VirtFS is tested and supported by nobody so 
getting application level support will be impossible.


The third one is the one that kills it for me. VirtFS is useful in 
extremely narrow use cases only. Manila is trying to provide shared 
filesystems in as wide a set of applications as possible. VirtFS offers 
nothing that can't also be achieved another way. That's not to say the 
other way is always ideal. If your use case happens to match exactly 
what VirtFS does well (QEMU hypervisor, Linux guest, no special 
filesystem requirements) then the alternatives may not look so good.


I'm completely in favor of seeing virtfs support go into Nova and doing 
integration with it from the Manila side. I'm concerned though that it 
might be a lot of work, and it might benefit only a few people. Have 
you found any others who share your goal and are willing to help?




Danny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Fox, Kevin M
Also, can 
https://review.openstack.org/#/c/163647/2
 be considered. All it does is move code from the accepted nfs.py to posix.py 
as described in the approved spec. This enables all POSIX-compliant file 
systems to be used, such as Lustre instead of just NFS. It is already tested 
through the NFS tests.

The posix backup driver has been cooking in one form or another since Juno. It 
would be a shame to need to wait over a year and a half for it.

Thanks,
Kevin

From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: Wednesday, March 11, 2015 10:41 AM
To: Jay Bryant; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] K3 Feature Freeze Exception request for 
bp/nfs-backup

Discussed in the weekly meeting, plenty of sponsors, agreed to try to get it 
through today

On 11 March 2015 at 18:48, Jay S. Bryant 
jsbry...@electronicjungle.net wrote:
All,

I will sponsor this.

Patch just needs to be rebased.

Jay


On 03/11/2015 10:20 AM, Tom Barron wrote:
I hereby solicit a feature freeze exception for the NFS backup review [1].

Although only about 140 lines of non-test code, this review completes
the implementation of the NFS backup blueprint [2].  Most of the
actual work for this blueprint was a refactor of the Swift backup
driver to
abstract the backup/restore business logic from the use of the Swift
object store itself as the backup repository.  With the help of Xing
Yang, Jay Bryant, and Ivan Kolodyazhny, that review [3] merged
yesterday and made the K3 FFE deadline.

In evaluating this FFE request, please take into account the following
considerations:

* Without the second review, the goal of the blueprint remains
  unfulfilled.

* This code was upstream in January and was essentially complete
  with only superficial changes since then.

* As of March 5 this review had two core +2s.  Delay since then has
  been entirely due to wait time on the dependent review and need
  for rebase given upstream code churn.

* No risk is added outside NFS backup service itself since the
  changes to current code are all in the core refactor and that
  is already merged.

If this FFE is granted, I will give the required rebase my immediate
attention.

Thanks.

- --
Tom Barron
t...@dyncloud.net

[1] - https://review.openstack.org/#/c/149726
[2] - https://blueprints.launchpad.net/cinder/+spec/nfs-backup
[3] - https://review.openstack.org/#/c/149725

__________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-11 Thread Ihar Hrachyshka
On 03/10/2015 06:34 PM, Gabriel Bezerra wrote:
 Em 10.03.2015 14:24, Carl Baldwin escreveu:
 Neutron currently does not enforce the uniqueness, or
 non-overlap, of subnet cidrs within the address scope for a
 single tenant.  For example, if a tenant chooses to use
 10.0.0.0/24 on more than one subnet, he or she is free to do so.
 Problems will arise when trying to connect a router between these
 subnets but that is left up to the tenant to work out.
 
 In the current IPAM rework, we had decided to allow this overlap
 in the reference implementation for backward compatibility.
 However, we've hit a snag.  It would be convenient to use the
 subnet cidr as the handle with which to refer to a previously
 allocated subnet when talking to IPAM.  If overlap is allowed,
 this is not possible and we need to come up with another
 identifier such as Neutron's subnet_id or another unique IPAM
 specific ID.  It could be a burden on an external IPAM system --
 which does not allow overlap -- to work with a completely
 separate identifier for a subnet.
 
 I do not know of anyone using this capability (or mis-feature)
 of Neutron.  I would hope that tenants are aware of the issues
 with trying to route between subnets with overlapping address
 spaces and would avoid it.  Is this potential overlap something
 that we should really be worried about?  Could we just add the
 assumption that subnets do not overlap within a tenant's scope?
 
 An important thing to note is that this topic is different than 
 allowing overlap of cidrs between tenants.  Neutron will continue
 to allow overlap of addresses between tenants and support the
 isolation of these address spaces.  The IPAM rework will support
 this.
 
 Carl Baldwin
 
 
 I'd vote against such a restriction, but for throwing an
 error when creating a router between overlapping subnets.
 
 I can imagine a tenant running multiple instances of an
 application, each one with its own network that uses the same
 address range, to minimize configuration differences between them.
 

I agree with Gabriel on the matter. There is nothing inherently wrong
about a tenant running multiple isolated network setups that use
overlapping addresses (as there is nothing wrong about multiple
tenants doing the same).

There seems to be a value in disallowing overlapping subnets attached
to the same router though.
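
For illustration, the router-level overlap check this implies can be sketched
with Python's stdlib ipaddress module (the function name and policy here are
illustrative, not Neutron code):

```python
import ipaddress

def find_overlaps(cidrs):
    """Return pairs of CIDRs that overlap.

    Useful to reject attaching overlapping subnets to the same router
    while still allowing them to coexist elsewhere in the tenant.
    """
    nets = [ipaddress.ip_network(c) for c in cidrs]
    overlaps = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                overlaps.append((cidrs[i], cidrs[j]))
    return overlaps

# Two isolated networks may both use 10.0.0.0/24; a router that would
# see both should refuse the second attachment.
print(find_overlaps(["10.0.0.0/24", "10.0.0.0/24"]))  # overlapping pair
print(find_overlaps(["10.0.0.0/24", "10.0.1.0/24"]))  # no overlap
```

The same check naturally catches partial overlap too, e.g. a /16 containing a /24.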

All in all, there seems to be no need to limit neutron API just
because most external IPAM implementations do not seem to care about
supporting the use case.

It's also an issue that the change proposed is backwards incompatible.

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need input on possible API change for bug #1420848

2015-03-11 Thread Chris Friesen


I can see how to do a v2 extension following the example given for 
extended_services.py and extended_services_delete.py.  That seems to be working now.


I'm not at all clear on how to go about doing the equivalent for v2.1.  Does 
that use the api/openstack/compute/plugins/v3/ subdirectory?   Is it possible to 
do the equivalent to the v2 extended_services.py (where the code in 
api/openstack/compute/plugins/v3/services.py checks to see if the other 
extension is enabled) or do I have to write a whole new extension that builds on 
the output of api/openstack/compute/plugins/v3/services.py?


Thanks,
Chris


On 03/11/2015 09:51 AM, Chen CH Ji wrote:

I would think an extension on v2 is needed.
For v2.1, a microversion is one way, but I'm not very sure it's needed.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC


From: Chris Friesen chris.frie...@windriver.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 03/11/2015 04:35 PM
Subject: [openstack-dev] [nova] need input on possible API change for bug 
#1420848






Hi,

I'm working on bug #1420848 which addresses the issue that doing a
service-disable followed by a service-enable against a down compute node
will result in the compute node going up for a while, possibly causing delays
to operations that try to schedule on that compute node.

The proposed solution is to add a new reported_at field in the DB schema to
track when the service calls _report_state().
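
To illustrate the distinction under discussion, here is a hedged sketch
(plain Python, not Nova code; the is_service_up helper and the
SERVICE_DOWN_TIME constant are illustrative) of why a reported_at timestamp
avoids the spurious "up" state:

```python
import datetime

SERVICE_DOWN_TIME = 60  # seconds; illustrative liveness threshold

def is_service_up(service, now=None):
    """Consider a service 'up' only if it recently reported state itself.

    Sketch of the fix discussed for bug #1420848: prefer the proposed
    reported_at timestamp (set only by _report_state) and fall back to
    updated_at, which is also bumped by unrelated DB writes such as
    service-disable/service-enable -- the source of the spurious 'up'.
    """
    now = now or datetime.datetime.utcnow()
    last = service.get("reported_at") or service.get("updated_at")
    if last is None:
        return False
    return (now - last).total_seconds() <= SERVICE_DOWN_TIME

now = datetime.datetime(2015, 3, 11, 12, 0, 0)
stale = {"updated_at": now - datetime.timedelta(seconds=5),   # bumped by service-enable
         "reported_at": now - datetime.timedelta(seconds=600)}
print(is_service_up(stale, now))  # False: the node never actually reported in
```

With updated_at alone, the same record would look "up" for a while after a
service-enable even though the compute node is down.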

The backend is straightforward, but I'm trying to figure out the best way to
represent this via the REST API response.

Currently the response includes an updated_at property, which maps directly to
the auto-updated updated_at field in the database.

Would it be acceptable to just put the reported_at value (if it exists) in the
updated_at property?  Logically the reported_at value is just a
determination of when the service updated its own state, so an argument could be
made that this shouldn't break anything.

Otherwise, by my reading of
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK it
seems like if I wanted to add a new reported_at property I would need to do it
via an API extension.

Anyone have opinions?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [nova] readout from Philly Operators Meetup

2015-03-11 Thread Sam Morrison

 On 11 Mar 2015, at 11:59 pm, Sean Dague s...@dague.net wrote:
 
 Nova Rolling Upgrades
 -
 
 Most people really like the concept, couldn't find anyone that had
 used it yet because Neutron doesn't support it, so they had to do big
 bang upgrades anyway.

I couldn’t make it to the ops summit but we (NeCTAR) have been using the 
rolling upgrades from Havana to Icehouse and from Icehouse to Juno, 
and it has worked great. (We’re still using nova-network.)

Cheers,
Sam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Nitpicking in code reviews

2015-03-11 Thread Kuvaja, Erno
Hi all,

Following the code reviews lately I've noticed that we (the fan club seems to 
be growing on a weekly basis) have been growing a culture of nitpicking [1] and 
bikeshedding [2][3] over almost every single change.

Seriously my dear friends, the following things are not worth a -1 vote, or 
even a comment:

1)  Minor spelling errors in commit messages (as long as the message comes 
through and flags are not misspelled).

2)  Minor spelling errors in comments (docstrings and documentation are one 
thing, but comments, come on).

3)  Syntax that is functional, readable and does not break consistency, but 
does not please your poetic sensibilities.

4)  Things you just did not check were already addressed. After you have gone 
through the whole change, look at your comments again and think twice about 
whether your concern/question/whatsoever was addressed somewhere other than 
where your first intuition would have dropped it.

We have a relatively high review volume for Glance at the moment and this 
nitpicking and bikeshedding does not help anyone. At best it just tightens 
nerves and breaks our group. Obviously, if there are "you had ONE job" kinds 
of situations, or a relatively high number of errors combined with something 
serious, it's reasonable to ask for the typos to be fixed on the way as well. 
If the reason is a need to boost your statistics, a personal perfectionist 
nature, or actually I do not care what: just stop, or go and do it somewhere 
else.

Love and pink ponies,

-  Erno

[1] http://www.urbandictionary.com/define.php?term=nitpicking
[2] http://bikeshed.com
[3] http://en.wiktionary.org/wiki/bikeshedding

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend rally verfiy to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-11 Thread Boris Pavlovic
Alex,


  * rally plugin should be a part of project (for example, located in
 functional tests directory)


There are 2 issues with such solution:

  1) If Rally didn't load the plugin, the command rally verify <project> won't
exist.
  2) Putting some strange Rally plugin into the source of other projects will
be quite a complicated task.
  I believe we should have at least a POC before even asking for such
stuff.

  * use {project url} instead of {project name} in rally verify command,
 example:


I agree here with Andrey, it is bad UX. Forcing people to type URLs every
time is terrible.
They will build their own tools on top of such a solution.

What about "rally verify nova start --url <url>", where --url is an optional
argument?
If --url is not specified, a default URL is used.
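
A minimal sketch of that UX with Python's argparse (the command layout and
default-URL mapping are assumptions for illustration, not Rally's actual CLI):

```python
import argparse

# Hypothetical defaults; in a real plugin these would come from the
# project's own configuration, not a hard-coded table.
DEFAULT_URLS = {
    "nova": "https://github.com/openstack/nova",
    "neutron": "https://github.com/openstack/neutron",
}

def parse_verify_args(argv):
    """Parse 'rally verify <project> start [--url <url>]' style arguments."""
    parser = argparse.ArgumentParser(prog="rally verify")
    parser.add_argument("project", help="project name, e.g. nova")
    parser.add_argument("action", choices=["start"], help="verification action")
    parser.add_argument("--url", help="override the default repository URL")
    args = parser.parse_args(argv)
    # Fall back to the well-known default when --url is not given.
    args.url = args.url or DEFAULT_URLS.get(args.project)
    return args

args = parse_verify_args(["nova", "start"])
print(args.project, args.url)  # nova https://github.com/openstack/nova
```

The point is simply that the short name stays the primary handle, while the
URL remains available for non-default repositories.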


Best regards,
Boris Pavlovic






On Wed, Mar 11, 2015 at 7:14 PM, Andrey Kurilin akuri...@mirantis.com
wrote:

  $ rally verify https://github.com/openstack/nova start

 As one of end-users of Rally, I dislike such construction, because I don't
 want to remember links to repos, they are too long for me:)

 On Wed, Mar 11, 2015 at 12:49 PM, Aleksandr Maretskiy 
 amarets...@mirantis.com wrote:

 The idea is great, but IMHO we can move all project-specific code out of
 rally, so:

   * rally plugin should be a part of project (for example, located in
 functional tests directory)
   * use {project url} instead of {project name} in rally verify command,
 example:

 $ rally verify https://github.com/openstack/nova start


 On Tue, Mar 10, 2015 at 6:01 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi,

 I like this idea, we use Rally for OpenStack clouds verification at
 scale and it is the real issue - how to run all functional tests from each
 project with the one script. If Rally will do this, I will use Rally to run
 these tests.

 Thank you!

 On Mon, Mar 9, 2015 at 6:04 PM, Chris Dent chd...@redhat.com wrote:

 On Mon, 9 Mar 2015, Davanum Srinivas wrote:

  2. Is there a test project with Gabbi based tests that you know of?


 In addition to the ceilometer tests that Boris pointed out gnocchi
 is using it as well:

 https://github.com/stackforge/gnocchi/tree/master/gnocchi/tests/gabbi

  3. What changes if any are needed in Gabbi to make this happen?


 I was unable to tell from the original what this is and how gabbi
 is involved but the above link ought to be able to show you how
 gabbi can be used. There's also the docs (which could do with some
 improvement, so suggestions or pull requests welcome):

http://gabbi.readthedocs.org/en/latest/

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 




 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc






 --
 Best regards,
 Andrey Kurilin.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Ryu/ofagent CI outage

2015-03-11 Thread trinath.soman...@freescale.com
Hi-

Regarding the CI offline mails, I previously received an email from Anita. 
Please find the summary below.

Please set your account listing to down on this page:
https://wiki.openstack.org/wiki/ThirdPartySystems

Following these instructions: If your system is going down or having problems, 
change the entry to {{ThirdPartySystemTableEntryDown|your
ci system name}}, which is found on the 
https://wiki.openstack.org/wiki/ThirdPartySystems page.

Leave the details of your system status on this page:
https://wiki.openstack.org/wiki/ThirdPartySystems/Ryu_CI 

This is the expected workflow of offline systems communication.
Posting to this mailing list is not part of the communication.


Hope this helps when you further communicate the CI status.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: YAMAMOTO Takashi [mailto:yamam...@valinux.co.jp] 
Sent: Wednesday, March 11, 2015 12:33 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Ryu/ofagent CI outage

hi,

Ryu/ofagent CI will be offline during the next weekend for scheduled 
maintenance.  sorry for the inconvenience.

YAMAMOTO Takashi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] Mellanox request for permission for Nova CI

2015-03-11 Thread Nurit Vilosny
Hi ,
I would like to ask for permission for our CI to start commenting on the Nova 
branch.
Mellanox has been engaged in pci pass-through features for quite some time now.
We have had an operating Neutron CI for ~2 years, and since the pci pass-through 
features are part of Nova as well, we would like to start monitoring Nova's 
patches.
Our CI has been silently running locally over the past couple of weeks, and I 
would like to step ahead and start commenting in non-voting mode.
During this period we will closely monitor our systems and be ready to solve 
any problem that might occur.

Thanks,
Nurit Vilosny
SW Cloud Solutions Manager

Mellanox Technologies
13 Zarchin St. Raanana, Israel
Office: 972-74-712-9410
Cell: 972-54-4713000
Fax: 972-74-712-9111


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Do No Evil

2015-03-11 Thread Matthias Runge
On 09/03/15 15:54, Doug Hellmann wrote:
 

 
 Not everyone realizes that many of the distros run our tests against the
 packages they build, too. So our tool choices trickle downstream beyond
 our machines and our CI environment. In this case, because the tool is a
 linter, it seems like the distros wouldn't care about running it. But if
 it was some sort of test runner or other tool that might be used for
 functional tests, then they may well consider running it a requirement
 to validate the packages they create.

For Fedora, we're also running Horizon's unit tests during package build.

I would assume the same is true for other distros as well.

Because our build system is not allowed to pull anything from the
internet, we rely exclusively on already available software packages.
Due to the license ("good, not evil"), we cannot have jslint as a package.
For the same reason, pulling it from the net during a build doesn't work
either.

Another warning: users expect development tools to be available as essential, 
because they might require them to CUSTOMIZE Horizon.

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress]How to add tempest tests for testing murano drive

2015-03-11 Thread Aaron Rosen
The ci runs: `cp -r contrib/tempest/tempest /opt/stack/tempest` so all you
need to do is add your file and it will get copied into tempest.

Aaron

On Tue, Mar 10, 2015 at 10:31 AM, Wong, Hong hong.w...@hp.com wrote:

  Hi Aaron,



 I just want to confirm how CI is running the congress tempest tests in its
 environment as I am about to check in a tempest test for testing murano
 deployment.  If I check in the test script to
 congress/contrib/tempest/tempest/scenario/congress_datasources, the CI will
 take care of running the test by copying it to
 stack/tempest/tempest/scenario/congress_datasources ?  So, I don't need to
 worry about adding python-congressclient and python-muranoclient in
 stack/tempest/requirements.txt right ?



 Thanks,

 Hong







 *From:* Aaron Rosen [mailto:aaronoro...@gmail.com]

 *Sent:* Monday, March 09, 2015 9:28 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Congress]How to add tempest tests for
 testing murano drive



 Hi Hong,



 I think you should be able to run the tempest tests with ./run_tempest.sh -N
 which by default uses site-packages so they should be installed by the
 devstack script. If you want to run tempest via tox and venv you'll need to
 do:



 echo python-congressclient >> requirements.txt

 echo python-muranoclient >> requirements.txt



 Then have tox build the venv.



 Best,



 Aaron



 On Mon, Mar 9, 2015 at 8:28 PM, Wong, Hong hong.w...@hp.com wrote:

 Hi Tim and Aaron,



 I got the latest changes from r157166 and I see the
 thirdparty-requirements.txt file where you can define the murano client
 (it’s already there), so the unit tests for murano driver can run out from
 the box.  However, this change is only in congress, so the tempest tests
 (tempest/ directory where congress tempest tests need to copy to as
 described from readme file) required murano and congress clients will still
 have issue as it doesn’t have the thirdparty requirement file concept.
 Will r157166 changes also going to be implemented in tempest package ?



 Thanks,

 Hong



 --



 Message: 10

 Date: Mon, 2 Mar 2015 15:39:11 +

 From: Tim Hinrichs thinri...@vmware.com

 To: OpenStack Development Mailing List (not for usage questions)

   openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Congress]How to add tempest tests for

   testing murano driver




 Hi Hong,



 Aaron started working on this, but we don't have anything in place yet, as
 far as I know.  Here's a starting point.



 https://review.openstack.org/#/c/157166/



 Tim



 On Feb 26, 2015, at 2:56 PM, Wong, Hong 
 hong.w...@hp.com wrote:



 Hi Aaron,



 I am new to congress and trying to write tempest tests for the newly added
 murano datasource driver.  Since the murano datasource tempest tests
 require both the murano and python-congress clients as dependencies, I was
 told that I can't just simply add the requirements in the
 tempest/requirements.txt file as both packages are in not in the main
 branch, so CI will not be able to pick them up.  Do you know of any
 workaround ?



 Thanks,

 Hong




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization Questions

2015-03-11 Thread Kekane, Abhishek
Hi Devs,

As another alternative we can use the start/stop APIs instead of 
shelve/unshelve.
Following are the details of start/stop and shelve/unshelve in terms of 
cpu/memory/disk released and fast respawning:

API pair         cpu/memory released  disk released                           fast respawning
start/stop       No                   No                                      Yes
shelve/unshelve  Yes                  Yes (not if shelved_offload_time = -1)  No (if booted from image)


In order to make unshelve fast enough, we need to preserve the instance root 
disk on the compute node,
which I have proposed in the shelve-partial-offload spec [1].

In the case of the start/stop APIs, cpu/memory are not released/reassigned. We 
can modify these APIs to release
the cpu and memory while stopping the instance and reassign the same while 
starting the instance. In this case
the rescheduling logic also needs to be modified to reschedule the instance on 
a different host if the required resources
are not available while starting the instance. This is similar to what I have 
implemented in [2], "Improving the
performance of unshelve API".

[1] https://review.openstack.org/#/c/135387/
[2] https://review.openstack.org/#/c/150344/

Please let me know your opinion on whether we can modify the start/stop APIs as 
an alternative to the shelve/unshelve APIs.

Thank You,

Abhishek Kekane


From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 24 February 2015 12:47
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization 
Questions

Hi Duncan,

Thank you for the inputs.

@Community-Members
I want to know if there are any other alternatives to improve the performance 
of the unshelve api (booted from image only).
Please give me your opinion on the same.

Thank You,

Abhishek Kekane



From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: 16 February 2015 16:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization 
Questions

There has been some talk in cinder meetings about making cinder-glance 
interactions more efficient. They are already optimised in some deployments, 
e.g. ceph glance and ceph cinder, and some backends cache glance images so that 
many volumes created from the same image becomes very efficient. (search the 
meeting logs or channel logs for 'public snapshot' to get some entry points 
into the discussions)

I'd like to see more work done on this, and perhaps re-examine a cinder backend 
to glance. This would give some of what you're suggesting (particularly fast, 
low traffic un-shelve), and there is more that can be done to improve that 
performance, particularly if we can find a better performing generic CoW 
technology than QCOW2.
As suggested in the review, in the short term you might be better experimenting 
with moving to boot-from-volume instances if you have a suitable cinder 
deployed, since that gives you some of the performance improvements already.

On 16 February 2015 at 12:10, Kekane, Abhishek 
abhishek.kek...@nttdata.commailto:abhishek.kek...@nttdata.com wrote:
Hi Devs,

Problem Statement: Performance and storage efficiency of shelving/unshelving an 
instance booted from image are far worse than for an instance booted from volume.

When you unshelve hundreds of instances at the same time, instance spawning 
time varies and it mainly depends on the size of the instance snapshot and
the network speed between glance and nova servers.

If you have configured the file store (shared storage) as a backend in Glance for 
storing images/snapshots, then it's possible to improve the performance of
unshelving instances dramatically by configuring nova.image.download.FileTransfer 
in nova. In this case, it simply copies the instance snapshot as if it were
stored on the local filesystem of the compute node. But then again, in this 
case it is observed that the network traffic between the shared storage servers and
nova increases enormously, resulting in slow spawning of the instances.
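
(For context, the nova-side wiring for direct file transfer looks roughly like 
the nova.conf fragment below. This is a sketch from memory; the section names, 
the 'shared' filesystem id, and the mountpoint are illustrative and must be 
checked against your nova version and your glance filesystem store configuration.)

```ini
[DEFAULT]
# Let nova use glance's direct file URLs instead of downloading over HTTP
allowed_direct_url_schemes = file

[image_file_url]
# Logical names of shared filesystems that both glance and nova can see
filesystems = shared

[image_file_url:shared]
# Must match the id of the corresponding glance filesystem store
id = <glance-store-id>
# Where the shared glance image directory is mounted on the compute node
mountpoint = /var/lib/glance/images
```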

I would like to gather some thoughts about how we can improve the performance 
of the unshelve API (booted from image only) in terms of downloading large
instance snapshots from glance.

I have proposed a nova-specs [1] to address this performance issue. Please take 
a look at it.

During the last nova mid-cycle summit, Michael Still 
(https://review.openstack.org/#/q/owner:mikal%2540stillhq.com+status:open,n,z)
suggested alternative solutions to tackle this issue.

Storage solutions like Ceph (software-based) and NetApp (hardware-based) support 
exposing images from glance to nova-compute and cinder-volume with a
copy-on-write feature. This way there will be no need to download the instance 
snapshot, and the unshelve API will be much faster than getting it
from glance.

Do you think the above performance issue should be handled in the OpenStack 
software as described in nova-specs [1] or storage solutions like

Re: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-11 Thread Fredy Neeser

[resent with a clarification of what [6] is doing towards EoM]


OK, I looked at the devstack patch

   [6] Configure mtu for ovs with the common protocols

but no -- it doesn't do the job for the VLAN-based separation
of native and encapsulated traffic, which I'm using in [1] for
a clean (correct MTUs ...) VXLAN setup with single-NIC compute nodes.

As shown in Figure 2 of [1], I'm using VLANs 1 and 12 for native
and encapsulated traffic, respectively.  I needed to manually
create br-ex ports br-ex.1 (VLAN 1) and br-ex.12 (VLAN 12) and
configure their MTUs.  Moreover, I needed a small VLAN awareness
patch to ensure that the Neutron router gateway port qg-XXX uses VLAN 1.


Consider the example below:

Example

# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
...
6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
7: br-ex.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
       valid_lft forever preferred_lft forever
8: br-ex.12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.12
       valid_lft forever preferred_lft forever


# ovs-vsctl show
c0618b20-1eeb-486c-88bd-fb96988dbf96
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port vxlan-c0a80115
            Interface vxlan-c0a80115
                type: vxlan
                options: {df_default=true, in_key=flow, local_ip=192.168.1.14, out_key=flow, remote_ip=192.168.1.21}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex.12
            tag: 12
            Interface br-ex.12
                type: internal
        Port br-ex.1
            tag: 1
            Interface br-ex.1
                type: internal
        Port eth0
            tag: 1
            trunks: [1, 12]
            Interface eth0
        Port qg-e046ec4e-e3
            tag: 1
            Interface qg-e046ec4e-e3
                type: internal
        Port br-ex
            Interface br-ex
                type: internal

/Example


My home LAN (external network) is enabled for Jumbo frames, as can be
seen from eth0's MTU of 1554, so my path MTU for VXLAN is 1554 bytes.

VLAN 12 supports encapsulated traffic with an MTU of 1554 bytes.

This allows my VMs to use the standard MTU of 1500 regardless
of whether they are on different compute nodes (so they communicate
via VXLAN) or on the same compute node, i.e., the effective
L2 segment MTU is 1500 bytes.  Because this is the default,
I don't need to change guest MTUs at all.
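
One consistent accounting for the 1554-byte figure (my arithmetic, not from 
the thread): a 1500-byte guest IP packet is carried in an inner Ethernet 
frame, possibly VLAN-tagged, plus the VXLAN/UDP/IPv4 encapsulation. The outer 
interface MTU counts the outer IP packet, so the outer Ethernet header itself 
is excluded:

```python
# VXLAN-over-IPv4 encapsulation overhead on top of the guest's 1500-byte MTU
INNER_ETH = 14   # encapsulated inner Ethernet header
INNER_VLAN = 4   # optional 802.1Q tag on the inner frame
VXLAN_HDR = 8    # VXLAN header
OUTER_UDP = 8    # outer UDP header
OUTER_IPV4 = 20  # outer IPv4 header

overhead = INNER_ETH + INNER_VLAN + VXLAN_HDR + OUTER_UDP + OUTER_IPV4
guest_mtu = 1500
path_mtu = guest_mtu + overhead
print(overhead, path_mtu)  # 54 1554 -- matches eth0 / br-ex.12
```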

For bridge br-ex, I configured two internal ports br-ex.{1,12}
as shown in the table below:

  br-ex Port       VLAN   MTU    Remarks
  ----------------------------------------------------------
  br-ex.1             1   1500
  br-ex.12           12   1554
  br-ex               -      -   Unused
  qg-e046ec4e-e3      1          VLAN awareness patch (cf. [1])


All native traffic (including routed traffic to/from a Neutron router
and traffic generated by the Network Node itself) uses VLAN 1 on
my LAN, with an MTU of 1500 bytes.

For my small VXLAN test setup, I didn't need to assign different IPs to
br-ex.1 and br-ex.12, both are assigned 192.168.1.14/24.

So why doesn't [6] do the right thing? --

Well, obviously [6] does not add the VLAN awareness that I need
for the Neutron qg-XXX gateway ports.

Moreover, [6] tries to auto-configure the L2 segment MTU based on
guessing the path MTU by determining the MTU of an interface associated
with $TUNNEL_ENDPOINT_IP, which is 192.168.1.14 in my case.

It does this essentially by querying

  # ip -o address | awk '/192.168.1.14/ {print $2}'

getting the MTU of that interface and then subtracting out the overhead
for VXLAN encapsulation.

However, in my case, the above lookup would return *two* interfaces:
  br-ex.1
  br-ex.12
so the patch [6] wouldn't know which interface's MTU it should take.
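
The ambiguity is easy to reproduce offline; here is a sketch of the lookup's 
logic in Python (the sample lines are abbreviated stand-ins for real 
`ip -o address` output):

```python
# Two interfaces carry the tunnel endpoint IP (cf. the 'ip a' output above),
# so a lookup keyed on the IP alone is ambiguous.
sample_ip_o_address = """\
7: br-ex.1    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
8: br-ex.12    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.12
"""

# Equivalent of: ip -o address | awk '/192.168.1.14/ {print $2}'
matches = [line.split()[1] for line in sample_ip_o_address.splitlines()
           if "192.168.1.14" in line]
print(matches)  # ['br-ex.1', 'br-ex.12'] -- no way to pick just one MTU
```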

Also, when I'm doing VLAN-based traffic separation for an overlay
setup using single-NIC nodes, I already know both the
L3 path MTU and the desired L2 segment MTU.


I'm currently checking if the MTU selection and advertisement patches
[3-5] are compatible with the VLAN-based traffic separation [1].


Regards

Fredy Neeser
http://blog.systeMathic.ch


On 06.03.2015 18:37, Attila Fazekas wrote:


Can 

[openstack-dev] [GBP] PTL elections

2015-03-11 Thread Bhandaru, Malini K
Hello OpenStackers!



To meet the requirement of an officially elected PTL, we're running elections 
for the Group Based Policy (GBP) PTL for the Kilo and Liberty cycles. The schedule and 
policies are fully aligned with the official OpenStack PTL elections.



You can find more information on the official elections wiki page [0] and the 
equivalent page for the GBP elections [1], plus some more info in the past 
official nominations-opening email [2].



Timeline:



Until 05:59 UTC March 17, 2015: Open candidacy for PTL positions
March 17, 2015 - 1300 UTC March 24, 2015: PTL elections



To announce your candidacy please start a new openstack-dev at 
lists.openstack.org mailing list thread with the following subject:

[GBP] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy/PTL_Elections_Kilo_Liberty



Thank you.



Sincerely yours,



Malini Bhandaru
Architect and Engineering Manager,
Open source Technology Center,
Intel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend rally verfiy to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-11 Thread Aleksandr Maretskiy
The idea is great, but IMHO we can move all project-specific code out of
rally, so:

  * rally plugin should be a part of project (for example, located in
functional tests directory)
  * use {project url} instead of {project name} in rally verify command,
example:

$ rally verify https://github.com/openstack/nova start
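
One way to keep such a command friendly despite long URLs would be to derive a 
short project name from the URL internally, so end users could still think in 
terms of "nova". An illustrative sketch, not actual Rally code:

```python
from urllib.parse import urlparse

def project_name(repo_url: str) -> str:
    """Derive a short project name from a git repo URL."""
    path = urlparse(repo_url).path.rstrip("/")
    name = path.rsplit("/", 1)[-1]
    # Strip a trailing .git suffix if present
    return name[:-4] if name.endswith(".git") else name

print(project_name("https://github.com/openstack/nova"))      # nova
print(project_name("https://github.com/openstack/nova.git"))  # nova
```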


On Tue, Mar 10, 2015 at 6:01 PM, Timur Nurlygayanov 
tnurlygaya...@mirantis.com wrote:

 Hi,

 I like this idea. We use Rally for OpenStack cloud verification at scale,
 and it is a real issue how to run all the functional tests from each
 project with one script. If Rally can do this, I will use Rally to run
 these tests.

 Thank you!

 On Mon, Mar 9, 2015 at 6:04 PM, Chris Dent chd...@redhat.com wrote:

 On Mon, 9 Mar 2015, Davanum Srinivas wrote:

  2. Is there a test project with Gabbi based tests that you know of?


 In addition to the ceilometer tests that Boris pointed out gnocchi
 is using it as well:

https://github.com/stackforge/gnocchi/tree/master/gnocchi/tests/gabbi

  3. What changes if any are needed in Gabbi to make this happen?


 I was unable to tell from the original what this is and how gabbi
 is involved but the above link ought to be able to show you how
 gabbi can be used. There's also the docs (which could do with some
 improvement, so suggestions or pull requests welcome):

http://gabbi.readthedocs.org/en/latest/
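
 (For a flavor of gabbi: a minimal YAML test file in the style of those docs.
 Names, paths and checks below are illustrative, not taken from any project.)

```yaml
tests:
  - name: root returns home document
    GET: /
    status: 200

  - name: create a resource
    POST: /resources
    request_headers:
      content-type: application/json
    data:
      name: example
    status: 201
```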

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 




 --

 Timur,
 Senior QA Engineer
 OpenStack Projects
 Mirantis Inc





Re: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes: Avoiding the MTU pitfalls

2015-03-11 Thread Fredy Neeser

OK, I looked at the devstack patch

   [6] Configure mtu for ovs with the common protocols

but no -- it doesn't do the job for the VLAN-based separation
of native and encapsulated traffic, which I'm using in [1] for
a clean (correct MTUs ...) VXLAN setup with single-NIC compute nodes.

As shown in Figure 2 of [1], I'm using VLANs 1 and 12 for native
and encapsulated traffic, respectively.  I needed to manually
create br-ex ports br-ex.1 (VLAN 1) and br-ex.12 (VLAN 12) and
configure their MTUs.  Moreover, I needed a small VLAN awareness
patch to ensure that the Neutron router gateway port qg-XXX uses VLAN 1.


Consider the example below:

Example

# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
...
6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
7: br-ex.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
       valid_lft forever preferred_lft forever
8: br-ex.12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1554 qdisc noqueue state UNKNOWN group default
    link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.12
       valid_lft forever preferred_lft forever


# ovs-vsctl show
c0618b20-1eeb-486c-88bd-fb96988dbf96
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port vxlan-c0a80115
            Interface vxlan-c0a80115
                type: vxlan
                options: {df_default=true, in_key=flow, local_ip=192.168.1.14, out_key=flow, remote_ip=192.168.1.21}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex.12
            tag: 12
            Interface br-ex.12
                type: internal
        Port br-ex.1
            tag: 1
            Interface br-ex.1
                type: internal
        Port eth0
            tag: 1
            trunks: [1, 12]
            Interface eth0
        Port qg-e046ec4e-e3
            tag: 1
            Interface qg-e046ec4e-e3
                type: internal
        Port br-ex
            Interface br-ex
                type: internal

/Example


My home LAN (external network) is enabled for Jumbo frames, as can be
seen from eth0's MTU of 1554, so my path MTU for VXLAN is 1554 bytes.

VLAN 12 supports encapsulated traffic with an MTU of 1554 bytes.

This allows my VMs to use the standard MTU of 1500 regardless
of whether they are on different compute nodes (so they communicate
via VXLAN) or on the same compute node, i.e., the effective
L2 segment MTU is 1500 bytes.  Because this is the default,
I don't need to change guest MTUs at all.

For bridge br-ex, I configured two internal ports br-ex.{1,12}
as shown in the table below:

  br-ex Port       VLAN   MTU    Remarks
  ----------------------------------------------------------
  br-ex.1             1   1500
  br-ex.12           12   1554
  br-ex               -      -   Unused
  qg-e046ec4e-e3      1          VLAN awareness patch (cf. [1])


All native traffic (including routed traffic to/from a Neutron router
and traffic generated by the Network Node itself) uses VLAN 1 on
my LAN, with an MTU of 1500 bytes.

For my small VXLAN test setup, I didn't need to assign different IPs to
br-ex.1 and br-ex.12, both are assigned 192.168.1.14/24.

So why doesn't [6] do the right thing? --

Well, obviously [6] does not add the VLAN awareness that I need
for the Neutron qg-XXX gateway ports.
Moreover, [6] tries to auto-configure the L2 segment MTU based on
determining the MTU of an interface associated with
$TUNNEL_ENDPOINT_IP, which is 192.168.1.14 in my case.

It does this essentially by querying

  # ip -o address | awk '/192.168.1.14/ {print $2}'

However, in my case, this would return *two* interfaces:
  br-ex.1
  br-ex.12
so the patch [6] wouldn't know which interface's MTU it should take.


I'm currently checking if the MTU selection and advertisement patches
[3-5] are compatible with the VLAN-based traffic separation [1].


Regards

Fredy Neeser
http://blog.systeMathic.ch


On 06.03.2015 18:37, Attila Fazekas wrote:


Can you check if this patch does the right thing [6]:

[6] https://review.openstack.org/#/c/112523/6

- Original Message -

From: Fredy Neeser fredy.nee...@solnet.ch
To: openstack-dev@lists.openstack.org
Sent: Friday, March 6, 2015 6:01:08 PM
Subject: [openstack-dev] [neutron] VXLAN with single-NIC compute nodes:  
Avoiding the MTU pitfalls

Hello world

I recently 

Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-11 Thread Steven Hardy
On Wed, Mar 11, 2015 at 09:01:04AM +, ELISHA, Moshe (Moshe) wrote:
Hey,
 
 
 
Can someone please share the current status of the Autoscaling signals to
allow parameter passing for UserData blueprint -
 https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.

This is quite old, and subsequent discussions have happened which indicate
a slightly different approach, e.g this thread here where I discuss
approaches to signalling an AutoScalingGroup to remove a specific group
member.  As Angus has noted, ResourceGroup already allows this via a
different interface.

http://lists.openstack.org/pipermail/openstack-dev/2014-December/053447.html

We have a very concrete use case that requires passing parameters on scale
out.
 
What is the best way to revive this blueprint?

Probably the first thing is to provide a more detailed description of your
use-case.

I'll try to revive the AutoScalingGroup signal patch mentioned in the
thread above this week, it's been around for a while and is probably needed
for any interface where we pass data in to influence AutoScalingGroup
adjustment behaviour asynchronously (e.g. not via the template definition).

https://review.openstack.org/#/c/143496/

Steve



[openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-11 Thread Alan Pevec
Hi,

the next Icehouse stable point release, 2014.1.4, has been slipping for the
last few weeks due to various gate issues; see the 'Recently closed' section in
https://etherpad.openstack.org/p/stable-tracker for details.
The branch looks good enough now to push the release tomorrow (Thursdays
are traditional release days), and I've put freeze -2s on the open
reviews.
I'm sorry about the short freeze period, but the branch was effectively
frozen for the last two weeks due to gate issues, and further delay doesn't
make sense.
Attached is the output from the stable_freeze script for thawing after
tags are pushed.

At the same time I'd like to propose following freeze exceptions for
the review by stable-maint-core:

* https://review.openstack.org/144714 - Eventlet green threads not
released back to pool
  Justification: while not OSSA fix, it does have SecurityImpact tag

* https://review.openstack.org/163035 - [OSSA 2015-005] Websocket
Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259)
  Justification: pending merge on master and juno


Cheers,
Alan




Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-11 Thread Jay Lau
2015-03-10 23:21 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi Adrian,

 On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Magnum Team,

 In the following review, we have the start of a discussion about how to
 tackle bay status:

 https://review.openstack.org/159546

 I think a key issue here is that we are not subscribing to an event feed
 from Heat to tell us about each state transition, so we have a low degree
 of confidence that our state will match the actual state of the stack in
 real-time. At best, we have an eventually consistent state for Bay
 following a bay creation.

 Here are some options for us to consider to solve this:

 1) Propose enhancements to Heat (or learn about existing features) to
 emit a set of notifications upon state changes to stack resources so the
 state can be mirrored in the Bay resource.


 A drawback of this option is that it increases the difficulty of
 trouble-shooting. In my experience of using Heat (SoftwareDeployments in
 particular), Ironic and Trove, one of the most frequent errors I
 encountered is that the provisioning resources stayed in the deploying state
 (never went to completed). The reason is that they were waiting for a callback
 signal from the provisioning resource to indicate its completion, but the
 callback signal was blocked for various reasons (e.g. incorrect firewall
 rules, incorrect configs, etc.). Trouble-shooting such problems is
 generally harder.

I think that the Heat convergence work is addressing the issues behind your
concern: https://wiki.openstack.org/wiki/Heat/Blueprints/Convergence




 2) Spawn a task to poll the Heat stack resource for state changes, and
 express them in the Bay status, and allow that task to exit once the stack
 reaches its terminal (completed) state.

 3) Don’t store any state in the Bay object, and simply query the heat
 stack for status as needed.
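
 To make option 2 concrete, the polling task could look roughly like the
 sketch below (my own illustration, not Magnum code; in practice get_status
 would be backed by something like python-heatclient's
 stacks.get(stack_id).stack_status):

```python
import time

# Heat stack states the polling task treats as terminal
TERMINAL_STATES = {"CREATE_COMPLETE", "CREATE_FAILED",
                   "UPDATE_COMPLETE", "UPDATE_FAILED",
                   "DELETE_COMPLETE", "DELETE_FAILED"}

def poll_stack_status(get_status, on_change, interval=5.0, timeout=600.0):
    """Poll a stack's status until it reaches a terminal state.

    get_status -- callable returning the current stack_status string
    on_change  -- called with each newly observed status, e.g. to
                  mirror it into the Bay resource
    """
    last = None
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != last:
            on_change(status)
            last = status
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError("stack did not reach a terminal state in time")
```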


 Are each of these options viable? Are there other options to consider?
 What are the pro/con arguments for each?

 Thanks,

 Adrian










-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-11 Thread Angus Salkeld
On Wed, Mar 11, 2015 at 7:01 PM, ELISHA, Moshe (Moshe) 
moshe.eli...@alcatel-lucent.com wrote:

  Hey,



 Can someone please share the current status of the “Autoscaling signals to
 allow parameter passing for UserData” blueprint -
 https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.



 We have a very concrete use case that require passing parameters on scale
 out.

 What is the best way to revive this blueprint?


Hi

You can remove a particular instance from a resource group by doing a stack
update and adding the instance to be removed to the removal_policies list.

See the section removal_policies here:
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup
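
A minimal template sketch of that mechanism (resource names and server
properties below are illustrative):

```yaml
heat_template_version: 2014-10-16

resources:
  my_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      # On the next stack update that scales the group down,
      # member '0' is the one removed.
      removal_policies:
        - resource_list: ['0']
      resource_def:
        type: OS::Nova::Server
        properties:
          image: my-image    # illustrative
          flavor: m1.small   # illustrative
```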

Let us know if this could do the job for you.

Regards
Angus




 Thanks.



 Moshe Elisha.





[openstack-dev] [nova][heat] Autoscaling parameters blueprint

2015-03-11 Thread ELISHA, Moshe (Moshe)
Hey,

Can someone please share the current status of the "Autoscaling signals to 
allow parameter passing for UserData" blueprint -  
https://blueprints.launchpad.net/heat/+spec/autoscaling-parameters.

We have a very concrete use case that requires passing parameters on scale out.
What is the best way to revive this blueprint?

Thanks.

Moshe Elisha.


Re: [openstack-dev] [nova] Kilo FeatureFreeze is March 19th, FeatureProposalFreeze has happened

2015-03-11 Thread Gary Kotton
Hi,
Not 100% sure that I understand. This is for the BPs and specs that were
approved for Kilo. Bug fixes, if I understand correctly, are going to be reviewed
until the release of Kilo.
Is launchpad not a sufficient source for highlighting bugs?
Thanks
Gary

On 3/11/15, 2:13 PM, John Garbutt j...@johngarbutt.com wrote:

Hi,

Just a quick update on where we are at with the release:
https://wiki.openstack.org/wiki/Kilo_Release_Schedule

Please help review all the code we want to merge before FeatureFreeze:
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
https://launchpad.net/nova/+milestone/kilo-3

Please note, 19th March is:
kilo-3 release, General FeatureFreeze, StringFreeze, DepFreeze
(that includes all high priority items)

Generally patches that don't look likely to merge by 19th March are
likely to get deferred around 17th March, to make sure we get kilo-3
out the door.

As ever, there may be exceptions, if we really need them, but they
will have to be reviewed by ttx for their impact on the overall
integrated release of kilo. More details will be shared nearer the
time.

A big ask at this time is to highlight any release-critical bugs that
we need to fix before we can release kilo (and that involves testing
things). We are likely to use the kilo-rc-potential tag to track those
bugs; more details on that soon.

Any questions, please do ask (here or on IRC).

Thanks,
johnthetubaguy

PS
Just a reminder you are now free to submit specs for liberty. Specs
where there is no clear agreement will be the best candidates for a
discussion slot at the summit. Again more on that later.




