[openstack-dev] stable/juno nova is blocked on bug 1445917

2015-04-19 Thread Matt Riedemann
Something merged around 4/17 which is now wedging the gate for at least 
nova changes in the ironic sideways job:


https://bugs.launchpad.net/nova/+bug/1445917

There has been some grenade refactoring going on lately so I'm not sure 
if something could be related there, but I'm also suspicious of this 
backport to nova on stable/juno:


https://review.openstack.org/#/c/173226/

Otherwise I don't see anything being merged in neutron or ironic on 
stable/juno around 4/17 to cause this.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][releases] upcoming library releases to unfreeze requirements in master

2015-04-21 Thread Matt Riedemann



On 4/21/2015 2:44 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-04-21 13:11:09 -0400:

I'm working on releasing a *bunch* of libraries, including clients, from
their master branches so we can thaw the requirements list for the
liberty cycle. As with any big operation, this may be disruptive. I
apologize in advance if it is, but we cannot thaw the requirements
without making the releases so we need them all.

Here's the full list, in the form of the release script I am running,
in case you start seeing issues and want to check if you were
affected:


OK, this is all done. I have verified that all of the libraries are on
PyPI and I have sent the release notes (sometimes several copies, at no
extra charge to you -- seriously, sorry about the noise).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



And the gate is wedged. :)

https://bugs.launchpad.net/openstack-gate/+bug/1446847

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Policy rules are killed by the context admin check

2015-04-22 Thread Matt Riedemann



On 4/22/2015 8:32 AM, Sylvain Bauza wrote:

Hi,

While discussing a specific bug [1], I just discovered that the admin
context check which was done at the DB level has been moved to the API
level as part of the api-policy-v3 blueprint [2].

That behaviour still leads to a bug: if the operator changes an
endpoint's policy to allow end users, requests are still denied, because
the hard-coded check forbids any non-admin user from calling the methods
(even if authorize() grants the request).

I consequently opened a bug [3] for this, but I'm also concerned about
the backportability of that and why it shouldn't be fixed in v2.0 too.

Removing the check sounds like an acceptable change, as it fixes a bug
without changing the expected behaviour [4]. The impact of the change
also sounds minimal, with a very precise scope (i.e. letting the policy
rules work as expected) [5].

Folks, thoughts ?

-Sylvain

[1] https://bugs.launchpad.net/nova/+bug/1447084
[2]
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/v3-api-policy,n,z

[3] https://bugs.launchpad.net/nova/+bug/1447164
[4]
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
Fixing a bug so that a request which resulted in an error response
before is now successful
[5] https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't disagree; see bug 1168488 from way back in grizzly.

The only thing is that we'd have to make sure the default rule is admin 
for any v2 extensions which enforce an admin context today.
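
For anyone following along, here is a minimal sketch of the pattern under 
discussion (names are illustrative, not the exact nova code): the 
hard-coded check wins no matter what the operator puts in policy.json.

    from nova import exception
    from nova.api.openstack import extensions

    authorize = extensions.extension_authorizer('compute', 'some_extension')

    def index(self, req):
        context = req.environ['nova.context']
        authorize(context)        # configurable via policy.json
        if not context.is_admin:  # hard-coded; defeats any policy change
            raise exception.AdminRequired()
        ...

Dropping the is_admin check and defaulting the rule to admin in 
policy.json keeps the same out-of-the-box behaviour while letting 
operators open the endpoint up.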


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable/juno nova is blocked on bug 1445917

2015-04-19 Thread Matt Riedemann



On 4/19/2015 8:20 AM, Matt Riedemann wrote:

Something merged around 4/17 which is now wedging the gate for at least
nova changes in the ironic sideways job:

https://bugs.launchpad.net/nova/+bug/1445917

There has been some grenade refactoring going on lately so I'm not sure
if something could be related there, but I'm also suspicious of this
backport to nova on stable/juno:

https://review.openstack.org/#/c/173226/

Otherwise I don't see anything being merged in neutron or ironic on
stable/juno around 4/17 to cause this.



Found the breaking Tempest change from 4/17, here is the revert:

https://review.openstack.org/#/c/175219/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reviewers please watch the check-tempest-dsvm-cells job now

2015-04-24 Thread Matt Riedemann



On 4/21/2015 2:00 PM, Andrew Laski wrote:

It's been a long road, but due to the hard work of bauzas and melwitt the
cells Tempest check job should now be green for changes that don't break
cells.  The job has been red for a long time, so it's likely that people
don't think about it much.  I would ask that, until we have the
confidence to make it voting, you take notice when it's red and
investigate or bring it to the attention of one of us in #openstack-nova.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I opened a couple of new bugs today since I had a cells job failure in a 
change that wasn't related to cells (it was just adding some debug logging 
somewhere else).


1. https://bugs.launchpad.net/nova/+bug/1448316

That looks like a legit race failure in the cells job only.

2. https://bugs.launchpad.net/nova/+bug/1448302

This is more cosmetic than anything; it doesn't appear to be related to 
anything functionally breaking.  We should get the trace cleaned up, 
though, since it makes debugging the cells job failures harder.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [docs] Scheduler filters documentation

2015-04-29 Thread Matt Riedemann



On 4/29/2015 2:08 PM, Sylvain Bauza wrote:

A good comment was made on
https://review.openstack.org/#/c/177824/1 pointing out that the reference
docs [1] duplicate the Nova devref section [2].

Since I also think it is error-prone to keep two separate sets of
documentation about how scheduler filters work, I would like to discuss
how we could make sure both stay in sync.
Since developers are responsible for documenting how to use the filters
they write, my preference is to keep updating the Nova devref page each
time a filter's behaviour changes or a new filter is created, and to make
sure the reference docs point to that devref page, using an iframe or
whatever else.

Of course, each time a behavioural change is made, it's also the
developer's duty to add a DocImpact tag so that docs people are aware of
the change.

Thoughts ?
-Sylvain

[1]
http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html#aggregate-instanceextraspecsfilter

[2]
http://docs.openstack.org/developer/nova/devref/filter_scheduler.html#filtering





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'd prefer to see the scheduler filter docs maintained in the nova 
devref, where they are close to the source, versioned, and reviewed by 
the nova team when scheduler filters change or new filters are added.


I doubt many nova developers/reviewers get over to reviewing changes to 
the config reference docs.


Then, if possible, the config reference docs repo could refer to the nova 
devref docs as the primary source of information on the filters.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] trimming down Tempest smoke tag

2015-04-30 Thread Matt Riedemann



On 4/28/2015 4:19 PM, David Kranz wrote:

On 04/28/2015 06:38 AM, Sean Dague wrote:

The Tempest Smoke tag was originally introduced to provide a quick view
of your OpenStack environment to ensure that a few basic things were
working. It was intended to be fast.

However, during Icehouse the smoke tag was repurposed as a way to keep
neutron from backsliding (so it's massively overloaded with network tests).
It currently runs at about 15 minutes on neutron jobs. This is why grenade
neutron takes *so* long, because we run tempest smoke twice.

The smoke tag needs a diet. I believe our working definition should be
something as follows:

  - Total run time should be fast (<= 5 minutes)
  - No negative tests
  - No admin tests
  - No tests that test optional extensions
  - No tests that test advanced services (like lbaas, vpnaas)
  - No proxy service tests

The criteria for a good set of tests is CRUD operations on basic
services. For instance, with compute we should build a few servers and
ensure we can shut them down. For neutron we should do some basic
network / port plugging.

That makes sense. On IRC, Sean and I agreed that this would include
creation of users, projects, etc. So some of the keystone smoke tests
will be left in even though admin. IMO, it is debatable whether admin is
relevant as part of the criteria for smoke.


We also previously had the 'smoke' tag include all of the scenario
tests, which was fine when we had 6 scenario tests. However, as those
have grown, I think that should be trimmed back to a few basic
end-to-end scenarios.

The results of this are -
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:smoke,n,z


The impacts on our upstream gate will mean that grenade jobs will speed
up dramatically (20 minutes faster on grenade neutron).

There is one edge condition which exists, which is the
check-tempest-dsvm-neutron-icehouse job. Neutron couldn't pass either a
full or parallel tempest run in icehouse (it's far too racy). So that's
currently running the smoke-serial tag. This would end up reducing the
number of tests run on that job. However, based on the number of
rechecks I've had to run in this series, that job is currently at about
a 30% fail rate - http://goo.gl/N2w7qc - which means some test reduction
is probably in order anyway, as it's mostly just preventing other people
from landing unrelated patches.

This was something we were originally planning on doing during the QA
Sprint but ran out of time. It looks like we'll plan to land this right
after Tempest 4 is cut this week, so that people that really want the
old behavior can stay on the Tempest 4 release, but master is moving
forward.

I think that once we trim down we can decide to add specific tests back
later. I expect smoke to be a bit more fluid over time, so it's not a
tag that anyone should count on a test entering and staying in forever.

Agreed. The criteria and purpose should stay the same but individual
tests may be added or removed from smoke.
Thanks for doing this.

  -David


-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah why would you not include admin tests?  Like listing services and 
hosts in nova?
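
(For context on the mechanics: the tag is just a per-test attribute in 
tempest, so adding or removing a test from smoke is a one-line change. A 
rough sketch, with class and helper names approximate:)

    from tempest.api.compute import base
    from tempest import test

    class ServersSmokeTest(base.BaseV2ComputeTest):

        @test.attr(type='smoke')  # removing this drops the test from smoke runs
        def test_create_delete_server(self):
            server = self.create_test_server(wait_until='ACTIVE')
            self.servers_client.delete_server(server['id'])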


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

2015-04-30 Thread Matt Riedemann



On 4/30/2015 6:30 AM, John Garbutt wrote:

Hi,

I propose we add Melanie to nova-core.

She has been consistently doing great quality code reviews[1],
alongside a wide array of other really valuable contributions to the
Nova project.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1] https://review.openstack.org/#/dashboard/4690

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mellanox request for permission for Nova CI

2015-05-04 Thread Matt Riedemann



On 5/3/2015 10:32 AM, Lenny Verkhovsky wrote:

Hi Dan and the team,

Here you can see full logs and tempest.conf  
http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_PORT_20150503_1854/
(suspend-resume test is skipped and we are checking this issue)

Besides running the Tempest API tests on a Mellanox-flavor VM, we are also running 
tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps
with port_vnic_type = direct configured.
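
(For reference, a minimal sketch of the relevant tempest.conf bit, 
assuming the stock option name:)

    [network]
    # create ports with vnic_type=direct so the advanced server-ops
    # scenario exercises the SR-IOV path
    port_vnic_type = direct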

We will add more tests in the future.

Thanks in advance.
Lenny Verkhovsky
SW Engineer,  Mellanox Technologies
www.mellanox.com

Office:+972 74 712 9244
Mobile:  +972 54 554 0233
Fax:+972 72 257 9400

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com]
Sent: Friday, April 24, 2015 7:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI

Hi Lenny,


Is there anything missing for us to start 'non-voting' Nova CI ?


Sorry for the slow response from the team.

The results that you've posted look good to me. A quick scan of the tempest 
results doesn't seem to indicate any new tests that are specifically testing 
SRIOV things. I assume this is mostly implied because of the flavor you're 
configuring for testing, right?

Could you also persist the tempest.conf just so it's easy to see?

Regardless of the above, I think that the results look clean enough to start 
commenting on patches, IMHO. So, count me as +1.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1 for non-voting on nova changes from me.  Looks like it's running 
tests from the tempest repo and not a private third-party repo, which is 
good.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] What happened with the Hyper-V generation 2 VMs spec?

2015-05-04 Thread Matt Riedemann

This spec was never approved [1] but the code was merged in Kilo [2].

The blueprint is marked complete in launchpad [3] and it's referenced as 
a new feature in the hyper-v driver in the kilo release notes [4], but 
there is no spec published for consumers that details the feature [5]. 
Also, the spec mentioned doc impacts which I have to assume weren't 
addressed, and there were abandoned patches [6] tied to the blueprint, so 
is this half-baked or not?  Are we missing information in the kilo 
release notes?


How do we retroactively approve a spec so it's published to 
specs.openstack.org for posterity when obviously our review process 
broke down?


[1] https://review.openstack.org/#/c/103945/
[2] 
https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z

[3] https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms
[4] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Hyper-V
[5] http://specs.openstack.org/openstack/nova-specs/specs/kilo/
[6] 
https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What happened with the Hyper-V generation 2 VMs spec?

2015-05-04 Thread Matt Riedemann



On 5/4/2015 11:12 AM, Alessandro Pilotti wrote:

Hi Matt,

We originally proposed a Juno spec for this blueprint, but it got postponed to 
Kilo, where it was approved without a spec together with other hypervisor-specific 
blueprints (the so-called “trivial” case).

The BP itself is completed and marked accordingly on launchpad.

Patches referenced in the BP:

https://review.openstack.org/#/c/103945/
Abandoned: Juno specs.

https://review.openstack.org/#/c/107177/
Merged

https://review.openstack.org/#/c/107185/
Merged

https://review.openstack.org/#/c/137429/
Abandoned: According to the previous discussions on IRC, this commit is no 
longer necessary.

https://review.openstack.org/#/c/145268/
Abandoned, due to sqlalchemy model limitations




On 04 May 2015, at 18:41, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

This spec was never approved [1] but the code was merged in Kilo [2].

The blueprint is marked complete in launchpad [3] and it's referenced as a new 
feature in the hyper-v driver in the kilo release notes [4], but there is no 
spec published for consumers that details the feature [5]. Also, the spec 
mentioned doc impacts which I have to assume weren't addressed, and there were 
abandoned patches [6] tied to the blueprint, so is this half-baked or not?  Are 
we missing information in the kilo release notes?

How do we retroactively approve a spec so it's published to specs.openstack.org 
for posterity when obviously our review process broke down?

[1] https://review.openstack.org/#/c/103945/
[2] 
https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z
[3] https://blueprints.launchpad.net/nova/+spec/hyper-v-generation-2-vms
[4] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Hyper-V
[5] http://specs.openstack.org/openstack/nova-specs/specs/kilo/
[6] 
https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/hyper-v-generation-2-vms,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, but this doesn't answer all of the questions.

1. Are there doc impacts from the spec that need to be in the kilo 
release notes?  For example, the spec says:


The Nova driver documentation should include an entry about this topic
including when to use and when not to use generation 2 VMs. A note on 
the relevant Glance image property should be added as well.


I don't see any of that in the kilo release notes.
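
Presumably the missing doc note would boil down to something like the 
following (the hw_machine_type property name here is my assumption from 
the driver patches; treat it as unverified):

    # Mark a Glance image as requiring a Hyper-V generation 2 VM:
    glance image-update --property hw_machine_type=hyperv-gen2 <image-id>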

2. If we have a feature merged, we should have something in 
specs.openstack.org for operators to go back to reference rather than 
dig through ugly launchpad whiteboards or incomplete gerrit reviews 
where what was merged might differ from what was originally proposed in 
the spec in Juno.


3. Is the Hyper-V CI now testing with gen-2 images?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Matt Riedemann



On 5/6/2015 7:02 AM, Chen CH Ji wrote:

Hi,
In order to work on [1], nova needs to know what kinds of exceptions are
raised when using cinderclient so that it can handle them the way [2]
does. That way we don't need to distinguish error cases based on string
comparison; it's more accurate and less error-prone.
Is anyone working on this, or are there other methods I can use to catch
cinder-specific exceptions in nova? Thanks.


[1] https://bugs.launchpad.net/nova/+bug/1450658
[2]
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is there anything preventing us from adding a more specific exception to 
cinderclient?  Then, once that's in and released, we can pin the 
minimum version of cinderclient in global-requirements so nova can 
safely use it.
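
To make the ask concrete, a sketch of what the nova side could look like 
once such an exception exists (VolumeBackendAPIException is a 
hypothetical class name):

    from cinderclient import exceptions as cinder_exception
    from nova.volume import cinder

    try:
        cinder.cinderclient(context).volumes.attach(
            volume_id, instance_uuid, mountpoint)
    except cinder_exception.VolumeBackendAPIException:
        # hypothetical new exception class: react to the specific failure
        # instead of string-matching the generic ClientException message
        ...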


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Question on Cinder client exception handling

2015-05-07 Thread Matt Riedemann



On 5/7/2015 3:21 PM, Chen CH Ji wrote:

No, I only want to confirm whether cinder folks are already doing this, or
whether there are existing tricks that can be used, before submitting the
change ... thanks

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


From: Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date: 05/07/2015 10:12 PM
Subject: Re: [openstack-dev] [cinder][nova] Question on Cinder client
exception handling







On 5/6/2015 7:02 AM, Chen CH Ji wrote:
  Hi,
  In order to work on [1], nova needs to know what kinds of
  exceptions are raised when using cinderclient so that it can handle them
  the way [2] does. That way we don't need to distinguish error
  cases based on string comparison; it's more accurate and less error-prone.
  Is anyone working on this, or are there other methods I can use to
  catch cinder-specific exceptions in nova? Thanks.
 
 
  [1] https://bugs.launchpad.net/nova/+bug/1450658
  [2]
 
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Is there anything preventing us from adding a more specific exception to
cinderclient and then once that's in and released, we can pin the
minimum version of cinderclient in global-requirements so nova can
safely use it?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I added some notes to the bug after looking into the cinder code.  If 
you want something more specific than the 500 you're getting back from 
the cinder API today, this would actually be a series of changes: cinder 
to raise a more specific error, cinderclient to translate that to a 
specific exception, and then nova to handle that.


I'd probably just go with a change to nova to handle the 500 from cinder 
and not completely puke and orphan the instance.
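
i.e. roughly this shape on the nova side (a sketch; ClientException is 
the generic cinderclient error and carries the HTTP code, and the 
rollback steps here are illustrative):

    import six

    from cinderclient import exceptions as cinder_exception
    from nova import exception

    try:
        self.volume_api.attach(context, volume_id, instance.uuid, mountpoint)
    except cinder_exception.ClientException as e:
        if getattr(e, 'code', None) == 500:
            # roll back instead of orphaning the instance
            self.volume_api.unreserve_volume(context, volume_id)
            raise exception.InvalidVolume(reason=six.text_type(e))
        raise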


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] libvirt.remove_unused_kernels config option - default to true now?

2015-05-07 Thread Matt Riedemann

I came across this today:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagecache.py#L50

That was added back in grizzly:

https://review.openstack.org/#/c/22777/

There's a note in the code that we should default it to true at some 
point.  Is 2+ years long enough for this to change to true?


This change predates my involvement in the project so ML it is.
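
For anyone who wants the proposed behaviour today, it's a one-line config 
change (a sketch; the option lives in the libvirt group, registered by 
the imagecache module linked above):

    [libvirt]
    # clean unused kernel/ramdisk images out of the image cache;
    # defaults to False today, proposed to become True
    remove_unused_kernels = True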

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC

2015-05-11 Thread Matt Riedemann



On 5/11/2015 11:02 AM, Kevin L. Mitchell wrote:

On Mon, 2015-05-11 at 15:56 +, Jeremy Stanley wrote:

On 2015-05-11 01:45:30 + (+), Jeremy Stanley wrote:

For what it's worth, I tried changing Gerrit's canonicalweburl
setting to not include a trailing slash, but it doesn't help. I have
a feeling this is not a misconfiguration, but something intrinsic to
the OpenID implementation in Gerrit which has changed since 2.8.


I've tested https://review.openstack.org/181949 on review-dev, and
it will solve those 404 hyperlinks once it merges. We still need to
track down what's causing the OpenID callback URL to end up with a
second trailing slash however.


As a point of information, I logged in to review a python-novaclient
review, and now I find I can't load any nova reviews at all; I get a
page with the top bar, but below that is a Toggle CI button and
nothing else.  Reloading the page has no effect, and the same behavior
applies to all nova reviews I've tried to load.  Restarting my browser
did not have any effect.



I was having this problem until I went to Preferences and switched to 
the 'new' style.


However, I get 404s every time I try to view a -Workflow change.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] python-novaclient 2.25.0

2015-05-12 Thread Matt Riedemann



On 5/12/2015 10:35 AM, Matt Riedemann wrote:

https://launchpad.net/python-novaclient/+milestone/2.25.0

mriedem@ubuntu:~/git/python-novaclient$ git log --no-merges --oneline
2.24.1..2.25.0
0e35f2a Drop use of 'oslo' namespace package
667f1af Reuse uuidutils frim oslo_utils
4a7cf96 Sync latest code from oslo-incubator
d03a85a Updated from global requirements
02c04c5 Make _discover_extensions public
99fcc69 Updated from global requirements
bf6fbdb nova client now support limits subcommand
95421a3 Don't use SessionClient for version-list API
86ec0c6 Add min/max microversions to version-list cmd
61ef35f Deprecate v1.1 and remove v3
4f9e65c Don't lookup service url when bypass_url is given
098116d Revert nova flavor-show command is inconsistent
af7c850 Update README to work with release tools
420dc28 refactor functional test base class to no inherit from tempest_lib
2761606 Report better error message --ephemeral poor usage
a63aa51 Fix displaying of an unavailable flavor of a showing instance
19d4d35 Handle binary userdata files such as gzip
14cada7 Add --all-tenants option to 'nova delete'




And stable/kilo is busted, as I predicted would happen (in the nova 
channel around 10am, before the release):


https://bugs.launchpad.net/python-openstackclient/+bug/1454397

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] gate wedged by tox >= 2.0

2015-05-14 Thread Matt Riedemann



On 5/14/2015 5:46 AM, Sean Dague wrote:

On 05/14/2015 04:16 AM, Robert Collins wrote:

Tox 2.0 just came out, and it isolates environment variables - which
is good, except if you use them (which we do). So everything is
broken.

https://review.openstack.org/182966

Should fix it until projects have had time to fix up their local
tox.ini's to let through the needed variables.

As an aside, it might be nice to get this specifier from
global-requirements, so that it's managed in the same place as all our
other specifiers.


This will only apply to tempest jobs, and I see lots of tempest jobs
passing without it. Do we have a bug with some failures linked because
of it?

If this is impacting unit tests, that has to be directly fixed there.

-Sean



python-novaclient, neutron and python-manilaclient are being tracked 
against bug https://bugs.launchpad.net/neutron/+bug/1455102.


Heat is being tracked against bug 
https://bugs.launchpad.net/heat/+bug/1455065.
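
For reference, the per-project fix Robert alludes to is a small tox.ini 
change; a sketch (the exact variable list varies per project):

    [testenv]
    # tox >= 2.0 no longer leaks the caller's environment into the venv,
    # so explicitly pass through the variables the test runner uses:
    passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH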


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Matt Riedemann



On 5/14/2015 2:59 PM, Kris G. Lindgren wrote:

How would this impact someone running juno nova-compute on rhel 6 boxes?
Or installing the python2.7 from SCL and running kilo+ code on rhel6?

For [3], couldn't we get the exact same information from /proc/cpuinfo?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The minimum required version of libvirt in the driver is 0.9.11 still
[1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running Libvirt
compute nodes on RHEL < 7.1 or Ubuntu Precise?

Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
running kilo+ nova on RHEL 6.x anyway.

There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335
[2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
[3]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




This would be Liberty, so when you upgrade nova-compute to Liberty you'd 
also need to upgrade the host OS to something that supports libvirt >= 
1.2.2.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Matt Riedemann
The minimum required version of libvirt in the driver is 0.9.11 still 
[1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.


The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running Libvirt 
compute nodes on RHEL < 7.1 or Ubuntu Precise?


Regarding RHEL, I think this is a safe bet because in Kilo nova dropped 
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble 
running kilo+ nova on RHEL 6.x anyway.


There are some workarounds in the code [3] I'd like to see removed by 
bumping the minimum required version.


[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335

[2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
[3] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754
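
The change itself is essentially a one-constant bump plus deleting the
workaround in [3]; simplified, the driver's existing version guard looks
like this:

    # nova/virt/libvirt/driver.py (simplified)
    MIN_LIBVIRT_VERSION = (0, 9, 11)   # the proposal: (1, 2, 2)

    def init_host(self, host):
        if not self._host.has_min_version(MIN_LIBVIRT_VERSION):
            ver = '.'.join(str(v) for v in MIN_LIBVIRT_VERSION)
            raise exception.NovaException(
                _('Nova requires libvirt version %s or greater.') % ver)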


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] what ticket tracking system does the openstack community use?

2015-05-14 Thread Matt Riedemann



On 5/14/2015 2:24 PM, Chen He wrote:

I am new to openstack community. Any reply will be appreciated.

Regards!

Chen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Launchpad. More info here:

https://wiki.openstack.org/wiki/Bugs

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Matt Riedemann



On 5/14/2015 3:35 PM, Matt Riedemann wrote:



On 5/14/2015 2:59 PM, Kris G. Lindgren wrote:

How would this impact someone running juno nova-compute on rhel 6 boxes?
Or installing the python2.7 from SCL and running kilo+ code on rhel6?

For [3], couldn't we get the exact same information from /proc/cpuinfo?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The minimum required version of libvirt in the driver is 0.9.11 still
[1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running Libvirt
compute nodes on RHEL < 7.1 or Ubuntu Precise?

Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
running kilo+ nova on RHEL 6.x anyway.

There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335
[2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
[3]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




This would be Liberty, so when you upgrade nova-compute to Liberty you'd
also need to upgrade the host OS to something that supports libvirt >=
1.2.2.



Here is the patch to see what this would look like:

https://review.openstack.org/#/c/183220/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-15 Thread Matt Riedemann



On 5/15/2015 6:28 AM, Daniel P. Berrange wrote:

On Fri, May 15, 2015 at 11:51:22AM +0100, Daniel P. Berrange wrote:

On Thu, May 14, 2015 at 02:23:25PM -0500, Matt Riedemann wrote:

The minimum required version of libvirt in the driver is 0.9.11 still [1].
We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running Libvirt compute
nodes on RHEL < 7.1 or Ubuntu Precise?


I don't really think so - at the very least Fedora 20 and RHEL 7.0 are still
actively supported platforms by their vendors, which both have older libvirt
versions (1.1.3 and 1.1.1 respectively).

I'm not sure whether the SUSE team considers any of the 12.x versions to still
be actively supported platforms or not, likewise which 13.x versions are under
active support.



Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
running kilo+ nova on RHEL 6.x anyway.


There are add-on repos for RHEL-6 and RHEL-7 that provide newer python
versions (py27, various py3x), so it is not unreasonable for people
to consider sticking with RHEL-6 if that is what their existing deployment
is based on. Certainly new deployments though I'd expect to be RHEL-7 based.


There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.


Sure, it's nice to remove workarounds from a cleanliness POV, but I'm generally
pretty conservative about doing so, because in the majority of cases (while it
looks ugly) it is not really a significant burden on maintainers to keep them
around.

This example is really just that. It certainly looks ugly, but we have the
code there now, it is doing the job for people who have that problem, and it
isn't really having any measurable impact on our ability to maintain the
libvirt code. Removing this code won't lessen our maintenance burden in
any way, but it will unquestionably impact our users by removing support for
the platform they may be currently deployed on.

The reason why we picked 0.9.11 as the current minimum version was that we
needed to be able to switch over to using libvirt-python from PyPI instead
of relying on the version that shipped with the distro. 0.9.11 is the min
version supported by libvirt-python on PyPI. This had significant benefits
for Nova maintenance, as our gate jobs deploy libvirt-python from PyPI
in common with all other python packages we depend on. It also unlocked the
ability to run libvirt with python3 bindings.  There was some small amount
of pain for people running Ubuntu 12.04 LTS, but this was mitigated by the
fact that Canonical provided the Cloud Archive repositories for that LTS
version, which gave users direct access to a new enough libvirt. So in the end
users were not negatively impacted in any serious way - certainly not by
enough to outweigh the benefits Nova maintenance saw.


In this case, I just don't see compelling benefits to Nova libvirt maint
to justify increasing the minimum version to the level you suggest, and
it has a clear negative impact on our users which they will not be able
to easily deal with. They will be left with 3 options, all of which are
unsatisfactory:

  - Upgrade from RHEL-6 to RHEL-7 - a major undertaking for most organizations
  - Upgrade libvirt on RHEL-6 - they essentially take on the support burden
for the hypervisor themselves, losing support from the vendor
  - Re-add the code we removed from Nova - we've given users a maint burden,
to rid ourselves of code that was posing no real maint burden on ourselves.


As a more general point, I think we are lacking clear guidance on our
policies around hypervisor platform support and thus have difficulty
in deciding when it is reasonable for us to drop support for platforms.

I think it is important to distinguish the hypervisor platform from
the openstack platform, because there are different support implications
there for users. With direct python dependencies for openstack, we
are generally pretty aggressive at updating to newer versions of packages
from PyPI.
Foremost is that many python modules do a very bad job at maintaining
API compatibility, so we are essentially forced to upgrade and drop old
version support whether we like it or not. Second, the provisioning tools
we use (devstack, packstack, triple-o, and many vendor tools) all handle
deployment of arbitrary newer python modules without any trouble. OpenStack
itself and the vendors all do quite comprehensive testing on the python
modules we use, so we have confidence that the versions we're depending
on are functioning correctly.

The hypervisor platform is very different. While OpenStack does achieve
some level of testing coverage of the hypervisor platform version used
in the gate, this testing is inconsequential compared to the level

Re: [openstack-dev] [Openstack-operators] [nova] Are we happy with libvirt-python >= 1.2.0 ?

2015-05-15 Thread Matt Riedemann



On 5/15/2015 9:52 AM, Jeremy Stanley wrote:

On 2015-05-15 14:54:37 +0100 (+0100), Daniel P. Berrange wrote:

Hmm, I didn't know it was listed in global-requirements.txt - I only
checked the requirements.txt and test-requirements.txt in Nova itself
which does not list libvirt-python.

Previously test-requirements.txt did have it, but we dropped it, since
the unit tests now exclusively use fakelibvirt.

To answer your question though, if global-requirements.txt is enforcing
that we have libvirt-python >= 1.2.5, then we can drop that particular
workaround.


If it's listed in openstack/requirements:global-requirements.txt but
not in any individual repo's requirements.txt or
test-requirements.txt then it's not actually doing anything. It's
just cruft which someone failed to remove once nova stopped
explicitly referencing it as a test requirement.

Note I've added a script in the requirements repo (tools/cruft.sh)
to find things like this, so that interested parties can use it as a
starting point to research possible cleanup work there.



https://review.openstack.org/#/c/183706/ adds libvirt-python back into 
nova's test-requirements.txt.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Minimum VC version

2015-05-15 Thread Matt Riedemann



On 5/15/2015 4:50 PM, Gary Kotton wrote:

Hi,
We would like to indicate that we do not support versions below 5.1.0 of
the VC. Is anyone aware of people using versions below that with OpenStack?
Patch https://review.openstack.org/#/c/183711/ proposes exiting Nova
compute if a lower version is used.
Thanks
Gary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Cross-posting to the operators mailing list.

Also note the kilo docs mention supporting less than 5.0:

http://docs.openstack.org/kilo/config-reference/content/vmware.html

But 4.x was EOL over a year ago:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039567

Also note that the NSX CI (vmware CI) originally ran with vcenter 5.1 
and is now running 5.5.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Are we happy with libvirt-python >= 1.2.0 ?

2015-05-15 Thread Matt Riedemann



On 5/15/2015 8:54 AM, Daniel P. Berrange wrote:

On Fri, May 15, 2015 at 02:45:06PM +0100, John Garbutt wrote:

On 15 May 2015 at 13:28, Daniel P. Berrange berra...@redhat.com wrote:

On Fri, May 15, 2015 at 11:51:22AM +0100, Daniel P. Berrange wrote:

On Thu, May 14, 2015 at 02:23:25PM -0500, Matt Riedemann wrote:

There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.


Sure, it's nice to remove workarounds from a cleanliness POV, but I'm generally
pretty conservative about doing so, because in the majority of cases (while it
looks ugly) it is not really a significant burden on maintainers to keep them
around.

This example is really just that. It certainly looks ugly, but we have the
code there now, it is doing the job for people who have that problem, and it
isn't really having any measurable impact on our ability to maintain the
libvirt code. Removing this code won't lessen our maintenance burden in
any way, but it will unquestionably impact our users by removing support for
the platform they may be currently deployed on.


BTW, the code you quote here:


http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

is not actually working around a bug in the libvirt hypervisor. It is in fact
a bug in the libvirt-python API binding. As such we don't actually need to
increase the min required libvirt to be able to remove that check. In fact
increasing the min required libvirt is the wrong thing to do, because it is
possible for someone to have the min required libvirt but be accessing it
via an older libvirt-python which still has the bug.

So what's really needed is a dep on libvirt-python >= 1.2.0, not libvirt.

We don't express min required versions for libvirt-python in the
requirements.txt file though, since it is an optional package and we
don't have any mechanism for recording min versions for those AFAIK.


Does this mean we can drop the above [3] code?
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L56


Hmm, I didn't know it was listed in global-requirements.txt - I only
checked the requirements.txt and test-requirements.txt in Nova itself
which does not list libvirt-python.

Previously test-requirements.txt did have it, but we dropped it, since
the unit tests now exclusively use fakelibvirt.

To answer your question though, if global-requirements.txt is enforcing
that we have libvirt-python >= 1.2.5, then we can drop that particular
workaround.

Regards,
Daniel



Right, I plan to add libvirt-python back to nova's test-requirements.txt 
to remove the workaround in host.py.


We originally removed libvirt-python from nova's test-requirements.txt 
because it was mucking up nova unit tests, which at the time did a 
conditional import of libvirt-python: if you had it, the unit tests 
ran against whatever version you got, and if it didn't import, you ran 
against fakelibvirt.  We should be using fakelibvirt everywhere in the 
unit tests now, so we can add libvirt-python back to 
test-requirements.txt as an indication of the minimum required version 
of an optional dependency (required if you're using libvirt).
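
i.e. something like the following entry (the trailing comment is mine; 
the floor matches what global-requirements already lists):

    # test-requirements.txt
    libvirt-python>=1.2.0  # optional: only needed when running against real libvirt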


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] python-novaclient 2.25.0

2015-05-12 Thread Matt Riedemann

https://launchpad.net/python-novaclient/+milestone/2.25.0

mriedem@ubuntu:~/git/python-novaclient$ git log --no-merges --oneline 
2.24.1..2.25.0

0e35f2a Drop use of 'oslo' namespace package
667f1af Reuse uuidutils frim oslo_utils
4a7cf96 Sync latest code from oslo-incubator
d03a85a Updated from global requirements
02c04c5 Make _discover_extensions public
99fcc69 Updated from global requirements
bf6fbdb nova client now support limits subcommand
95421a3 Don't use SessionClient for version-list API
86ec0c6 Add min/max microversions to version-list cmd
61ef35f Deprecate v1.1 and remove v3
4f9e65c Don't lookup service url when bypass_url is given
098116d Revert nova flavor-show command is inconsistent
af7c850 Update README to work with release tools
420dc28 refactor functional test base class to no inherit from tempest_lib
2761606 Report better error message --ephemeral poor usage
a63aa51 Fix displaying of an unavailable flavor of a showing instance
19d4d35 Handle binary userdata files such as gzip
14cada7 Add --all-tenants option to 'nova delete'


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker] Status update

2015-05-17 Thread Matt Riedemann



On 5/16/2015 10:52 PM, Alex Glikson wrote:

If system containers are a viable use case for Nova, and if Magnum is
aiming at both application containers and system containers, would it
make sense to have a new virt driver in nova that would invoke the Magnum
API for container provisioning and life cycle? This would avoid (some of
the) code duplication between Magnum and whatever nova virt driver would
support system containers (such as nova-docker). Such an approach would
be conceptually similar to the nova virt driver invoking the Ironic API,
replacing nova-baremetal (here again, Ironic surfaces various
capabilities which don't make sense in Nova).
We have recently started exploring this direction, and would be glad to
collaborate with folks if this makes sense.

Regards,
Alex


Adrian Otto adrian.o...@rackspace.com wrote on 09/05/2015 07:55:47 PM:

  From: Adrian Otto adrian.o...@rackspace.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 09/05/2015 07:57 PM
  Subject: Re: [openstack-dev] [nova-docker] Status update
 
  John,
 
  Good questions. Remarks in-line from the Magnum perspective.
 
  On May 9, 2015, at 2:51 AM, John Garbutt j...@johngarbutt.com wrote:
 
   On 1 May 2015 at 16:14, Davanum Srinivas dava...@gmail.com wrote:
   Anyone still interested in this work? :)
  
   * there's a stable/kilo branch now (see
   http://git.openstack.org/cgit/stackforge/nova-docker/).
   * CI jobs are running fine against both nova trunk and nova's
   stable/kilo branch.
   * there's an updated nova-spec to get code back into nova tree (see
   https://review.openstack.org/#/c/128753/)
  
   To proxy the discussion from the etherpad onto the ML, we need to work
   out why this lives in nova, given Magnum is the place to do container
   specific things.
 
  To the extent that users want to control Docker containers through
  the Nova API (without elaborate extensions), I think a stable in-
  tree nova-docker driver makes complete sense for that.
 
[...]
 
    Now what's the reason for adding the Docker driver, given Nova is
    considering container specific APIs out of scope, and expecting
    Magnum to own that kind of thing.
 
  I do think nova-docker should find its way into the Nova tree. This
  makes containers more accessible in OpenStack, and appropriate for
  use cases where users want to treat containers like they treat
  virtual machines. On the subject of extending the Nova API to
  accommodate special use cases of containers that are beyond the
  scope of the Nova API, I think we should resist that, and focus
  those container-specific efforts in Magnum. That way, cloud
  operators can choose whether to use Nova or Magnum for their
  container use cases depending on the range of features they desire
  from the API. This approach should also result in less overlap of
efforts.
 
[...]
  To sum up, I strongly support merging in nova-docker, with the
  caveat that it operates within the existing Nova API (with a few minor
  exceptions). For features that require truly container-specific APIs,
  we should land those in Magnum, and keep the Nova API scoped to
  operations that are appropriate for “all instance types”.
 
  Adrian
 
  
   Thanks,
   John
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I was wondering the exact same thing: why not work on a nova virt driver 
that talks to the magnum API?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] can we deprecate the volume CLIs in novaclient?

2015-05-14 Thread Matt Riedemann
This came up while talking about bug 1454369 [1].  This also came up in 
kilo when we found out the volume CLIs in novaclient didn't work and we 
broke the cells devstack exercises job because of it.


python-novaclient uses the cinder API to handle the volume CLI rather 
than going to the nova volume API.  There are issues with this because 
novaclient needs a certain endpoint/service_type setup in the service 
catalog to support the cinder v1/v2 APIs (whatever devstack sets up 
today).  novaclient defaults to volume (v1), and if you disable that in 
cinder then novaclient doesn't work because it's not using volumev2.

So, as anyone might ask, why doesn't novaclient talk to the nova volume 
API to do volume thingies?  The answer is that the nova volume API 
doesn't handle all of the volume thingies, like snapshots and volume types.


So I got to thinking, why the hell are we still supporting volume 
operations via novaclient anyway?  Isn't that cinderclient's job?  Or 
python-openstackclient's job?  Can't we deprecate the volume CLIs in 
novaclient and tell people to use cinderclient instead, since it now has 
version discovery [2], so that problem would be handled for us?


Since we have nova volume APIs, maybe we can't remove the volume CLIs in 
novaclient, but could they be limited to just the operations that the 
nova API supports?  Then we'd make novaclient talk to the nova volume 
APIs rather than the cinder APIs (because the nova API will talk to 
cinderclient, which again has the version discovery done for us).


Or assuming we could deprecate the volume CLIs in novaclient, what would 
the timeline on deprecation be since it's not a server project with a 6 
month release cycle?  I'm assuming we'd still have 6-12 months 
deprecation on a client like this because of all of the tooling 
potentially written around it.


[1] https://bugs.launchpad.net/python-novaclient/+bug/1454369
[2] https://review.openstack.org/#/c/145613/
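
To illustrate the catalog coupling (a sketch, assuming a keystone session
object; 'volume' and 'volumev2' are the service types devstack registers):

    # What novaclient's volume commands effectively do today:
    url = session.get_endpoint(service_type='volume')   # cinder v1 endpoint
    # If a deployment registers only 'volumev2', that lookup fails and
    # every `nova volume-*` command breaks, while `cinder` keeps working
    # because cinderclient can discover both service types.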

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] why is evacuate marked as missing for libvirt?

2015-04-14 Thread Matt Riedemann
This came up in IRC this morning, but the hypervisor support matrix is 
listing evacuate as 'missing' for the libvirt driver:


http://docs.openstack.org/developer/nova/support-matrix.html#operation_evacuate

Does anyone know why that is?  The rebuild method in the compute manager 
just re-uses other virt driver operations so by default it's implemented 
by all drivers.  The only one that overrides rebuild for evacuate is the 
ironic driver.
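
For context, the rebuild path in the compute manager is shaped roughly 
like this (a paraphrase, not the exact source):

    # drivers that don't override rebuild() fall back to the default
    # implementation built from destroy + spawn, so every driver gets
    # evacuate 'for free'; only ironic overrides rebuild() today
    def _do_rebuild(self, context, instance, **kwargs):
        try:
            self.driver.rebuild(context, instance, **kwargs)
        except NotImplementedError:
            self._rebuild_default_impl(context, instance, **kwargs)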


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] reminder to update kilo release notes for UpgradeImpact/DocImpact changes

2015-04-14 Thread Matt Riedemann
 support for --interface option in iscsiadm.
fbf0806 Adds barbican keymgr wrapper
10c510e Use a workarounds option to disable rootwrap
7517108 Support for ext4 as default filesystem for ephemeral disks
609b2df Change default value of multi_instance_display_name_template
b1a21b3 Add missing policy for nova in policy.json
5477faa Fix live migration RPC compatibility with older versions
cff14b3 replace httplib.HTTPSConnection in EC2KeystoneAuth
536e990 VMware: enable a cache prefix configuration parameter
c0ea53c Enforce unique instance uuid in data model
aea0140 VMware: ephemeral disk support
561f8af Libvirt: SMB volume driver
f268be9 GET servers API sorting REST API updates
9ff1a56 Add API validation schema for volume_attachments
fb9b205 Retry ebtables on race
04d7a72 Eventlet green threads not released back to pool
73213ac VMware: enable backward compatibility with existing clusters
4919269 Use session in cinderclient
f5943ad Specify storage IP for iscsi connector
bc516eb Switch default cinder API to V2
fb559a3 Create Nova Scheduler IO Ops Weighter
daf278c Add notification for server group operations
641de56 Replace outdated oslo-incubator middleware
a98dcc6 VMware: Improve logging on failure due to invalid guestId
79bfb1b Fix libvirt watchdog support
7ad0a79 VMware: add support for default pbm policy
a2f843e vfs: guestfs logging integration
542d885 console: make unsupported ws scheme in python < 2.7.4


[1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-14 Thread Matt Riedemann



On 4/10/2015 10:29 AM, Matt Riedemann wrote:



On 4/8/2015 4:07 PM, Michael Still wrote:

I just wanted to send a note about John running in the PTL election
for Nova.

I want to make it clear that I think having more than one candidate is
a good thing -- its a healthy part of a functional democracy, and it
also means regardless of the outcome we have at least one succession
planning option should a PTL need to step down at some point in the
future.

That said, I think there are a few things we need to do in Liberty,
regardless of who is PTL. I started this as a Google doc to share with
John if he won so that we didn’t drop the ball, but then I realised
that nothing here is secret. So, here is my brain dump of things we
need to do in Liberty, in no particular order:

nova-coresec reboot


The nova-coresec team has been struggling recently to keep up with
their workload. We need to drop people off this team who haven’t had
time recently to work on security bugs, and we need to find new people
to volunteer for this team, noting that the team is kept deliberately
small because of embargoed security vulnerabilities. If I am not
re-elected as PTL, I will probably volunteer for this team.

priorities and specs
===

I think the current spec process is starting to work well for us, and
that priorities was a success. We should continue with specs, but with
an attempt to analyse why so many approved specs don’t land (we have
had about 50% of our approved specs not land in Juno and Kilo). Is
that as simple as code review bandwidth? Or is the problem more
complicated than that? We just don’t know until someone goes digging.

Priorities worked well. We need to start talking about what should be
a priority in Liberty now, and the first step is to decide as a team
what we think the big problems we’re trying to solve in Liberty are.

nova-core


I think there are a couple of things to be done here.

There are still a few idle cores, particularly people who have done
fewer than ten reviews in the last 90 days. We should drop those people
from core and thank them for their work in the past noting once again
that this is a natural part of the Open Source process -- those people
are off working on other problems now and that’s cool.

We also need to come up with a way to grow more cores. Passive
approaches like asking existing cores to keep an eye out for talent
they trust haven’t worked, so I think its time to actively start
mentoring core candidates.

I am not convinced that just adding cores will solve our review
bandwidth problems though. We have these conversations about why
people’s reviews sit around without a lot of data to back them up, and
I feel like we often jump to conclusions that feel intuitive but that
aren’t supported by the data.

nova-net
===

OMG, this is still a thing. We need to actually work out what we’re
doing here, and then do it. The path isn’t particularly clear to me
any more, I thought I understood what we needed to do in Kilo, but it
turns out that operators don’t feel that plan meets their needs.
Somehow we need to get this work done. This is an obvious candidate
for a summit session, if we can come up with a concrete proposal to
discuss.

bugs


Trivial bug monkey’ing has worked well for us in Kilo, but one of our
monkeys is off running as a PTL. We need to ensure we have this
staffed with people who understand the constraints on the bugs we’re
willing to let through this process. It would be sad to see this die
on the vine.

We also need to fix more bugs. I know we always say this, but we don’t
have enough senior developers just kicking around looking at bugs to
fix in a systematic way. This is something I used to do when I had
more time before PTL’ing became a thing. If I am not elected this is
the other thing I’ll probably go back to spending time on.

conclusion


I make no claim that my list is exhaustive. What else do you think we
should be tackling in Liberty?

Michael



For better test coverage, we have a few things to do:

1. Make the ceph job voting: https://review.openstack.org/#/c/170913/

2. Get the aiopcpu job (which tests live block migrate) stable and
voting; it has already caught some live migrate regressions recently.

3. Get the cells job stable and voting (we're pretty close).

4. We can get the Fedora 21 job in Nova's experimental queue:

https://review.openstack.org/#/c/171795/

That allows us to test libvirt features (like live snapshot) on a newer
version than what's in the gate on ubuntu 14.04.



An update:

1. The ceph job is voting on master (and eventually stable/kilo) for 
nova/cinder/glance.


2. The aiopcpu job is in the process of being stabilized and should soon 
be voting.


3. WIP

4. The fc21 job is in the nova experimental queue now.

--

Adding 5:

We don't have any evacuate testing in Tempest today.  We have rebuild, 
but not evacuate.  I'll start working on that as a prerequisite for 
Dan's

Re: [openstack-dev] [nova] Mysql db connection leaking?

2015-04-16 Thread Matt Riedemann



On 4/16/2015 12:27 PM, Jay Pipes wrote:

On 04/16/2015 09:54 AM, Sean Dague wrote:

On 04/16/2015 05:20 PM, Qiming Teng wrote:


Wondering if there is something misconfigured in my devstack
environment, which was reinstalled on RHEL7 about 10 days ago.
I'm often running into mysql connection problems, as shown below:

$ mysql
ERROR 1040 (HY000): Too many connections

When I try to dump the mysql connection list, I'm getting the following
result after a 'systemctl restart mariadb.service':

$ mysqladmin processlist | grep nova | wc -l
125

Most of the connections are at Sleep status:

$ mysqladmin processlist | grep nova | grep Sleep | wc -l
123

As for the workload, I'm currently only running two VMs in a multi-host
devstack environment.

So, my questions:

   - Why do we have so many mysql connections from nova?
   - Is it possible this is caused by some misconfiguration?
   - 125 connections in such a toy setup is insane; any hints on nailing
 down the connections to the responsible nova components?

Thanks.

Regards,
   Qiming


No, that's about right. It's 1 connection per worker. By default most
daemons start 1 worker per processor. Each OpenStack service has a bunch
of daemons. It all adds up pretty quick.


And just to add to what Sean says above, there's nothing inherently
wrong with sleeping connections to MySQL. What *is* wrong, however, is
that the default max_connections setting in my.cnf is 150. :( I
frequently recommend upping that to 2000 or more on any modern hardware
or decent sized VM.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What do you consider a decent sized VM?  In devstack we default 
max_connections for postgresql to 200 because we were having connection 
timeout failures in the gate for pg back in the day:


http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/databases/postgresql#n15

But we don't change this for mysql:

http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/databases/mysql

I think the VMs in the gate are running 8 VCPU + 8 GB RAM, not sure 
about disk.
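
If anyone wants to check the headroom on their own devstack box, something 
like this works (a sketch; assumes PyMySQL is installed and the root 
credentials are whatever your local setup uses):

    import pymysql

    conn = pymysql.connect(host='127.0.0.1', user='root',
                           password='secret')
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'max_connections'")
        print(cur.fetchone())   # e.g. ('max_connections', '151')
        cur.execute("SHOW STATUS LIKE 'Threads_connected'")
        print(cur.fetchone())   # how many are actually in use right now
    conn.close()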


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Maintaining focus on Kilo

2015-04-14 Thread Matt Riedemann



On 4/14/2015 6:34 PM, Michael Still wrote:

On Wed, Apr 15, 2015 at 7:22 AM, Dan Smith d...@danplanet.com wrote:

we need to make sure we continue to progress on bugs targeted as
likely to need backport to Kilo. The current list is here --
https://bugs.launchpad.net/nova/+bugs?field.tag=kilo-rc-potential .

Bugs on that list which have fixes merged and backports prepared will
stand a very good chance of getting released in any RC2 we do, or in
the first stable point release for Kilo.


IMHO, this list is already too long. If we decide to do an rc2, I'd like
to see that slimmed down to as little as possible and do the remainder
as backports.

If we decide to have an rc2, can we agree to have an informal meeting to
scrub that list to only critical things?


Yes, I think this is a good idea. So... Let's wait to see if a RC2
happens and then we can schedule an IRC meeting if it does.

Michael



Here is the current list of proposed/kilo cherry picks:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:proposed/kilo,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-14 Thread Matt Riedemann



On 4/14/2015 12:22 PM, Clint Byrum wrote:

Hello! There's been some recent progress on python3 compatibility for
core libraries that OpenStack depends on[1], and this is likely to open
the flood gates for even more python3 problems to be found and fixed.

Recently a proposal was made to make oslo.messaging start to run python3
tests[2], and it was found that qpid-python is not python3 compatible yet.

This presents us with questions: Is anyone using QPID, and if so, should
we add gate testing for it? If not, can we deprecate the driver? In the
most recent survey results I could find [3] I don't even see message
broker mentioned, whereas Databases in use do vary somewhat.

Currently it would appear that only oslo.messaging runs functional tests
against QPID. I was unable to locate integration testing for it, but I
may not know all of the places to dig around to find that.

So, please let us know if QPID is important to you. Otherwise it may be
time to unburden ourselves of its maintenance.

[1] https://pypi.python.org/pypi/eventlet/0.17.3
[2] https://review.openstack.org/#/c/172135/
[3] 
http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



FWIW IBM Cloud Manager with OpenStack is still shipping qpid 0.30.  We 
switched the default deployment to RabbitMQ in Kilo (maybe even Juno), but 
we do have a support matrix tested with qpid as the rpc backend.  Our 
mainline paths are tested with rabbitmq since that's the default backend 
for us now.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host maintenance notification

2015-04-06 Thread Matt Riedemann



On 4/6/2015 9:46 AM, Chris Friesen wrote:

On 04/06/2015 07:56 AM, Ed Leafe wrote:

On Apr 6, 2015, at 1:21 AM, Chris Friesen chris.frie...@windriver.com
wrote:


Please feel free to add a blueprint in Launchpad. I don't see this as
needing a full spec, really. It shouldn't be more than a few lines of
code to send a new notification message.


Wouldn't a new notification message count as an API change?  Or are we
saying that it's such a small API change that any discussion can
happen in
the blueprint?


I don't think that the notification system is the same as the API. It is
something that you can subscribe to or not, and is distinct from the API.


It's certainly not the same as the REST API.  I think an argument could
be made that the notification system is part of the API, where API is
defined more generally as something that expresses a software component
in terms of its operations, inputs, outputs, and underlying types.

If we don't exercise any control over the contents of the notification
messages, that would make it difficult for consumers of the
notifications to do anything interesting with them.  At a minimum it
might make sense to do something like REST API microversions, with a
version number and a place to look up what changed with each version.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The events and their payloads are listed in the wiki here [1].

In the past people have added new notifications with just bug reports, so 
I'm not sure a new spec is required for a host going into maintenance 
mode (as long as it's a new notification and not a change to an existing one).
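
For what it's worth, emitting a new event is only a couple of lines; here 
is a rough sketch of what it could look like from the compute manager (the 
event name and payload keys are made up for illustration):

    from nova import rpc

    def notify_host_maintenance(context, host):
        # tell consumers (e.g. ceilometer) the host is entering
        # maintenance; event name and payload are illustrative only
        notifier = rpc.get_notifier(service='compute', host=host)
        notifier.info(context, 'compute.host.maintenance.start',
                      {'host': host, 'reason': 'planned maintenance'})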


And yes, we have to be careful about making changes to existing 
notifications (the event name or the payload) since we have to treat 
them like APIs, but (1) they aren't versioned today and (2) we don't 
have any kind of integration testing on the events that I'm aware of, 
unless it's through something like ceilometer trying to do something 
with them in a tempest scenario test, but I doubt that.


[1] https://wiki.openstack.org/wiki/SystemUsageData

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host maintenance notification

2015-04-07 Thread Matt Riedemann



On 4/6/2015 2:07 PM, Chris Friesen wrote:

On 04/06/2015 12:52 PM, Matthew Treinish wrote:

On Mon, Apr 06, 2015 at 01:17:20PM -0500, Matt Riedemann wrote:

On 4/6/2015 9:46 AM, Chris Friesen wrote:

On 04/06/2015 07:56 AM, Ed Leafe wrote:

On Apr 6, 2015, at 1:21 AM, Chris Friesen
chris.frie...@windriver.com
wrote:


Please feel free to add a blueprint in Launchpad. I don't see
this as
needing a full spec, really. It shouldn't be more than a few
lines of
code to send a new notification message.


Wouldn't a new notification message count as an API change?  Or
are we
saying that it's such a small API change that any discussion can
happen in
the blueprint?


I don't think that the notification system is the same as the API.
It is
something that you can subscribe to or not, and is distinct from
the API.


It's certainly not the same as the REST API.  I think an argument could
be made that the notification system is part of the API, where API is
defined more generally as something that expresses a software
component
in terms of its operations, inputs, outputs, and underlying types.

If we don't exercise any control over the contents of the notification
messages, that would make it difficult for consumers of the
notifications to do anything interesting with them.  At a minimum it
might make sense to do something like REST API microversions, with a
version number and a place to look up what changed with each version.


The events and their payloads are listed in the wiki here [1].

In the past people have added new notifications with just bug
reports, so I'm not sure a new spec is required for a host going into
maintenance mode (as long as it's a new notification and not a change
to an existing one).


Yeah, in its current state without real versioning on notifications I
think just adding it with a bug is fine. If nova actually had a
versioning mechanism and made stability guarantees on notifications it
would be a different story.


I'm fine either way...just wanted to be sure the decision was made
consciously.

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Nice timing on the discussion here; I'd been meaning to upstream a 
devref [1] that we had internally since grizzly (so it's a bit outdated) 
that moves the event notifications out of the wiki and into the nova docs.


Comments welcome on that.  I just got it posted today though so I know 
it needs quite a bit of updating.


[1] https://review.openstack.org/#/c/171291/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] unit tests result in false negatives on system z platform CI

2015-04-02 Thread Matt Riedemann



On 4/2/2015 2:37 AM, Markus Zoeller wrote:

Michael Still mi...@stillhq.com wrote on 04/01/2015 11:01:51 PM:


From: Michael Still mi...@stillhq.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 04/01/2015 11:06 PM
Subject: Re: [openstack-dev] [nova] unit tests result in false
negatives on system z platform CI

Thanks for the detailed email on this. How about we add this to the
agenda for this week's nova meeting?


Yes, that would be great. I've seen you already put it on the agenda.
I will be in today's meeting.

Regards,
Markus Zoeller (markus_z)


One option would be to add a fixture to some higher level test class,
but perhaps someone has a better idea than that.

Michael

On Wed, Apr 1, 2015 at 8:54 PM, Markus Zoeller mzoel...@de.ibm.com

wrote:

[...]
I'm looking for a way to express the assumption that x86 should be the
default platform in the unit tests and prevent calls to the underlying
system. This has to be overridable if platform-specific code like in [2]
has to be tested.

I'd like to discuss how that could be achieved in a maintainable way.


References
--
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz
[2] test_driver.py; test_get_guest_config_with_type_kvm_on_s390;

https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L2592





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It's simple: don't run the unit tests on z.  We don't require the other 
virt driver CIs to run unit tests, so I don't see why we'd make zKVM do 
it.  Any platform-specific code should be exercised via the APIs in 
Tempest runs, and the zKVM CI should focus on running the Tempest tests 
that hit the APIs it supports (which should be listed in the hypervisor 
support matrix).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] request to disable xenserver CI account

2015-04-09 Thread Matt Riedemann



On 4/9/2015 4:27 PM, Jeremy Stanley wrote:

On 2015-04-09 16:13:13 -0500 (-0500), Matt Riedemann wrote:

The XenServer/XenProject third party CI job has been voting -1 on
nova changes for over 24 hours without a response from the
maintainers, so I'd like to request that we disable it for now while
it's being worked on, since it's a voting job and causing noise at
kind of a hairy point in the release.


According to Gerrit, you personally (as a member of the nova-release
group) have access to remove them from the nova-ci group to stop
them being able to -1/+1 nova changes. You should be able to do it
via https://review.openstack.org/#/admin/groups/511,members but let
me know if that's not working for some reason.



Great, done, thanks.

https://review.openstack.org/#/admin/groups/511,members

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] request to disable xenserver CI account

2015-04-09 Thread Matt Riedemann
The XenServer/XenProject third party CI job has been voting -1 on nova 
changes for over 24 hours without a response from the maintainers, so I'd 
like to request that we disable it for now while it's being worked on, 
since it's a voting job and causing noise at kind of a hairy point in the release.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New version of python-neutronclient release for Kilo: 2.4.0

2015-04-09 Thread Matt Riedemann



On 4/9/2015 3:14 PM, Kyle Mestery wrote:

The Neutron team is proud to announce the release of the latest version
of python-neutronclient. This release includes the following bug fixes
and improvements:

aa1215a Merge Fix one remaining E125 error and remove it from ignore list
cdfcf3c Fix one remaining E125 error and remove it from ignore list
b978f90 Add Neutron subnetpool API
d6cfd34 Revert Remove unused AlreadyAttachedClient
5b46457 Merge Fix E265 block comment should start with '# '
d32298a Merge Remove author tag
da804ef Merge Update hacking to 0.10
8aa2e35 Merge Make secgroup rules more readable in security-group-show
a20160b Merge Support fwaasrouterinsertion extension
ddbdf6f Merge Allow passing None for subnetpool
5c4717c Merge Add Neutron subnet-create with subnetpool
c242441 Allow passing None for subnetpool
6e10447 Add Neutron subnet-create with subnetpool
af3fcb7 Adding VLAN Transparency support to neutronclient
052b9da 'neutron port-create' missing help info for --binding:vnic-type
6588c42 Support fwaasrouterinsertion extension
ee929fd Merge Prefer argparse mutual exclusion
f3e80b8 Prefer argparse mutual exclusion
9c6c7c0 Merge Add HA router state to l3-agent-list-hosting-router
e73f304 Add HA router state to l3-agent-list-hosting-router
07334cb Make secgroup rules more readable in security-group-show
639a458 Merge Updated from global requirements
631e551 Fix E265 block comment should start with '# '
ed46ba9 Remove author tag
e2ca291 Update hacking to 0.10
9b5d397 Merge security-group-rule-list: show all info of rules briefly
b56c6de Merge Show rules in handy format in security-group-list
c6bcc05 Merge Fix failures when calling list operations using Python
binding
0c9cd0d Updated from global requirements
5f0f280 Fix failures when calling list operations using Python binding
c892724 Merge Add commands from extensions to available commands
9f4dafe Merge Updates pool session persistence options
ce93e46 Merge Added client calls for the lbaas v2 agent scheduler
c6c788d Merge Updating lbaas cli for TLS
4e98615 Updates pool session persistence options
a3d46c4 Merge Change Creates to Create in help text
4829e25 security-group-rule-list: show all info of rules briefly
5a6e608 Show rules in handy format in security-group-list
0eb43b8 Add commands from extensions to available commands
6e48413 Updating lbaas cli for TLS
942d821 Merge Remove unused AlreadyAttachedClient
a4a5087 Copy functional tests from tempest cli
dd934ce Merge exec permission to port_test_hook.sh
30b198e Remove unused AlreadyAttachedClient
a403265 Merge Reinstate Max URI length checking to V2_0 Client
0e9d1e5 exec permission to port_test_hook.sh
4b6ed76 Reinstate Max URI length checking to V2_0 Client
014d4e7 Add post_test_hook for functional tests
9b3b253 First pass at tempest-lib based functional testing
09e27d0 Merge Add OS_TEST_PATH to testr
7fcb315 Merge Ignore order of query parameters when compared in
MyUrlComparator
ca52c27 Add OS_TEST_PATH to testr
aa0042e Merge Fixed pool and health monitor create bugs
45774d3 Merge Honor allow_names in *-update command
17f0ca3 Ignore order of query parameters when compared in MyUrlComparator
aa0c39f Fixed pool and health monitor create bugs
6ca9a00 Added client calls for the lbaas v2 agent scheduler
c964a12 Merge Client command extension support
e615388 Merge Fix lbaas-loadbalancer-create with no --name
c61b1cd Merge Make some auth error messages more verbose
779b02e Client command extension support
e5e815c Fix lbaas-loadbalancer-create with no --name
7b8c224 Honor allow_names in *-update command
b9a7d52 Updated from global requirements
62a8a5b Make some auth error messages more verbose
8903cce Change Creates to Create in help text

For more details on the release, please see the LP page and the detailed
git log history.

https://launchpad.net/python-neutronclient/2.4/2.4.0

Please report any bugs in LP.

Thanks!
Kyle


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



And the gate has exploded on kilo-rc1:

http://goo.gl/dnfSPC

Proposed: https://review.openstack.org/#/c/172150/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Friday summit meetup etherpad

2015-05-19 Thread Matt Riedemann
I was looking for this earlier in IRC and bauzas was nice enough to give 
me the link, so here is the Friday summit meetup etherpad for people 
that want to drop some notes:


https://etherpad.openstack.org/p/YVR-nova-contributor-meetup

I had a couple of smaller issues that I didn't want to forget about so I 
added those at the bottom.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Tempest failure help

2015-05-19 Thread Matt Riedemann



On 5/19/2015 2:18 PM, Sam Morrison wrote:

Hi nova devs,

I have a patch https://review.openstack.org/#/c/181776/ where I have a
failing tempest job which I can’t figure out. Can anyone help me?

Cheers,
Sam





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I dug into your change a bit and I think the problem is a random az 
getting embedded in the port's device_owner.  If you look at a change 
that's passing the neutron job, you'll see the device_owner on the port is 
'compute:None', so instance.availability_zone is None in these test runs.
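
For reference, the device_owner string encodes the instance's 
availability_zone, roughly like this (illustrative, not the exact nova 
source):

    def build_device_owner(instance):
        # nova stamps the az into the port's device_owner when it
        # creates the port (illustrative paraphrase)
        return 'compute:%s' % instance.availability_zone  # -> 'compute:None'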


In your change, it's a random number for the az, which is screwing 
something up somewhere.  It could be a problem elsewhere in nova, or 
neutron, or tempest, or devstack; that's something I don't know right 
now. :(


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Targeting icehouse-eol?

2015-06-03 Thread Matt Riedemann
Following on the thread about no longer doing stable point releases [1] 
at the summit we talked about doing icehouse-eol pretty soon [2].


I scrubbed the open stable/icehouse patches last week and we're down to 
roughly one screen of changes now [3].


My thinking was that once we've processed that list, i.e. either approved 
what we're going to approve or -2'd what we aren't, we should proceed 
with doing the icehouse-eol tag and deleting the branch.


Is everyone generally in agreement with doing this soon?  If so, I'm 
thinking we target a week from today: the stable maint core team scrubs 
the list of open reviews this week, and then we get the infra team to tag 
the branch and close it out.


The only open question I have is whether we need to do an Icehouse point 
release prior to the tag and dropping the branch, but I don't think 
that's happened in the past with branch end of life - the eol tag 
basically serves as the placeholder for the last 'release'.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
[2] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch
[3] https://review.openstack.org/#/q/status:open+branch:stable/icehouse,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Is volume connection_info modeled/documented anywhere?

2015-06-10 Thread Matt Riedemann
While investigating/discussing bug 1463525 [1] I remembered how little I 
know about what can actually come out of the connection_info dict 
returned from the os-initialize_connection cinder API call.


So we added some debug logging in nova and I remembered that there are 
potentially credentials (auth_password) stored in connection_info, so we 
have a bug to clean that up in Nova [2].


The plan is to model connection_info using objects where we have a 
parent object BdmConnectionInfo containing the common keys, like 
'driver_volume_type' and 'data', and then child objects for the 
vendor-specific connection_info objects, like RbdBdmConnectionInfo, 
ISCSIBdmConnectionInfo, etc.
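
For reference, a typical iSCSI payload is shaped roughly like this 
(illustrative only; the exact keys vary by backend driver and the values 
here are made up):

    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'volume_id': '3b2b9...',  # truncated for brevity
            'target_iqn': 'iqn.2010-10.org.openstack:volume-3b2b9...',
            'target_portal': '192.168.1.10:3260',
            'target_lun': 1,
            'auth_method': 'CHAP',
            'auth_username': 'some-user',
            'auth_password': 'some-secret',  # the credentials from [2]
        },
    }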


The problem I have right now is knowing everything that can be in there, 
since there are a ton of vendor drivers in Cinder.


Is anyone aware of a wiki page or devref or anything that documents what 
can be in that wild west connection_info dict?  If not, the first thing 
I was going to do was start documenting that - but where?  It seems it 
should really be modeled in Cinder, since it is part of the API contract, 
and if a vendor driver were to, say, rename or drop a key from the 
connection_info it returns from os-initialize_connection, that would be 
a backwards-incompatible change.


Is devref best for this with a listing for each vendor driver?  At least 
then it would be versioned with the code and updates could be made as 
new keys are added to connection_info or new drivers are added to Cinder.


I'm looking for any advice on how to get started, since I don't 
primarily work on Cinder and don't have the full history here.


[1] https://bugs.launchpad.net/cinder/+bug/1463525
[2] https://bugs.launchpad.net/nova/+bug/1321785

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] Do we turn on voting for the tempest-dsvm-cells job?

2015-06-22 Thread Matt Riedemann



On 6/22/2015 4:32 PM, Russell Bryant wrote:

On 06/22/2015 05:23 PM, Matt Riedemann wrote:

The check-tempest-dsvm-cells job has been in nova's check queue since
January as non-voting and has been stable for a couple of weeks now, so
before it's regressed melwitt proposed a change to make it voting and
gating on nova changes [1].

I raised a concern in that change that the tempest-dsvm-cells job is not
in the check queue for tempest or devstack changes, so if a change is
merged in tempest/devstack which breaks the cells job, it will block
nova changes from merging.

mtreinish noted that tempest already has around 30 jobs running against
it right now in the check queue, so he'd prefer that another one isn't
added since the nova API shouldn't be different in the case of cells,
but we know there are quirks.  That can be seen from the massive regex
of excluded tests for the tempest-dsvm-cells job [2].

If we could turn that regex list into tempest configurations, I think
that would make it possible to not have to run tempest changes through
the cells job in the check queue but also feel reasonably confident that
changes to tempest that use the config options properly won't break the
cells job (and block nova in the gate).

This is actually something we should do regardless of voting or not on
nova since new tests get added which might not fall in the regex and
break the cells job.  So I'm going to propose some changes so that the
regex will be moved to devstack-gate (regex exodus (tm)) and we'll work
on whittling down the regex there (and run those d-g changes against the
tempest-dsvm-cells job in the experimental queue).

The question for the nova team is, shall we make the tempest-dsvm-cells
job voting on nova changes knowing that the gate can be broken with a
change to tempest that isn't caught in the regex?  In my opinion I think
we should make it voting so we don't regress cells with changes to nova
that go unnoticed with the non-voting job today.  Cells v2 is a nova
priority for Liberty so we don't want setbacks now that it's stable.

If a change does land in tempest which breaks the cells job and blocks
nova, we (1) fix it or (2) modify the regex so it's excluded until fixed
as has been done up to this point.

I think we should probably mull this over in the ML and then vote on it
in this week's nova meeting.

[1] https://review.openstack.org/#/c/190894/
[2]
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n1004




Regarding your regex exodus, I recently added something for this.  In
another project, I'm setting the regex in a file I keep in the code repo
instead of project-config.

support for DEVSTACK_GATE_SETTINGS in devstack-gate:
https://review.openstack.org/190321

usage in a job definition: https://review.openstack.org/190325

a DEVSTACK_GATE_SETTINGS file that sets DEVSTACK_GATE_TEMPEST_REGEX:
https://review.openstack.org/186894

It all seems to be working for me, except I still need to tweak my regex
to get the job passing, but at least I can do that without updating
project-config now.



Awesome, that is way cleaner.  I'll go that route instead, thanks!

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] Do we turn on voting for the tempest-dsvm-cells job?

2015-06-22 Thread Matt Riedemann
The check-tempest-dsvm-cells job has been in nova's check queue since 
January as non-voting and has been stable for a couple of weeks now, so 
before it's regressed melwitt proposed a change to making it voting and 
gating on nova changes [1].


I raised a concern in that change that the tempest-dsvm-cells job is not 
in the check queue for tempest or devstack changes, so if a change is 
merged in tempest/devstack which breaks the cells job, it will block 
nova changes from merging.


mtreinish noted that tempest already has around 30 jobs running against 
it right now in the check queue, so he'd prefer that another one isn't 
added since the nova API shouldn't be different in the case of cells, 
but we know there are quirks.  That can be seen from the massive regex 
of excluded tests for the tempest-dsvm-cells job [2].


If we could turn that regex list into tempest configurations, I think 
that would make it possible to not have to run tempest changes through 
the cells job in the check queue but also feel reasonably confident that 
changes to tempest that use the config options properly won't break the 
cells job (and block nova in the gate).


This is actually something we should do regardless of voting or not on 
nova since new tests get added which might not fall in the regex and 
break the cells job.  So I'm going to propose some changes so that the 
regex will be moved to devstack-gate (regex exodus (tm)) and we'll work 
on whittling down the regex there (and run those d-g changes against the 
tempest-dsvm-cells job in the experimental queue).


The question for the nova team is, shall we make the tempest-dsvm-cells 
job voting on nova changes knowing that the gate can be broken with a 
change to tempest that isn't caught in the regex?  In my opinion I think 
we should make it voting so we don't regress cells with changes to nova 
that go unnoticed with the non-voting job today.  Cells v2 is a nova 
priority for Liberty so we don't want setbacks now that it's stable.


If a change does land in tempest which breaks the cells job and blocks 
nova, we (1) fix it or (2) modify the regex so it's excluded until fixed 
as has been done up to this point.


I think we should probably mull this over in the ML and then vote on it 
in this week's nova meeting.


[1] https://review.openstack.org/#/c/190894/
[2] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n1004


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] Do we turn on voting for the tempest-dsvm-cells job?

2015-06-22 Thread Matt Riedemann



On 6/22/2015 4:38 PM, Matt Riedemann wrote:



On 6/22/2015 4:32 PM, Russell Bryant wrote:

On 06/22/2015 05:23 PM, Matt Riedemann wrote:

The check-tempest-dsvm-cells job has been in nova's check queue since
January as non-voting and has been stable for a couple of weeks now, so
before it's regressed melwitt proposed a change to make it voting and
gating on nova changes [1].

I raised a concern in that change that the tempest-dsvm-cells job is not
in the check queue for tempest or devstack changes, so if a change is
merged in tempest/devstack which breaks the cells job, it will block
nova changes from merging.

mtreinish noted that tempest already has around 30 jobs running against
it right now in the check queue, so he'd prefer that another one isn't
added since the nova API shouldn't be different in the case of cells,
but we know there are quirks.  That can be seen from the massive regex
of excluded tests for the tempest-dsvm-cells job [2].

If we could turn that regex list into tempest configurations, I think
that would make it possible to not have to run tempest changes through
the cells job in the check queue but also feel reasonably confident that
changes to tempest that use the config options properly won't break the
cells job (and block nova in the gate).

This is actually something we should do regardless of voting or not on
nova since new tests get added which might not fall in the regex and
break the cells job.  So I'm going to propose some changes so that the
regex will be moved to devstack-gate (regex exodus (tm)) and we'll work
on whittling down the regex there (and run those d-g changes against the
tempest-dsvm-cells job in the experimental queue).

The question for the nova team is, shall we make the tempest-dsvm-cells
job voting on nova changes knowing that the gate can be broken with a
change to tempest that isn't caught in the regex?  In my opinion I think
we should make it voting so we don't regress cells with changes to nova
that go unnoticed with the non-voting job today.  Cells v2 is a nova
priority for Liberty so we don't want setbacks now that it's stable.

If a change does land in tempest which breaks the cells job and blocks
nova, we (1) fix it or (2) modify the regex so it's excluded until fixed
as has been done up to this point.

I think we should probably mull this over in the ML and then vote on it
in this week's nova meeting.

[1] https://review.openstack.org/#/c/190894/
[2]
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n1004





Regarding your regex exodus, I recently added something for this.  In
another project, I'm setting the regex in a file I keep in the code repo
instead of project-config.

support for DEVSTACK_GATE_SETTINGS in devstack-gate:
https://review.openstack.org/190321

usage in a job definition: https://review.openstack.org/190325

a DEVSTACK_GATE_SETTINGS file that sets DEVSTACK_GATE_TEMPEST_REGEX:
https://review.openstack.org/186894

It all seems to be working for me, except I still need to tweak my regex
to get the job passing, but at least I can do that without updating
project-config now.



Awesome, that is way cleaner.  I'll go that route instead, thanks!



Here is the change that moves the regex into nova:

https://review.openstack.org/#/c/194411/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-21 Thread Matt Riedemann



On 6/20/2015 3:35 PM, Daniel P. Berrange wrote:

On Sat, Jun 20, 2015 at 01:50:53PM -0500, Matt Riedemann wrote:

Waking up from a rare nap opportunity on a Saturday, this is what was
bothering me:

The proposal in the etherpad assumes that we are just getting bulk
host/domain/guest VM stats from the hypervisor and sending those in a
notification, but how do we go about filtering those out to only instances
that were booted through Nova?


In general I would say that is an unsupported deployment scenario to
have other random virt guests running on a nova compute node.

Having said that, when nova uses libguestfs, it will create some temp
guests via libvirt, so we do have to consider that possibility.

Even today with the general list domains virt driver call, we could be
getting domains that weren't launched by Nova I believe.


Jason pointed out the ceilometer code gets all of the non-error state
instances from nova first [1] and then for each of those it does the domain
lookup from libvirt, filtering out any that are in SHUTOFF state [2].

When talking about the new virt driver API for bulk stats, danpb said to use
virConnectGetAllDomainStats with libvirt [3] but I'm not aware of that being
able to filter out instances that weren't created by nova.  I don't think we
want a notification from nova about the hypervisor stats to include things
that were created outside nova, like directly through virsh or vCenter.

For at least libvirt, if virConnectGetAllDomainStats returns the domain
metadata then we can filter those since there is nova-specific metadata in
the domains created through nova [4] but I'm not sure that's true about the
other virt types in nova (I think the vCenter driver tags VMs somehow as
being created by OpenStack/Nova, but not sure about xen/hyper-v/ironic).


The nova database has a list of domains that it owns, so if you query the
database for a list of valid UUIDs for the host, you can use that to filter
the domains that libvirt reports by comparing UUIDs.

Regards,
Daniel



Dan, is virsh domstats using virConnectGetAllDomainStats?  I have 
libvirt 1.2.8 on RHEL 7.1, created two m1.tiny instances through nova 
and got this from virsh domstats:


http://paste.openstack.org/show/310874/

Is that similar to what we'd see from virConnectGetAllDomainStats?  I 
haven't yet written any code in the libvirt driver to use 
virConnectGetAllDomainStats to see what that looks like.
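
Something like this should answer it (an untested sketch using the 
libvirt-python binding, with the UUID filtering Daniel suggests above; the 
UUID set is a stand-in for a real query of nova's DB):

    import libvirt

    # hypothetical stand-in for nova's instance UUIDs on this host
    nova_uuids = {'11111111-2222-3333-4444-555555555555'}

    conn = libvirt.open('qemu:///system')
    for dom, stats in conn.getAllDomainStats():
        if dom.UUIDString() in nova_uuids:
            print(dom.name(), stats)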


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-24 Thread Matt Riedemann
the specs process unnecessarily imho.

I've repeatedly stated that the fact that we created an even smaller
clique of people to approve specs (nova-drivers, which is a tiny subset
of the already far too small nova-core) is madness, as it creates
an even worse review burden on them, and thus worsens the bottleneck
we already have.

In reviewing specs, we also often get into far too much detail about
the actual implementation, which is really counterproductive: when we
get down to bikeshedding about the names of classes, types of object
attributes, and so forth, it is really insane. That level of detail
and bikeshedding belongs in the code review, not the design, for the
most part.

Specs are also inflexible when it comes to dealing with features that
cross multiple projects, because we silo'd spec reviews against the
individual projects. This is sub-optimal, but ultimately not a show
stopper.


There is the openstack-specs repo for cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/

That'd be a good place for things like your os-vif library spec which 
requires input from both nova and neutron teams, although I think it's 
currently OK to have it in nova-specs since a lot of the forklift is 
going to come from there and we can add neutron reviewers as needed.




I also think the way we couple spec approval and reviews to the dev
cycles is counterproductive. We should be willing to accept and
review specs at any point in any cycle, and once approved they should
remain valid for a prolonged period of time - not require us to go
through re-review every new dev cycle, as again that's just creating
extra burden. We should of course reserve the right to unapprove
specs if circumstances change, invalidating the design from the previous
approval.

In short, specs are far from perfect, but to say specs don't solve/help
anything to do with design in nova is really ignoring our history from
before the time specs existed. We must continue to improve our process
overall; the biggest thing we lost IMHO is agility and pragmatism in
our decision making, and I think we can regain that without throwing away
the specs idea entirely.

Regards,
Daniel



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-24 Thread Matt Riedemann



On 6/24/2015 8:23 AM, Sahid Orentino Ferdjaoui wrote:

On Wed, Jun 24, 2015 at 11:28:59AM +0100, Nikola Đipanov wrote:

Hey Nova,

I'll cut to the chase and keep this email short for brevity and clarity:

Specs don't work! They do nothing to facilitate good design happening,
if anything they prevent it. The process layered on top with only a
minority (!) of cores being able to approve them, yet they are a prereq
of getting any work done, makes sure that the absolute minimum that
people can get away with will be proposed. This in turn goes and
guarantees that no good design collaboration will happen. To add insult
to injury, Gerrit and our spec template are a horrible tool for
discussing design. Also the spec format itself works for only a small
subset of design problems Nova development is faced with.


I do not agree that specs don't work. Personally, I refer to this
relatively good documentation [1] instead of digging into the code to
remember how a feature introduced earlier works.

I guess we have some work to do on the level of detail we want
before a spec is approved. We should just consider the general
idea/design, the options introduced, and the API changes, and keep in
mind that the contributors who will implement the feature can (and have
to) update it during the development phase.

[1] http://specs.openstack.org/openstack/nova-specs/specs/kilo/

s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I agree completely. The nicely rendered feature docs, which are a 
byproduct of the specs process in gerrit, are a great part of it. So when 
someone is trying to use a new feature or trying to fix a bug in said 
feature 1-2 years later and trying to understand the big picture idea, 
they can refer to the original design spec - assuming it was accurate at 
the time that the code was actually merged. Like you said, it's 
important to keep the specs up to date based on what was actually 
approved in the code.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] The unbearable lightness of specs

2015-06-24 Thread Matt Riedemann



On 6/24/2015 9:09 AM, Kashyap Chamarthy wrote:

On Wed, Jun 24, 2015 at 02:51:38PM +0100, Nikola Đipanov wrote:

On 06/24/2015 02:33 PM, Matt Riedemann wrote:


[. . .]


I agree completely. The nicely rendered feature docs, which are a
byproduct of the specs process in gerrit, are a great part of it. So when
someone is trying to use a new feature or trying to fix a bug in said
feature 1-2 years later and trying to understand the big picture idea,
they can refer to the original design spec - assuming it was accurate at
the time that the code was actually merged. Like you said, it's
important to keep the specs up to date based on what was actually
approved in the code.


Of course documentation is good. Make that kind of docs a requirement
for merging a feature, by all means.

But the approval process we have now is just backwards. It's only result
is preventing useful work getting done.

In addition to what Daniel mentioned elsewhere:

Why do cores need approved specs for example - and indeed for many of us
- it's just a dance we do. I refuse to believe that a core can be
trusted to approve patches but not to write any code other than a bugfix
without a written document explaining themselves, and then have a yet
more exclusive group of super cores approve that. It makes no sense.


This is one of the _baffling_ aspects -- that a so-called super core
has to approve specs, for *no* obvious valid reason.  As Jay Pipes
mentioned once, this indeed seems like a vestigial remnant from old
times.

FWIW, I agree with others on this thread: Nova should get rid of this
specific senseless non-process - it should have happened at least a
couple of cycles ago.


Specs were only added a couple of cycles ago... :)  And they were added 
to fill a gap, which has already been pointed out in this thread.  So if 
we remove them without a replacement for that gap, we regress.




[Snip, some sensible commentary.]




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] DevStack switching from MySQL-python to PyMySQL

2015-06-18 Thread Matt Riedemann



On 6/15/2015 6:30 AM, Sean Dague wrote:

On 06/11/2015 06:29 AM, Sean Dague wrote:

On 06/09/2015 06:42 PM, Jeremy Stanley wrote:

As discussed in the Liberty Design Summit Moving apps to Python 3
cross-project workshop, the way forward in the near future is to
switch to the pure-python PyMySQL library as a default.

 https://etherpad.openstack.org/p/liberty-cross-project-python3

To that end, support has already been implemented and tested in
DevStack, and when https://review.openstack.org/184493 merges in a
day or two this will become its default. Any last-minute objections
or concerns?

Note that similar work is nearing completion is oslo.db with
https://review.openstack.org/184392 and there are also a lot of
similar changes in flight for other repos under the same review
topic (quite a few of which have already merged).


Ok, we've had 2 days fair warning, I'm pushing the merge button here.
Welcome to the world of pure python mysql.


As a heads up for where we stand. The switch was flipped, but a lot of
neutron jobs (rally  tempest) went into a pretty high failure rate
after it was (all the other high volume jobs seemed fine).

We reverted the change here to unwedge things -
https://review.openstack.org/#/c/191010/

After a long conversation with Henry and Armando we came up with a new
plan, because we want the driver switch, and we want to figure out why
it causes a high Neutron failure rate, but we don't want to block
everything.

https://review.openstack.org/#/c/191121/ - make the default Neutron jobs
set some safe defaults (which are different than non Neutron job
defaults), but add a flag to make it possible to expose these issues.

Then add new non-voting check jobs to Neutron queue to expose these
issues - https://review.openstack.org/#/c/191141/. Hopefully allowing
interested parties to get to the bottom of these issues around the db
layer. It's in the check queue instead of the experimental queue to get
enough volume to figure out the pattern for the failures, because they
aren't 100%, and they seem to move around a bit.

Once https://review.openstack.org/#/c/191121/ is landed we'll revert
revert - https://review.openstack.org/#/c/191113/ and get everything
else back onto pymysql.

-Sean



It's not only neutron; I saw some pymysql failures in nova the other day 
for 'too many connections' or some such related error.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DB2 CI enablement on Keystone

2015-06-18 Thread Matt Riedemann



From: Brant L Knudson/Rochester/IBM@IBMUS
To: Feng Xi BJ Yan/China/IBM@IBMCN
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date: 2015/06/02 21:50
Subject: Re: Could you help to talk about DB2 CI enablement on keystone
meeting




You need to provide some evidence that this is working for keystone
before I'll bring it forward. There should be logs for successful
keystone runs.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621




From: Feng Xi BJ Yan/China/IBM@IBMCN
To: Brant L Knudson/Rochester/IBM@IBMUS
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date: 06/02/2015 04:43 AM
Subject: Could you help to talk about DB2 CI enablement on keystone meeting



Hi, Brant,
Today is the time for the keystone online meeting, but it's too late for
me (3 AM in my time zone). Could you help to talk about the DB2 CI
enablement for keystone? Appreciate it.

The following issues are all fixed:
1) merged failure is solved.
2) Log rotation time is now 60 days.
3) Logs are browsable.
An example of cinder:
http://dal05.objectstorage.softlayer.net/v1/AUTH_58396f85-2c60-47b9-aaf8-e03bc24a1a6f/cilog/94/182994/6/check/ibm-db2-ci-cinder/66f2502/logs/
4) Tempests are run by testr, not nosetests.
5) Maintainer keeps an eye on and handles the test results in time.

Best Regards :)
Bruce Yan

Yan, Fengxi (闫凤喜)
Openstack Platform Team
IBM China Systems & Technology Lab, Beijing
E-Mail: yanfen...@cn.ibm.com
Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
Park,No.8
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



You might as well hold off since 
https://review.openstack.org/#/c/190289/ is breaking devstack with a DB2 
backend, because the nova changes aren't there yet to run the new nova_api 
DB migrations against DB2.


I'll have to fold that into https://review.openstack.org/#/c/69047/ once 
I figure out why it's busted.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-20 Thread Matt Riedemann



On 6/17/2015 10:52 AM, Matt Riedemann wrote:

Without getting into the details from the etherpad [1], a few of us in
IRC today were talking about how the ceilometer compute-agent polls
libvirt directly for guest VM statistics and how ceilometer should
really be getting this information from nova via notifications sent from
a periodic task in the nova compute manager.

Nova already has the get_instance_diagnostics virt driver API which is
nice in that it has structured versioned instance diagnostic information
regardless of virt driver (unlike the v2 os-server-diagnostics API which
is a free-form bag of goodies depending on which virt driver is used,
which makes it mostly untestable and not portable).  The problem is the
get_instance_diagnostics virt driver API is per-instance, so it's not
efficient in the case that you want bulk instance data for a given
compute host.

So the idea is to add a new virt driver API to get the bulk data and
emit that via a structured versioned payload similar to
get_instance_diagnostics but for all instances.

Eventually the goal is for nova to send what ceilometer is collecting
today [2] and then ceilometer can just consume that notification rather
than doing the direct hypervisor polling it has today.

Anyway, this is the high level idea, the details/notes are in the
etherpad along with next steps.

Feel free to chime in now with reasons why this is crazy and will never
work and we shouldn't waste our time on it.

[1] https://etherpad.openstack.org/p/nova-hypervisor-bulk-stats-notify
[2]
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-compute-meters.html




Waking up from a rare nap opportunity on a Saturday, this is what was 
bothering me:


The proposal in the etherpad assumes that we are just getting bulk 
host/domain/guest VM stats from the hypervisor and sending those in a 
notification, but how do we go about filtering those out to only 
instances that were booted through Nova?


Jason pointed out the ceilometer code gets all of the non-error state 
instances from nova first [1] and then for each of those it does the 
domain lookup from libvirt, filtering out any that are in SHUTOFF state [2].


When talking about the new virt driver API for bulk stats, danpb said to 
use virConnectGetAllDomainStats with libvirt [3] but I'm not aware of 
that being able to filter out instances that weren't created by nova.  I 
don't think we want a notification from nova about the hypervisor stats 
to include things that were created outside nova, like directly through 
virsh or vCenter.


For at least libvirt, if virConnectGetAllDomainStats returns the domain 
metadata then we can filter those since there is nova-specific metadata 
in the domains created through nova [4] but I'm not sure that's true 
about the other virt types in nova (I think the vCenter driver tags VMs 
somehow as being created by OpenStack/Nova, but not sure about 
xen/hyper-v/ironic).


I guess adding support for something like bulk guest VM stats in a nova 
virt driver would have a pre-requisite of being able to uniquely 
identify guest VMs from the hypervisor as being created by nova.
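
To make that concrete, here's a rough sketch (not real nova code; it assumes 
the libvirt-python bindings and the nova metadata namespace I see in the 
domain XML [4]) of filtering the bulk stats down to nova-owned domains:

    import libvirt

    # Namespace the libvirt driver uses when writing nova metadata
    # into the domain XML.
    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.0'

    def get_nova_domain_stats(conn):
        results = []
        for dom, stats in conn.getAllDomainStats():
            try:
                # Raises libvirtError if the domain has no nova metadata,
                # i.e. it wasn't created by nova - skip those.
                dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS)
            except libvirt.libvirtError:
                continue
            results.append((dom.UUIDString(), stats))
        return results

    conn = libvirt.open('qemu:///system')
    for uuid, stats in get_nova_domain_stats(conn):
        print(uuid, stats.get('cpu.time'))

That costs an extra metadata() round-trip per domain though; filtering 
against the instance UUIDs nova already has for the host would avoid it.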


Otherwise we'd just basically be moving the same thing that ceilometer 
is doing today into nova, as a per-instance loop and then using the 
os-server-diagnostics virt driver calls from the compute manager (at 
least saving us a compute API call from ceilometer every 10 minutes).


Thoughts?

[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/discovery.py#L35
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/virt/libvirt/inspector.py#L111
[3] 
http://libvirt.org/html/libvirt-libvirt-domain.html#virConnectGetAllDomainStats

[4] http://paste.openstack.org/show/308305/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-06-26 Thread Matt Riedemann
Tempest has the TestEncryptedCinderVolumes scenario test [1] which 
creates an encrypted volume type, creates a volume from that volume 
type, boots a server instance and then attaches/detaches the 'encrypted' 
volume to/from the server instance.


This works fine in the integrated gate because LVM is used as the 
backend and the encryption providers used in the test are implemented in 
nova to work with the iscsi libvirt volume driver - it sets the 
'device_path' key in the connection_info['data'] dict that the 
encryption provider code is checking.


The calls to the encryption providers in nova during volume attach are 
based on whether or not the 'encrypted' key is set in 
connection_info['data'] returned from the os-initialize_connection 
cinder API.  In the case of iscsi and several other volume drivers in 
cinder this key is set to True if the volume's 'encryption_key_id' field 
is set in the volume object.


It was noticed that the encrypted volume tests were passing the ceph job 
even though the libvirt volume driver in nova wasn't setting the 
device_path key, so it's not actually doing encryption on the attached 
volume - but the test wasn't failing, so it's a big false positive.


Upon further inspection, it is passing because it isn't doing anything, 
and it isn't doing anything because the rbd volume driver in cinder 
isn't setting the 'encrypted' key in connection_info['data'] in its 
initialize_connection() method.


So we got to this cinder change [2] which originally was just setting 
the encrypted key for the rbd volume driver until it was pointed out 
that we should set that key globally in the volume manager if the volume 
driver isn't setting it, so that's what the latest version of the change 
does.
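
In other words, the manager-level change amounts to something like this 
(method signature and field access simplified; see [2] for the real patch):

    # Make the manager set the flag when the driver didn't, so every
    # backend reports encryption consistently.
    def initialize_connection(self, context, volume, connector):
        conn_info = self.driver.initialize_connection(volume, connector)
        if conn_info['data'].get('encrypted') is None:
            # A volume created from an encrypted type carries an
            # encryption_key_id, so use that as the signal.
            conn_info['data']['encrypted'] = bool(
                volume.get('encryption_key_id'))
        return conn_info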


The check-tempest-dsvm-full-ceph job is passing on that change because 
of a series of dependent changes [3].  Basically, a config option is 
needed in tempest to tell it whether or not to run the 
TestEncryptedCinderVolumes tests. This defaults to True for backwards 
compatibility. Then there is a devstack change to set the flag in 
tempest.conf based on an environment variable to devstack. Then there is 
a change to devstack-gate to set that flag to False for the Ceph job.
Finally, the cinder change depends on the devstack-gate change so 
everything is in order and it doesn't blow up after marking the rbd 
volume connection as encrypted - which would fail if we didn't skip the 
test.


Now the issue is going to be, there are lots of other volume drivers in 
cinder that are going to be getting this encrypted key set to True which 
is going to blow up without the support in nova for encrypting the 
volume during attach.


The glusterfs and sheepdog jobs are failing in that patch for different 
reasons actually, but we expect third party CI to fail if they don't 
configure tempest by setting TEMPEST_ATTACH_ENCRYPTED_VOLUME=False in 
their devstack run.


So the question is, is everyone OK with this and ready to make that change?

An alternative to avoid the explosion is for nova, when it detects that it 
should use an encryption provider but the 'device_path' key isn't set in 
connection_info, to fall back to the noop encryption provider and just 
ignore it. But that's putting our heads in the sand and the test keeps 
passing with a false positive - you're not actually getting encrypted 
volumes attached to your server instances, which is the point of the test.
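
A minimal sketch of what that fallback would look like (class and function 
names invented for illustration, not the actual nova code path):

    class NoOpEncryptor(object):
        # Stand-in for nova's no-op encryption provider: every operation
        # is a pass-through.
        def __init__(self, connection_info, **encryption_params):
            self.connection_info = connection_info

        def attach_volume(self, context):
            pass

        def detach_volume(self, context):
            pass

    def get_encryptor(connection_info, **encryption_params):
        # Head-in-the-sand option: quietly downgrade to no-op when the
        # virt driver couldn't give us a local device path to encrypt.
        if 'device_path' not in connection_info['data']:
            return NoOpEncryptor(connection_info, **encryption_params)
        raise NotImplementedError('real provider lookup elided')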


I'll get this on the cinder meeting agenda for next week for discussion 
before the cinder change is approved, unless we come up with other 
alternatives, like a 'supports_encryption' capability flag in cinder 
(something like that) which could tell the cinder API during a request 
to create a volume from an encrypted type that the volume driver doesn't 
support it and the request fails with a 400.  That'd be an API change 
but might be acceptable given the API is pretty much broken today already.


[1] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/test_encrypted_cinder_volumes.py

[2] https://review.openstack.org/#/c/193673/
[3] 
https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1463525,n,z


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-26 Thread Matt Riedemann



On 6/22/2015 4:55 AM, Daniel P. Berrange wrote:

On Sun, Jun 21, 2015 at 11:14:00AM -0500, Matt Riedemann wrote:



On 6/20/2015 3:35 PM, Daniel P. Berrange wrote:

On Sat, Jun 20, 2015 at 01:50:53PM -0500, Matt Riedemann wrote:

Waking up from a rare nap opportunity on a Saturday, this is what was
bothering me:

The proposal in the etherpad assumes that we are just getting bulk
host/domain/guest VM stats from the hypervisor and sending those in a
notification, but how do we go about filtering those out to only instances
that were booted through Nova?


In general I would say that is an unsupported deployment scenario to
have other random virt guests running on a nova compute node.

Having said that, when nova uses libguestfs, it will create some temp
guests via libvirt, so we do have to consider that possibility.

Even today with the general list domains virt driver call, we could be
getting domains that weren't launched by Nova I believe.


Jason pointed out the ceilometer code gets all of the non-error state
instances from nova first [1] and then for each of those it does the domain
lookup from libvirt, filtering out any that are in SHUTOFF state [2].

When talking about the new virt driver API for bulk stats, danpb said to use
virConnectGetAllDomainStats with libvirt [3] but I'm not aware of that being
able to filter out instances that weren't created by nova.  I don't think we
want a notification from nova about the hypervisor stats to include things
that were created outside nova, like directly through virsh or vCenter.

For at least libvirt, if virConnectGetAllDomainStats returns the domain
metadata then we can filter those since there is nova-specific metadata in
the domains created through nova [4] but I'm not sure that's true about the
other virt types in nova (I think the vCenter driver tags VMs somehow as
being created by OpenStack/Nova, but not sure about xen/hyper-v/ironic).


The nova database has a list of domains that it owns, so if you query the
database for a list of valid UUIDs for the host, you can use that to filter
the domains that libvirt reports by comparing UUIDs.

Regards,
Daniel



Dan, is virsh domstats using virConnectGetAllDomainStats?  I have libvirt
1.2.8 on RHEL 7.1, created two m1.tiny instances through nova and got this
from virsh domstats:

http://paste.openstack.org/show/310874/

Is that similar to what we'd see from virConnectGetAllDomainStats?  I
haven't yet written any code in the libvirt driver to use
virConnectGetAllDomainStats to see what that looks like.


Yes, that's the kind of data you'd expect.


Regards,
Daniel



Here is another issue I just thought of.  There are limits to the size 
of a message you can send through RPC right?  So what if you have a lot 
of instances running and you're pulling bulk stats on them and sending 
over rpc via a notification?  Is there the possibility that we blow that 
up on message size limits?


For libvirt/xen/hyper-v this is maybe not a big deal since the compute 
node is 1:1 with the hypervisor and I'd think in most cases you don't 
have enough instances running on that compute host to blow the size 
limit on the message payload, unless you have a big ass compute host.


But what about clustered virt drivers like vcenter and ironic?  That one 
compute node could be getting bulk stats on an entire cloud (vcenter 
cluster at least).


Maybe we could just chunk the messages/notifications if we know the rpc 
message limit?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-26 Thread Matt Riedemann



On 6/26/2015 2:17 PM, Matt Riedemann wrote:



On 6/22/2015 4:55 AM, Daniel P. Berrange wrote:

On Sun, Jun 21, 2015 at 11:14:00AM -0500, Matt Riedemann wrote:



On 6/20/2015 3:35 PM, Daniel P. Berrange wrote:

On Sat, Jun 20, 2015 at 01:50:53PM -0500, Matt Riedemann wrote:

Waking up from a rare nap opportunity on a Saturday, this is what was
bothering me:

The proposal in the etherpad assumes that we are just getting bulk
host/domain/guest VM stats from the hypervisor and sending those in a
notification, but how do we go about filtering those out to only
instances
that were booted through Nova?


In general I would say that is an unsupported deployment scenario to
have other random virt guests running on a nova compute node.

Having said that, when nova uses libguestfs, it will create some temp
guests via libvirt, so we do have to consider that possibility.

Even today with the general list domains virt driver call, we could be
getting domains that weren't launched by Nova I believe.


Jason pointed out the ceilometer code gets all of the non-error state
instances from nova first [1] and then for each of those it does
the domain
lookup from libvirt, filtering out any that are in SHUTOFF state [2].

When talking about the new virt driver API for bulk stats, danpb
said to use
virConnectGetAllDomainStats with libvirt [3] but I'm not aware of
that being
able to filter out instances that weren't created by nova.  I don't
think we
want a notification from nova about the hypervisor stats to include
things
that were created outside nova, like directly through virsh or
vCenter.

For at least libvirt, if virConnectGetAllDomainStats returns the
domain
metadata then we can filter those since there is nova-specific
metadata in
the domains created through nova [4] but I'm not sure that's true
about the
other virt types in nova (I think the vCenter driver tags VMs
somehow as
being created by OpenStack/Nova, but not sure about
xen/hyper-v/ironic).


The nova database has a list of domains that it owns, so if you
query the
database for a list of valid UUIDs for the host, you can use that to
filter
the domains that libvirt reports by comparing UUIDs.

Regards,
Daniel



Dan, is virsh domstats using virConnectGetAllDomainStats?  I have
libvirt
1.2.8 on RHEL 7.1, created two m1.tiny instances through nova and got
this
from virsh domstats:

http://paste.openstack.org/show/310874/

Is that similar to what we'd see from virConnectGetAllDomainStats?  I
haven't yet written any code in the libvirt driver to use
virConnectGetAllDomainStats to see what that looks like.


Yes, that's the kind of data you'd expect.


Regards,
Daniel



Here is another issue I just thought of.  There are limits to the size
of a message you can send through RPC right?  So what if you have a lot
of instances running and you're pulling bulk stats on them and sending
over rpc via a notification?  Is there the possibility that we blow that
up on message size limits?

For libvirt/xen/hyper-v this is maybe not a big deal since the compute
node is 1:1 with the hypervisor and I'd think in most cases you don't
have enough instances running on that compute host to blow the size
limit on the message payload, unless you have a big ass compute host.

But what about clustered virt drivers like vcenter and ironic?  That one
compute node could be getting bulk stats on an entire cloud (vcenter
cluster at least).

Maybe we could just chunk the messages/notifications if we know the rpc
message limit?



With respect to message size limit, I found a thread in the rabbitmq 
mailing list [1] talking about message size limits which basically says 
you're only bounded by resources available, but sending things too large 
is obviously a bad idea since you starve the system and can potentially 
screw up the heartbeat checking.


The actual 64K size limit I was really thinking of originally was a Qpid 
limitation that was fixed in the long long ago by bnemec [2].


So I guess for the purpose of a bulk stats notification, we'd probably 
be safe to keep the messages under 64K and just chunk through the list 
of instances.
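
A dumb-but-workable way to do the chunking (pure illustration; json here is 
just a stand-in for whatever the serializer actually produces):

    import json

    MAX_PAYLOAD_BYTES = 64 * 1024  # conservative budget per notification

    def chunk_stats(instance_stats):
        chunk, size = [], 0
        for stats in instance_stats:
            entry_size = len(json.dumps(stats))
            if chunk and size + entry_size > MAX_PAYLOAD_BYTES:
                yield chunk
                chunk, size = [], 0
            chunk.append(stats)
            size += entry_size
        if chunk:
            yield chunk

The periodic task would then emit one notification per chunk instead of one 
giant payload.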


[1] 
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-March/018699.html

[2] https://review.openstack.org/#/c/28711/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] modeling connection_info with a versioned object in os-brick

2015-06-10 Thread Matt Riedemann
This is a follow-on to the thread [1] asking about modeling the 
connection_info dict returned from the os-initialize_connection API.


The more I think about modeling that in Nova, the more I think it should 
really be modeled in Cinder with an oslo.versionedobject since it is an 
API contract with the caller (Nova in this case) and any changes to the 
connection_info should require a version change (new/renamed/dropped 
fields).


That got me thinking that if both Cinder and Nova are going to use this 
model, it needs to live in a library, so that would be os-brick now, right?


In terms of modeling, I don't think we want an object for each vendor 
specific backend since (1) there are a ton of them so it'd be like 
herding cats and (2) most are probably sharing common attributes.  So I 
was thinking something more along the lines of classes or types of 
backends, like local vs shared storage, fibre channel, etc.


I'm definitely not a storage guy so I don't know the best way to 
delineate all of these, but here is a rough idea so far. [2]  This is 
roughly based on how I see things modeled in the 
nova.virt.libvirt.volume module today, but there isn't a hierarchy there.


os-brick could contain the translation shim for converting the 
serialized connection_info dict into a hydrated ConnectionInfo object 
based on the type (have some kind of factory pattern in os-brick that 
does the translation based on driver_volume_type maybe given some mapping).


Then when Nova gets the connection_info back from Cinder 
os-initialize_connection, it can send that into os-brick's translator 
utility and get back the ConnectionInfo object and access the attributes 
from that.
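
To make the shape of this concrete, here's a hedged sketch with 
oslo.versionedobjects (every class name, the type map, and hydrate() are 
invented for illustration - os-brick has no such API today):

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class ConnectionInfo(base.VersionedObject):
        VERSION = '1.0'
        fields = {
            'driver_volume_type': fields.StringField(),
            # Simplified; the real data blob is richer than str->str.
            'data': fields.DictOfStringsField(),
        }

    @base.VersionedObjectRegistry.register
    class ISCSIConnectionInfo(ConnectionInfo):
        VERSION = '1.0'
        fields = dict(ConnectionInfo.fields,
                      device_path=fields.StringField(nullable=True))

    _TYPE_MAP = {'iscsi': ISCSIConnectionInfo}

    def hydrate(conn_info):
        """Translate a serialized connection_info dict into a typed object."""
        cls = _TYPE_MAP.get(conn_info['driver_volume_type'], ConnectionInfo)
        return cls(driver_volume_type=conn_info['driver_volume_type'],
                   data=conn_info.get('data', {}))

Nova would call hydrate() on whatever os-initialize_connection returns and 
work with attributes from there, and bumping a VERSION is what forces both 
sides to notice a contract change.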


Thoughts?

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066450.html
[2] 
https://docs.google.com/drawings/d/1geSKQXz4SqfXllq1Pk5o2YVCycZVf_i6ThY88r9YF4A/edit?usp=sharing


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Follow up actions from the Summit: please help

2015-06-11 Thread Matt Riedemann



On 6/5/2015 4:47 AM, John Garbutt wrote:

Hi,

So in the interests of filling up your inbox yet further...

We have lots of etherpads from the summit:
https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Nova

I have extracted all the action items here:
https://etherpad.openstack.org/p/YVR-nova-liberty-summit-action-items

Please do add any actions that might be missing.

Matt Riedemann wins the prize[1] for the first[2][3] completed action
item, by releasing python-novaclient with the volume actions
deprecated.


I will take any and all prizes, virtual or not.



It has been noted that I greedily took most of the actions for
myself. The name is purely the person who gets to make sure the action
happens. If you want to help (please do help!), contact the person
named, who might be able to hand over that task.

Thanks,
John

[1] it's a virtual trophy, here you go: --|
[2] may not have been the first, but whatever
[3] no, there is no prize for the last person

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann
The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all 
very similar.


I want to extract a common base class that abstracts some of the common 
code and then let the sub-classes provide overrides where necessary.
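
Roughly what I have in mind (names illustrative, not the final code):

    import os

    class LibvirtBaseFileSystemVolumeDriver(object):
        """Common mount plumbing for the NFS/GlusterFS/SMBFS/Quobyte
        drivers."""

        SHARE_TYPE = None  # e.g. 'nfs'; set by each subclass

        def _get_mount_point_base(self):
            # Could be a shared option or a backend-specific one,
            # e.g. CONF.libvirt.nfs_mount_point_base.
            raise NotImplementedError()

        def _get_mount_path(self, export):
            # Hashed-export naming elided; a flat layout for illustration.
            return os.path.join(self._get_mount_point_base(),
                                export.replace('/', '_'))

        def _mount_share(self, export, options=None):
            mount_path = self._get_mount_path(export)
            cmd = ['mount', '-t', self.SHARE_TYPE]
            if options:
                cmd.extend(options)
            cmd.extend([export, mount_path])
            # In real code: utils.execute(*cmd, run_as_root=True)
            return mount_path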


As part of this, I'm wondering if we could just have a single 
'mount_point_base' config option rather than one per backend like we 
have today:


nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per 
compute host right?  So it seems to make sense that we could have one 
option used for all 4 different driver implementations and reduce some 
of the config option noise.


I checked the os-brick change [1] proposed to nova to see if there would 
be any conflicts there and so far that's not touching any of these 
classes so seems like they could be worked in parallel.


Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann



On 6/16/2015 4:21 PM, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the common
code and then let the sub-classes provide overrides where necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.

I checked the os-brick change [1] proposed to nova to see if there would
be any conflicts there and so far that's not touching any of these
classes so seems like they could be worked in parallel.

Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/



I threw together a quick blueprint [1] just for tracking.

I'm assuming I don't need a spec for this.

[1] 
https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to microversion API code which is not in API layer

2015-06-12 Thread Matt Riedemann



On 6/12/2015 11:11 AM, Chen CH Ji wrote:

Hi
  We have [1] in the db layer and it's directly used by the API
layer; the filters come directly from the client's input.
  In this case, when doing [2] or similar changes, do we
need to consider microversion usage when we change options?
  Thanks

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4440
[2] https://review.openstack.org/#/c/144883

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Sean has started documenting some of this here:

https://review.openstack.org/#/c/191188/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]

2015-06-17 Thread Matt Riedemann



On 6/17/2015 3:53 PM, Sourabh Patwardhan wrote:

Hello,

I'm working on a new vif driver [1].
As part of the review comments, it was mentioned that a generic VIF
driver will be introduced in Liberty, which may render custom VIF
drivers obsolete.

Can anyone point me to blueprints / specs for the generic driver work?


I think that's being proposed here:

https://review.openstack.org/#/c/162468/


Alternatively, any guidance on how to proceed on my patch is most welcome.

Thanks,
Sourabh

[1] https://review.openstack.org/#/c/157616/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Matt Riedemann



On 6/16/2015 5:56 PM, Michael Still wrote:

I don't think you need a spec for this (its a refactor). That said,
I'd be interested in exploring how you deprecate the old flags. Can
you have more than one deprecated name for a single flag?

Michael

On Wed, Jun 17, 2015 at 7:29 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 6/16/2015 4:21 PM, Matt Riedemann wrote:


The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the common
code and then let the sub-classes provide overrides where necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.

I checked the os-brick change [1] proposed to nova to see if there would
be any conflicts there and so far that's not touching any of these
classes so seems like they could be worked in parallel.

Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/



I threw together a quick blueprint [1] just for tracking.

I'm assuming I don't need a spec for this.

[1]
https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






You use the deprecated option stuff in oslo.config on the existing (old) 
names so that if someone uses them they get a warning but it 
automatically just uses the new option.
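
For example (option and group names borrowed from this thread; illustrative 
only, and yes - deprecated_opts takes a list, so more than one old name can 
map to a single new flag):

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('mount_point_base',
                   default='/var/lib/nova/mnt',
                   deprecated_opts=[
                       cfg.DeprecatedOpt('nfs_mount_point_base',
                                         group='libvirt'),
                       cfg.DeprecatedOpt('glusterfs_mount_point_base',
                                         group='libvirt'),
                   ],
                   help='Directory where remote shares are mounted.'),
    ]

Using one of the old names logs a deprecation warning and the value is 
applied to the new option transparently.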


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Matt Riedemann



On 6/17/2015 8:14 AM, Duncan Thomas wrote:



On 17 June 2015 at 15:36, Dmitry Guryanov dgurya...@parallels.com wrote:

On 06/17/2015 02:14 PM, Duncan Thomas wrote:

On 17 June 2015 at 00:21, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume
drivers are
 all very similar.

 I want to extract a common base class that abstracts some
of the
 common code and then let the sub-classes provide overrides
where
 necessary.

 As part of this, I'm wondering if we could just have a single
 'mount_point_base' config option rather than one per
backend like
 we have today:

 nfs_mount_point_base
 glusterfs_mount_point_base
 smbfs_mount_point_base
 quobyte_mount_point_base

 With libvirt you can only have one of these drivers
configured per
 compute host right?  So it seems to make sense that we
could have
 one option used for all 4 different driver implementations and
 reduce some of the config option noise.


I can't claim to have tried it, but from a cinder PoV there is
nothing
stopping you having both e.g. an NFS and a gluster backend at
the same
time, and I'd expect nova to work with it. If it doesn't, I'd
consider
it a bug.


    I agree; if two volume backends use the same share definition, like
    10.10.2.3:/public, you'll get the same mount point for them.


I meant that you should be able to have two complete separate backends,
with two different mount points (e.g. /mnt/nfs, /mnt/gluster) and use
both simultaneously, e.g. two different volume types.

--
Duncan Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, I forgot about the multiple volume backend ability in Cinder so I'll 
drop the idea of having a single mount_point_base option (danpb also 
mentioned this in this thread).


I'll need to remember to put a comment in the base class about why we 
have similar but different options here.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Matt Riedemann



On 6/17/2015 7:36 AM, Dmitry Guryanov wrote:

On 06/17/2015 12:21 AM, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the
common code and then let the sub-classes provide overrides where
necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.

I checked the os-brick change [1] proposed to nova to see if there
would be any conflicts there and so far that's not touching any of
these classes so seems like they could be worked in parallel.



os-brick has the ability to mount different filesystems; you can find it
in the os_brick/remotefs/remotefs.py file. This module is already used
in cinder's FS volume drivers, which you've mentioned.


Yeah, and nova has the same thing, albeit a much older version:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/remotefs.py

In nova only the LibvirtSMBFSVolumeDriver is using it though, the other 
3 just have a very similar mount/unmount method which I'm looking to 
consolidate as part of this effort.
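
For comparison, using the os-brick flavor looks something like this (paths 
and export made up):

    from os_brick.remotefs import remotefs

    client = remotefs.RemoteFsClient(
        'nfs', root_helper='sudo',
        nfs_mount_point_base='/var/lib/nova/mnt')

    client.mount('192.168.0.10:/exports/vols', flags=['-o', 'vers=4'])
    path = client.get_mount_point('192.168.0.10:/exports/vols')

So longer term we could probably drop nova's copy and lean on os-brick here 
too.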





Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/






--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Matt Riedemann



On 6/17/2015 4:46 AM, Daniel P. Berrange wrote:

On Tue, Jun 16, 2015 at 04:21:16PM -0500, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
similar.

I want to extract a common base class that abstracts some of the common code
and then let the sub-classes provide overrides where necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we have
today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per compute
host right?  So it seems to make sense that we could have one option used
for all 4 different driver implementations and reduce some of the config
option noise.


Doesn't cinder support multiple different backends to be used ? I was always
under the belief that it did, and thus Nova had to be capable of using any
of its volume drivers concurrently.


Yeah, I forgot about this and it was pointed out elsewhere in this 
thread so I'm going to drop the common mount_point_base option idea.





Are there any concerns with this?


Not a concern, but since we removed the 'volume_drivers' config parameter,
we're now free to re-arrange the code too. I'd like us to create a subdir
nova/virt/libvirt/volume and create one file in that subdir per driver
that we have.


Sure, I'll do that as part of this work, the remotefs and quobyte 
modules can probably also live in there.  We could also arguably move 
the nova.virt.libvirt.lvm and nova.virt.libvirt.dmcrypt modules into 
nova/virt/libvirt/volume as well.





Is a blueprint needed for this refactor?


Not from my POV. We've just done a huge libvirt driver refactor by adding
the Guest.py module without any blueprint.

Regards,
Daniel



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-17 Thread Matt Riedemann
Without getting into the details from the etherpad [1], a few of us in 
IRC today were talking about how the ceilometer compute-agent polls 
libvirt directly for guest VM statistics and how ceilometer should 
really be getting this information from nova via notifications sent from 
a periodic task in the nova compute manager.


Nova already has the get_instance_diagnostics virt driver API which is 
nice in that it has structured versioned instance diagnostic information 
regardless of virt driver (unlike the v2 os-server-diagnostics API which 
is a free-form bag of goodies depending on which virt driver is used, 
which makes it mostly untestable and not portable).  The problem is the 
get_instance_diagnostics virt driver API is per-instance, so it's not 
efficient in the case that you want bulk instance data for a given 
compute host.


So the idea is to add a new virt driver API to get the bulk data and 
emit that via a structured versioned payload similar to 
get_instance_diagnostics but for all instances.


Eventually the goal is for nova to send what ceilometer is collecting 
today [2] and then ceilometer can just consume that notification rather 
than doing the direct hypervisor polling it has today.


Anyway, this is the high level idea, the details/notes are in the 
etherpad along with next steps.


Feel free to chime in now with reasons why this is crazy and will never 
work and we shouldn't waste our time on it.


[1] https://etherpad.openstack.org/p/nova-hypervisor-bulk-stats-notify
[2] 
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-compute-meters.html


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-novaclient 2.26.0 released

2015-06-03 Thread Matt Riedemann
The Nova team is beside themselves with glee to release 
python-novaclient 2.26.0.


https://pypi.python.org/pypi/python-novaclient/2.26.0

Changelog:

acf6d1f Remove unused novaclient.tests.unit.v2.utils module
3502c8a Add documentation on command deprecation process
23f1343 Deprecate volume/volume-type/volume-snapshot CRUD CLIs/APIs
e649cea Do not check requirements when loading entry points
0a327ce Eliminate test comprehensions
aa4c947 Remove redundant check for version of `requests`
22569f2 Use clouds.yaml for functional test credentials
6379287 pass credentials via config file instead of magic
9cfecf9 server-group-list support 'all_projects' parameter
de4e40a add ips to novaclient server manager

NOTE: The volume APIs/CLIs are now deprecated and will be removed in the 
first python-novaclient release after the Nova 2016.1 Mujina release.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 10:30 AM, Dave Walker wrote:

On 29 May 2015 at 14:41, Thierry Carrez thie...@openstack.org wrote:

Hi everyone,

TL;DR:
- We propose to stop tagging coordinated point releases (like 2015.1.1)
- We continue maintaining stable branches as a trusted source of stable
updates for all projects though

Long version:

At the stable branch session in Vancouver we discussed recent
evolutions in the stable team processes and how to further adapt the
work of the team in a big tent world.

One of the key questions there was whether we should continue doing
stable point releases. Those were basically tags with the same version
number (2015.1.1) that we would periodically push to the stable
branches for all projects.

Those create three problems.

(1) Projects do not all follow the same versioning, so some projects
(like Swift) were not part of the stable point releases. More and more
projects are considering issuing intermediary releases (like Swift
does), like Ironic. That would result in a variety of version numbers,
and ultimately less and less projects being able to have a common
2015.1.1-like version.

(2) Producing those costs a non-trivial amount of effort on a very small
team of volunteers, especially with projects caring about stable
branches in various amounts. We were constantly missing the
pre-announced dates on those ones. Looks like that effort could be
better spent improving the stable branches themselves and keeping them
working.

(3) The resulting stable point releases are mostly useless. Stable
branches are supposed to be always usable, and the released version
did not undergo significantly more testing. Issuing them actually
discourages people from taking whatever point in stable branches makes
the most sense for them, testing and deploying that.

The suggestion we made during that session (and which was approved by
the session participants) is therefore to just get rid of the stable
point release concept altogether for non-libraries. That said:

- we'd still do individual point releases for libraries (for critical
bugs and security issues), so that you can still depend on a specific
version there

- we'd still very much maintain stable branches (and actually focus our
efforts on that work) to ensure they are a continuous source of safe
upgrades for users of a given series

Now we realize that the cross-section of our community which was present
in that session might not fully represent the consumers of those
artifacts, which is why we expand the discussion on this mailing-list
(and soon on the operators ML).

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and get all the current stable branch HEADs at that
time.

Thanks in advance for your feedback,

[1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch

--
Thierry Carrez (ttx)


This is generally my opinion as-well, I always hoped that *every*
commit would be considered a release rather than an arbitrary tagged
date.  This empowers vendors and distributors to create their own
service pack style update on a cadence that suits their requirements
and users, rather than feeling tied to cross-vendor schedule or
feeling bad picking interim commits.

The primary push back on this when we started the stable branches was
a vendor wanting to have known release versions for their customers,
and I don't think we have had comment from that (or all) vendors.  I
hope this is seen as a positive thing, as it really is IMO.

I have a question about the library releases you mentioned still having,
as generally everything in Python is a library.  I don't think we
have a definition of what in OpenStack is considered a mere library,
compared to a project that would no longer get point releases.


A library from an OpenStack POV, from my POV :), is anything that the 
'server' projects, e.g. nova, cinder, keystone, glance, etc, depend on. 
 Primarily the oslo libraries, the clients, and everything they depend on.


It's probably easier to think of it as anything in the 
global-requirements list:


https://github.com/openstack/requirements/blob/master/global-requirements.txt

Note that nova, keystone, glance, cinder, etc aren't in that list.



I also wondered if it might make sense for us to do a better job of
storing metadata recording which shasums of each project passed the gate
together for a given commit - this might be useful both as a known good
state and, slightly unrelated, for debugging gate blockages in the future.

Thanks

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 8:41 AM, Thierry Carrez wrote:

Hi everyone,

TL;DR:
- We propose to stop tagging coordinated point releases (like 2015.1.1)
- We continue maintaining stable branches as a trusted source of stable
updates for all projects though

Long version:

At the stable branch session in Vancouver we discussed recent
evolutions in the stable team processes and how to further adapt the
work of the team in a big tent world.

One of the key questions there was whether we should continue doing
stable point releases. Those were basically tags with the same version
number (2015.1.1) that we would periodically push to the stable
branches for all projects.

Those create three problems.

(1) Projects do not all follow the same versioning, so some projects
(like Swift) were not part of the stable point releases. More and more
projects are considering issuing intermediary releases (like Swift
does), like Ironic. That would result in a variety of version numbers,
and ultimately less and less projects being able to have a common
2015.1.1-like version.

(2) Producing those costs a non-trivial amount of effort on a very small
team of volunteers, especially with projects caring about stable
branches in various amounts. We were constantly missing the
pre-announced dates on those ones. Looks like that effort could be
better spent improving the stable branches themselves and keeping them
working.

(3) The resulting stable point releases are mostly useless. Stable
branches are supposed to be always usable, and the released version
did not undergo significantly more testing. Issuing them actually
discourages people from taking whatever point in stable branches makes
the most sense for them, testing and deploying that.

The suggestion we made during that session (and which was approved by
the session participants) is therefore to just get rid of the stable
point release concept altogether for non-libraries. That said:

- we'd still do individual point releases for libraries (for critical
bugs and security issues), so that you can still depend on a specific
version there

- we'd still very much maintain stable branches (and actually focus our
efforts on that work) to ensure they are a continuous source of safe
upgrades for users of a given series

Now we realize that the cross-section of our community which was present
in that session might not fully represent the consumers of those
artifacts, which is why we expand the discussion on this mailing-list
(and soon on the operators ML).

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and get all the current stable branch HEADs at that
time.

Thanks in advance for your feedback,

[1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch



To reiterate what I said in the session, for my team personally (IBM), 
we don't align with the point release schedules on stable anyway; we 
release our own stable fix packs as needed on our own schedules. So in 
that regard I don't see a point in the stable point releases - 
especially since most of the time I don't know when they are going to 
happen anyway, so we can't plan for them accurately.


Having said that, what I mentioned in IRC the other day is the one 
upside I see to the point releases is it is a milestone that requires 
focus from the stable maintainers, which means if stable has been broken 
for a few weeks and no one has really noticed, converging on a stable 
point release at least forces attention there.


I don't think that is a very good argument for keeping stable point 
releases though, since as you said we don't do any additional testing 
above and beyond what normally happens in the Jenkins runs.  Some of the 
distributions might have extra regression testing scenarios, I'm not 
sure, but no one really spoke to that in the session from the distros 
that were present - I assume they do, but they can do that on their own 
schedule anyway IMO.


I am a bit cynical about thinking that dropping point releases will make 
people spend more time on caring about the health of the stable branches 
(persistent gate failures) or stale changes out for review.  I combed 
through a lot of open stable/icehouse changes yesterday and there were 
many that should have been abandoned 6 months ago but were just sitting 
there, and others that were good fixes to have and should have been 
merged by now.


Personally I've been trying to point out some of these in the 
#openstack-stable IRC channel when I see them so that we don't wait so 
long on these that they fall into a stable support phase where we don't 
think they are appropriate for merging anymore, but if we had acted 
sooner they'd be in.


But I'm also the new guy on the team so I've got belly fire, feel free 
to tell me to shut up. :)


--

Thanks,

Matt Riedemann

Re: [openstack-dev] [all] upcoming oslo releases

2015-06-01 Thread Matt Riedemann



On 6/1/2015 12:04 PM, Doug Hellmann wrote:

We have a bunch of Oslo libraries ready for releases tomorrow, Tuesday 2 June.

I will be releasing:

0.5.0 8685171 debtcollector
1.10.0 9a963a9 oslo.concurrency
1.12.0 02a86d2 oslo.config
0.4.0 4c9b37d oslo.context
1.10.0 42dc936 oslo.db
1.7.0 a02d901 oslo.i18n
1.3.0 6754d13 oslo.log
1.12.0 27efb36 oslo.messaging
0.5.0 757857b oslo.policy
1.8.0 2335e63 oslo.rootwrap
1.6.0 74b3f97 oslo.utils
0.3.0 a03c635 oslo.versionedobjects

If you are interested in more detail, the full set of changes to be included is 
available in http://paste.openstack.org/show/253257/

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Was there agreement on capping oslo.serialization < 2.0 in 
global-requirements on master so that nova doesn't pick up the latest 
and break with the new ISO time format stuff?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann
All changes to stable/kilo (and probably stable/juno) are broken due to 
a zake 0.2.2 release today which excludes kazoo 2.1.


tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1 
which zake 0.2.2 doesn't allow.


ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements 
for kazoo (since stable/juno g-r caps kazoo<=2.0).


We need the oslo team to create a stable/juno branch for tooz, sync g-r 
from stable/juno to tooz on stable/juno and then do a release of tooz 
that will work for stable/juno - else fix kazoo and put out a 2.1.1 
release so that zake will start working with latest kazoo.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann



On 5/27/2015 10:57 AM, Matt Riedemann wrote:

All changes to stable/kilo (and probably stable/juno) are broken due to
a zake 0.2.2 release today which excludes kazoo 2.1.

tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1
which zake 0.2.2 doesn't allow.

ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements
for kazoo (since stable/juno g-r caps kazoo<=2.0).

We need the oslo team to create a stable/juno branch for tooz, sync g-r
from stable/juno to tooz on stable/juno and then do a release of tooz
that will work for stable/juno - else fix kazoo and put out a 2.1.1
release so that zake will start working with latest kazoo.



Here is a link to the type of failure you'll see with this:

http://logs.openstack.org/56/183656/4/check/check-grenade-dsvm/3acba73/logs/old/screen-s-proxy.txt.gz

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann



On 5/27/2015 10:58 AM, Matt Riedemann wrote:



On 5/27/2015 10:57 AM, Matt Riedemann wrote:

All changes to stable/kilo (and probably stable/juno) are broken due to
a zake 0.2.2 release today which excludes kazoo 2.1.

tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1
which zake 0.2.2 doesn't allow.

ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements
for kazoo (since stable/juno g-r caps kazoo<=2.0).

We need the oslo team to create a stable/juno branch for tooz, sync g-r
from stable/juno to tooz on stable/juno and then do a release of tooz
that will work for stable/juno - else fix kazoo and put out a 2.1.1
release so that zake will start working with latest kazoo.



Here is a link to the type of failure you'll see with this:

http://logs.openstack.org/56/183656/4/check/check-grenade-dsvm/3acba73/logs/old/screen-s-proxy.txt.gz




Here is the tooz bug I reported for tracking:

https://bugs.launchpad.net/python-tooz/+bug/1459322

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 10:00 AM, David Medberry wrote:


On Fri, May 29, 2015 at 7:41 AM, Thierry Carrez thie...@openstack.org wrote:

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and get all the current stable branch HEADs at that
time.


If vendor packages are used (as many folks do) they will need to weigh
in before operators can really give valid feedback.
I've already heard from one vendor that they will continue to do
point-like releases that they will support, but we probably need a
more complete answer.

Another issue, operators pulling from stable will just need to do a bit
more diligence themselves (and this is probably appropriate.) One thing
we will do in this diligence is something like tracking the rate of new bugs
and looking for windows of opportunity where there may be semi-quiescence.


This, IMO, is about the only time right now that I see doing point 
releases on stable as worthwhile.  In other words, things have been very 
touchy in stable for at least the last 6 months, so the rare moments 
of stability with the gate on stable are when I'd cut a release before 
the next gate breaker.  You can get some examples of why here:


https://etherpad.openstack.org/p/stable-tracker



The other issue I'm aware of is that there will essentially be no
syncing across projects (except by the vendors). Operators using
upstream will need to do a better job (i.e., more burden) in making sure
all of the packages work together.





--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] Using depends-on for patches which require an approved spec

2015-05-28 Thread Matt Riedemann



On 5/27/2015 7:21 PM, John Griffith wrote:



On Wed, May 27, 2015 at 12:15 PM, Joe Gordon joe.gord...@gmail.com wrote:



On Tue, May 26, 2015 at 8:45 AM, Daniel P. Berrange berra...@redhat.com wrote:

On Fri, May 22, 2015 at 02:57:23PM -0700, Michael Still wrote:
 Hey,

 it would be cool if devs posting changes for nova which depend on us
 approving their spec could use Depends-On to make sure their code
 doesn't land until the spec does.

Does it actually bring any benefit?  Any change for which there is
a spec is already supposed to be tagged with 'Blueprint:
foo-bar-wiz'
and nova core devs are supposed to check the blueprint is approved
before +A'ing it.  So also adding a Depends-on just feels redundant
to me, and so is one more hurdle for contributors to remember to
add. If we're concerned people forget the Blueprint tag, or forget
to check blueprint approval, then we'll just have the same problem with
depends-on - people will forget to add it, and cores will forget
to check the dependent change. So this just feels like extra rules
for no gain and extra pain.


I think it does have a benefit. Giving a spec's implementation
patches a procedural -2 commonly signals to reviewers not to review
the patch (a -2 looks scary). If there was a depends-on instead, no
scary -2 would be needed, and we also wouldn't need to hunt down the
-2er and ask them to remove it (which can be a delay due to
timezones). Anything that reduces the number of procedural -2s we
need is a good thing IMHO. But that doesn't mean we should require
folks to do this, we can try it out on a few patches and see how it goes.


Regards,
Daniel
--
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|






Seems ok, but I'm wondering if maybe others are doing specs
differently.  What I mean is, we seem to be growing a long process tail:
1. spec
2. blueprint
3. patch with link to blueprint
and now
4. patch with tag Depends-On: spec

I think we used to say if there's a bp link and it's not approved, don't
merge, which seems similar.  We've had so many procedural steps
added/removed that who knows if I'm just completely out of sync or not.

Certainly not saying I oppose the idea, just wondering about how much
red-tape we create and what we do with it all.

John






I agree with jgriffith here, I don't really want to see yet another 
layer added to the onion that is the blueprint process.


We have procedural -2 for a reason.  We have had features merged in code 
before the spec and blueprint were approved, so this happens, and that's 
why I procedural -2 things when I see them (the Depends-On would be an 
insurance policy against merging code changes before the spec is 
approved, so if people want to use it, go nuts, but I don't think it 
should be a required part of the process).  As a core you could also 
find the spec change and add the Depends-On yourself since Gerrit makes 
that easy, if you're so inclined.
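
For reference, it's just a commit message footer; a sketch with a made-up
blueprint name and Change-Id:

    Add the foo-bar-wiz thing

    Implements: blueprint foo-bar-wiz
    Depends-On: I0123456789abcdef0123456789abcdef01234567

Zuul then won't merge the implementation until the change with that
Change-Id (the spec) has merged.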


When I give a procedural -2 I leave a comment explaining why and that 
I'll remove it when the blueprint is approved, with a link to the 
blueprint process wiki.


I hope people's feelings aren't getting hurt with a procedural -2, but 
seriously, it's a big project and there isn't time for hand-holding 
everything and everyone, so if people have questions about the process 
they need to use our communication mediums like IRC for getting answers.


--

Thanks,

Matt Riedemann

Re: [openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann



On 5/27/2015 11:00 AM, Matt Riedemann wrote:



On 5/27/2015 10:58 AM, Matt Riedemann wrote:



On 5/27/2015 10:57 AM, Matt Riedemann wrote:

All changes to stable/kilo (and probably stable/juno) are broken due to
a zake 0.2.2 release today which excludes kazoo 2.1.

tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1
which zake 0.2.2 doesn't allow.

ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements
for kazoo (since stable/juno g-r caps kazoo<=2.0).

We need the oslo team to create a stable/juno branch for tooz, sync g-r
from stable/juno to tooz on stable/juno and then do a release of tooz
that will work for stable/juno - else fix kazoo and put out a 2.1.1
release so that zake will start working with latest kazoo.



Here is a link to the type of failure you'll see with this:

http://logs.openstack.org/56/183656/4/check/check-grenade-dsvm/3acba73/logs/old/screen-s-proxy.txt.gz





Here is the tooz bug I reported for tracking:

https://bugs.launchpad.net/python-tooz/+bug/1459322



Now that https://review.openstack.org/#/c/173117/ is merged we *should* 
be good for a short while on stable.


Hurry and approve and recheck all the things now before the next library 
release breaks everything again.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] Liberty mid-cycle meetup

2015-06-02 Thread Matt Riedemann



On 5/11/2015 9:48 AM, Michael Still wrote:

Ok, given we've had a whole bunch of people sign up already and no
complaints here, I think this is a done deal. So, you can now assume
that the dates are final. I will email people currently registered to
let them know as well.

I have added the mid-cycle to the wiki as well.

Cheers,
Michael

On Fri, May 8, 2015 at 4:49 PM, Michael Still mi...@stillhq.com wrote:

I thought I should let people know that we've had 14 people sign up
for the mid-cycle so far.

Michael

On Fri, May 8, 2015 at 3:55 PM, Michael Still mi...@stillhq.com wrote:

As discussed at the Nova meeting this morning, we'd like to gauge
interest in a mid-cycle meetup for the Liberty release.

To that end, I've created the following eventbrite event like we have
had for previous meetups. If you sign up, you're expressing interest
in the event and if we decide there's enough interest to go ahead we
will email you and let you know it's safe to book travel and that
your ticket is now a real thing.

To save you a few clicks, the proposed details are 21 July to 23 July,
at IBM in Rochester, MN.

So, I'd appreciate it if people could take a look at:

 
https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546

Thanks,
Michael

PS: I haven't added this to the wiki list of sprints because it might
not happen. When the decision is final, I'll add it to the wiki if we
decide to go ahead.

--
Rackspace Australia




--
Rackspace Australia






The wiki page has the details:

https://wiki.openstack.org/wiki/Sprints/NovaLibertySprint

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-01 Thread Matt Riedemann



On 6/30/2015 6:47 PM, Mike Perez wrote:

On 12:24 Jun 26, Matt Riedemann wrote:
snip

So the question is, is everyone OK with this and ready to make that change?


Thanks for all your work on this Matt.

I'm fine with this. I say bite the bullet and we'll see the CI's surface that
aren't skipping or failing this test.

I will communicate with CI maintainers on the CI list about failures as I've
been doing, and reference this thread and the meeting discussion.



Just a status report on this since there are a lot of moving parts:

1. The tempest change merged: https://review.openstack.org/#/c/193831/

2. The devstack change is approved: https://review.openstack.org/#/c/193834/

3. The d-g change is still under review: 
https://review.openstack.org/#/c/193835/


4. Tom Barron opened a couple of nova bugs for issues found by third 
party CI on the cinder change:


https://bugs.launchpad.net/nova/+bug/1470142

https://bugs.launchpad.net/nova/+bug/1470562

I have patches up in nova for both of those bugs.

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends

2015-07-02 Thread Matt Riedemann



On 7/2/2015 4:12 AM, Deepak Shetty wrote:



On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com wrote:

On 12:24 Jun 26, Matt Riedemann wrote:
snip
 So the question is, is everyone OK with this and ready to make that 
change?

Thanks for all your work on this Matt.


+100, awesome debug, followup and fixing work by Matt


I'm fine with this. I say bite the bullet and we'll see the CI's
surface that
aren't skipping or failing this test.


Just curious, shouldn't this mean we need to have some way of Cinder
querying Nova
for "do you have this capability" and only then setting the 'encryption'
key in conn_info?

Better communication between nova and cinder ?

thanx,
deepak






I thought the same about some capability flag in cinder where the volume 
driver would tell the volume manager if it supported encryption and then 
the cinder volume manager would use that to tell if a request to create 
a volume from an encryption type was possible.  But the real problem in 
our case is the encryption provider support, which is currently the luks 
and cryptsetup modules in nova.  However, the encryption provider is 
completely pluggable [1] from what I can tell, the libvirt driver in 
nova just creates the provider class (assuming it can import it) and 
calls the methods defined in the VolumeEncryptor abstract base class [2].


So whether or not encryption is supported during attach is really up to 
the encryption provider implementation, the volume driver connector code 
(now in os-brick), and what the cinder volume driver is providing back 
to nova during os-initialize_connection.


I guess my point is I don't have a simple solution besides actually 
failing when we know we can't encrypt the volume during attach - which 
is at least better than the false positive we have today.


[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47
[2] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28
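
To make the pluggability concrete, here is a minimal provider sketch
against the base class in [2] - the method signatures are my reading of
that code, so treat them as assumptions:

    from nova.volume.encryptors import base


    class NoOpEncryptor(base.VolumeEncryptor):
        """Illustrative provider that performs no actual encryption."""

        def attach_volume(self, context, **kwargs):
            # A real provider (e.g. LuksEncryptor) would set up the
            # dm-crypt mapping over the connected volume here before
            # the device is handed to the guest.
            pass

        def detach_volume(self, **kwargs):
            # A real provider would tear that mapping back down here.
            pass

The libvirt driver just instantiates whatever provider class the
connection info names, so nova can't know up front whether encryption
will actually happen - which is exactly how the false positive sneaks in.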


--

Thanks,

Matt Riedemann




[openstack-dev] [packaging] how to deal with the rename of config files in neutron on upgrade?

2015-07-02 Thread Matt Riedemann
This change in neutron [1] renames the linuxbridge and openvswitch 
plugin config files.  I'm familiar with the %config(noreplace) directive 
in rpm but I'm not sure if there is a special trick with rpm to rename a 
config file while not losing the changes in the config file during the 
upgrade.


Is this just something that has to be handled with trickery in the %post 
macro where we merge the contents together if the old config file 
exists?  Would symbolic links help?


Changes like this seem like a potential giant pain in the ass for packagers.

[1] https://review.openstack.org/#/c/195277/

--

Thanks,

Matt Riedemann




[openstack-dev] [nova] bp and/or spec required for new metadata service API version?

2015-07-06 Thread Matt Riedemann
Related to this change [1] which adds a new LIBERTY openstack version to 
the metadata service API, it's pretty trivial but it's akin to 
microversions in the nova-api v2.1 code, and we require blueprints and 
specs for those changes generally.


So do we require a blueprint and optionally a spec for this type of 
change, or is it simple enough as a bug fix on its own?


[1] https://review.openstack.org/#/c/197185/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [packaging] how to deal with the rename of config files in neutron on upgrade?

2015-07-02 Thread Matt Riedemann



On 7/2/2015 10:39 AM, Kyle Mestery wrote:

On Thu, Jul 2, 2015 at 10:35 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

This change in neutron [1] renames the linuxbridge and openvswitch
plugin config files.  I'm familiar with the %config(noreplace)
directive in rpm but I'm not sure if there is a special trick with
rpm to rename a config file while not losing the changes in the
config file during the upgrade.

Is this just something that has to be handled with trickery in the
%post macro where we merge the contents together if the old config
file exists?  Would symbolic links help?

Changes like this seem like a potential giant pain in the ass for
packagers.


While a pain in the ass, this should have been done when we deprecated
the agents two cycles ago, so this was really just bleeding the pain out
longer. I flagged this as DocImpact so we can add a documentation note,
and we'll update the Release Notes with this as well.

[1] https://review.openstack.org/#/c/195277/

--

Thanks,

Matt Riedemann









Yeah, I'm just looking for ideas. Someone mentioned you could just copy 
the existing config and name it the new config so it'd have the old 
settings, and on install rpm won't overwrite it b/c of 
%config(noreplace).  That's something easy to do in %pre or %post.
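
Something along these lines in %pre is what I have in mind - the old/new
paths are assumptions based on [1], adjust for the actual rename:

    %pre
    # Seed the renamed config from the old one so local edits survive
    # the upgrade; %config(noreplace) then leaves the new file alone.
    if [ -f /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini ] && \
       [ ! -f /etc/neutron/plugins/ml2/linuxbridge_agent.ini ]; then
        mkdir -p /etc/neutron/plugins/ml2
        cp -p /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini \
              /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    fi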


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][cinder] can we deprecate the volume CLIs in novaclient?

2015-05-22 Thread Matt Riedemann



On 5/15/2015 9:38 AM, Sean Dague wrote:

On 05/15/2015 12:28 PM, Everett Toews wrote:

On May 15, 2015, at 10:28 AM, John Griffith john.griffi...@gmail.com wrote:




On Thu, May 14, 2015 at 8:29 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 This came up while talking about bug 1454369 [1].  This also came
 up at one point in kilo when we found out the volume CLIs in
 novaclient didn't work at one point and we broke the cells
 devstack exercises job because of it.

 python-novaclient uses cinder API to handle the volume CLI rather
 than going to the nova volume API.  There are issues with this
 because novaclient needs a certain endpoint/service_type setup in
 the service catalog to support cinder v1/v2 APIs (whatever
 devstack sets up today).  novaclient defaults to volume (v1) and
 if you disable that in cinder then novaclient doesn't work because
 it's not using volumev2.

 So like anyone might ask, why doesn't novaclient talk to nova
 volume APIs to do volume thingies and the answer is because the
 nova volume API doesn't handle all of the volume thingies like
 snapshots and volume types.

  So I got to thinking, why the hell are we still supporting
 volume operations via novaclient anyway?  Isn't that
 cinderclient's job?  Or python-openstackclient's job?  Can't we
 deprecate the volume CLIs in novaclient and tell people to use
 cinderclient instead since it now has version discovery [2] so
 that problem would be handled for us.

 Since we have nova volume APIs maybe we can't remove the volume
 CLIs in novaclient, but could they be limited to just operations
 that the nova API supports and then we make novaclient talk to
 nova volume APIs rather than cinder APIs (because the nova API
 will talk to cinderclient which again has the version discovery
 done for us).

 Or assuming we could deprecate the volume CLIs in novaclient, what
 would the timeline on deprecation be since it's not a server
 project with a 6 month release cycle?  I'm assuming we'd still
 have 6-12 months deprecation on a client like this because of all
 of the tooling potentially written around it.

 [1] https://bugs.launchpad.net/python-novaclient/+bug/1454369
 [2] https://review.openstack.org/#/c/145613/

I can't speak for the nova folks, however I do think removing the
volume calls from novaclient seems ok.  It was always sort of left
for compat I think, and not sure any of us really thought about just
removing it.  At this point it probably just introduces confusion and
as you're running into problems.

Seems like a good plan, and somewhat less confusing.  On a side note,
might be some other *things* in novaclient that we could look at as
well, particularly around networking.  ​


FWIW, this is already underway in jclouds-land. After a lengthy
deprecation period (still ongoing actually), we’ll be removing the Nova
volume calls but obviously keeping the volume attachment stuff.

Both the Nova and Cinder calls have coexisted for over a year with
documentation pointing from Nova to Cinder. The deprecation annotations
handle emitting warnings for the deprecated calls to increase visibility.


Everett, this is actually a different thing.

The nova volume commands do not talk to Nova's volume proxy; they go
straight to cinder through the service catalog.

Deprecating this part of nova client is probably fine, but it should
have a lengthy deprecation cycle, as it's been like this for a very long
time. It feels like it won't go away before openstack client starts
taking hold anyway.

I think this raises a more important issue of Service Catalog
Standardization. The reason we're in a bind here has as much to do with
the fact that service catalog content isn't standardized for OpenStack
services. If it were, having another cinder implementation in novaclient
wouldn't be such a bad thing, and not having to switch cli commands is
pretty handy (all hail our future osc overlords).

Fortunately, we're going to be talking about just this kind of problem
at Summit -
http://libertydesignsummit.sched.org/event/194b2589eca19956cb88ada45e985e29

-Sean



Here is the change for those following along at home:

https://review.openstack.org/#/c/185141/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova-docker] Status update

2015-05-21 Thread Matt Riedemann



On 5/17/2015 1:22 PM, Adrian Otto wrote:

Good questions Matt and Alex. Currently Magnum creates Bays (places that can 
run containers, or pods of containers, and other high level resources such as 
services, replication controllers, etc.) composed of one or more Nova instances 
(Nodes). This way, we can potentially allow the creation and management for 
containers on any compute form factor (bare metal, VM, container, etc.). The 
Nova instances Magnum uses to form the Bays come from Heat.

NOTE: There is no such thing as a nova-magnum virt driver today. The following 
discussion is theoretical.

Understanding that, it would be possible to make a nova-magnum virt driver that 
talks to Magnum to ask for an instance of type container from an *existing* 
Bay, but then Magnum would need to have access to Nova instances that are NOT 
produced by the nova-magnum driver in order to scale out the Bay by adding more 
nodes to it. If we do this, and the cloud operator does not realize the 
circular dependency when setting Nova to use a nova-magnum virt driver, it 
would be possible to create a loop where nova-magnum provides containers to 
Magnum that come from the same bay we are attempting to scale out. This would 
prevent the Bay from actually scaling out because it will be sourcing capacity 
from itself. We could allow this to work by requiring anyone who uses 
nova-magnum to also have another Nova host aggregate that uses an alternate 
virt driver (Ironic, libvirt, etc.), and having some way for Magnum’s Heat 
template to ask only for instances produced without the Magnum virt driver when 
forming or scaling Bays. I suppose a scheduling hint might be adequate for this.


Adrian


On May 17, 2015, at 11:48 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



On 5/16/2015 10:52 PM, Alex Glikson wrote:

If system containers is a viable use-case for Nova, and if Magnum is
aiming at both application containers and system containers, would it
make sense to have a new virt driver in nova that would invoke Magnum
API for container provisioning and life cycle? This would avoid (some of
the) code duplication between Magnum and whatever nova virt driver would
support system containers (such as nova-docker). Such an approach would
be conceptually similar to nova virt driver invoking Ironic API,
replacing nova-baremetal (here again, Ironic surfaces various
capabilities which don't make sense in Nova).
We have recently started exploring this direction, and would be glad to
collaborate with folks if this makes sense.

Regards,
Alex


Adrian Otto adrian.o...@rackspace.com wrote on 09/05/2015 07:55:47 PM:


From: Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 09/05/2015 07:57 PM
Subject: Re: [openstack-dev] [nova-docker] Status update

John,

Good questions. Remarks in-line from the Magnum perspective.

On May 9, 2015, at 2:51 AM, John Garbutt j...@johngarbutt.com wrote:


On 1 May 2015 at 16:14, Davanum Srinivas dava...@gmail.com wrote:

Anyone still interested in this work? :)

* there's a stable/kilo branch now (see
http://git.openstack.org/cgit/stackforge/nova-docker/).
* CI jobs are running fine against both nova trunk and nova's
stable/kilo branch.
* there's an updated nova-spec to get code back into nova tree (see
https://review.openstack.org/#/c/128753/)


To proxy the discussion from the etherpad onto the ML, we need to work
out why this lives in nova, given Magnum is the place to do container
specific things.


To the extent that users want to control Docker containers through
the Nova API (without elaborate extensions), I think a stable
in-tree nova-docker driver makes complete sense for that.


[...]



Now what's the reason for adding the Docker driver, given Nova is
considering container specific APIs out of scope, and expecting
Magnum to own that kind of thing.


I do think nova-docker should find its way into the Nova tree. This
makes containers more accessible in OpenStack, and appropriate for
use cases where users want to treat containers like they treat
virtual machines. On the subject of extending the Nova API to
accommodate special use cases of containers that are beyond the
scope of the Nova API, I think we should resist that, and focus
those container-specific efforts in Magnum. That way, cloud
operators can choose whether to use Nova or Magnum for their
container use cases depending on the range of features they desire
from the API. This approach should also result in less overlap of

efforts.



[...]

To sum up, I strongly support merging in nova-docker, with the
caveat that it operates within the existing Nova API (with few minor
exceptions). For features that require API features that are truly
container specific, we should land those in Magnum, and keep the
Nova API scoped to operations that are appropriate for “all instance types.”


Adrian



Thanks,
John

Re: [openstack-dev] [CI] gate wedged by tox >= 2.0

2015-05-26 Thread Matt Riedemann



On 5/14/2015 9:50 AM, Brant Knudson wrote:




On Thu, May 14, 2015 at 9:41 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



On 5/14/2015 5:46 AM, Sean Dague wrote:

On 05/14/2015 04:16 AM, Robert Collins wrote:

Tox 2.0 just came out, and it isolates environment variables
- which
is good, except if you use them (which we do). So everything is
broken.

https://review.openstack.org/182966

Should fix it until projects have had time to fix up their local
tox.ini's to let through the needed variables.

As an aside it might be nice to get this specifier from
global-requirements, so that it's managed in the same place
as all our
other specifiers.


This will only apply to tempest jobs, and I see lots of tempest jobs
passing without it. Do we have a bug with some failures linked
because
of it?

If this is impacting unit tests, that has to be directly fixed
there.

 -Sean


python-novaclient, neutron and python-manilaclient are being tracked
against bug https://bugs.launchpad.net/neutron/+bug/1455102.

Heat is being tracked against bug
https://bugs.launchpad.net/heat/+bug/1455065.

--

Thanks,

Matt Riedemann



Here's the fix in keystoneclient if you need an example:
https://review.openstack.org/#/c/182900/

It just added passenv = OS_*

If you're seeing jobs pass without the workaround then those jobs are
probably not running with tox>=2.0.

- Brant






There is a similar issue with horizon:

https://bugs.launchpad.net/horizon/+bug/1458928

It's specifically busted on stable/kilo.  I think it's not hitting on 
master because the jshint stuff has been cleaned up a bit there and the 
run is less strict; .jshintrc was handling that before, which is why it 
fails on stable/kilo.


I'll be pushing a change soon.  We might think about capping tox<2.0 on 
stable branches though...
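
For reference, the fix Brant points at is basically a one-liner in
tox.ini (a sketch - section layout varies per project):

    [testenv]
    # tox >= 2.0 no longer leaks the caller's environment into the
    # test env, so explicitly pass through what the tests read.
    passenv = OS_*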


--

Thanks,

Matt Riedemann




Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-26 Thread Matt Riedemann



On 5/26/2015 9:53 AM, Davanum Srinivas wrote:

We are gleeful to announce the release of:

oslo.vmware 0.13.0: Oslo VMware library

With source available at:

 http://git.openstack.org/cgit/openstack/oslo.vmware

For more details, please see the git log history below and:

 http://launchpad.net/oslo.vmware/+milestone/0.13.0

Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.vmware

Changes in oslo.vmware 0.12.0..0.13.0
-

5df9daa Add ToolsUnavailable exception
286cb9e Add support for dynamicProperty
7758123 Remove support for Python 3.3
11e7d71 Updated from global requirements
883c441 Remove run_cross_tests.sh
1986196 Use suds-jurko on Python 2
84ab8c4 Updated from global requirements
6cbde19 Imported Translations from Transifex
8d4695e Updated from global requirements
1668fef Raise VimFaultException for unknown faults
15dbfb2 Imported Translations from Transifex
c338f19 Add NoDiskSpaceException
25ec49d Add utility function to get profiles by IDs
32c61ee Add bandit to tox for security static analysis
f140b7e Add SPBM WSDL for vSphere 6.0

Diffstat (except docs and test files)
-

bandit.yaml|  130 +++
openstack-common.conf  |2 -
.../locale/fr/LC_MESSAGES/oslo.vmware-log-error.po |9 -
.../locale/fr/LC_MESSAGES/oslo.vmware-log-info.po  |3 -
.../fr/LC_MESSAGES/oslo.vmware-log-warning.po  |   10 -
oslo.vmware/locale/fr/LC_MESSAGES/oslo.vmware.po   |   86 +-
oslo.vmware/locale/oslo.vmware.pot |   48 +-
oslo_vmware/api.py |   10 +-
oslo_vmware/exceptions.py  |   13 +-
oslo_vmware/objects/datastore.py   |6 +-
oslo_vmware/pbm.py |   18 +
oslo_vmware/service.py |2 +-
oslo_vmware/wsdl/6.0/core-types.xsd|  237 +
oslo_vmware/wsdl/6.0/pbm-messagetypes.xsd  |  186 
oslo_vmware/wsdl/6.0/pbm-types.xsd |  806 ++
oslo_vmware/wsdl/6.0/pbm.wsdl  | 1104 
oslo_vmware/wsdl/6.0/pbmService.wsdl   |   16 +
requirements-py3.txt   |   27 -
requirements.txt   |8 +-
setup.cfg  |2 +-
test-requirements-bandit.txt   |1 +
tox.ini|   14 +-
27 files changed, 2645 insertions(+), 262 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 807bcfc..dd5a1aa 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-pbr>=0.6,!=0.7,<1.0
+pbr>=0.11,<2.0
@@ -23,3 +23,3 @@ PyYAML>=3.1.0
-suds>=0.4
-eventlet>=0.16.1,!=0.17.0
-requests>=2.2.0,!=2.4.0
+suds-jurko>=0.6
+eventlet>=0.17.3
+requests>=2.5.2
diff --git a/test-requirements-bandit.txt b/test-requirements-bandit.txt
new file mode 100644
index 000..38c39e1
--- /dev/null
+++ b/test-requirements-bandit.txt
@@ -0,0 +1 @@
+bandit==0.10.1





There is now a blocking vmware unit tests bug in nova due to the 
oslo.vmware 0.13.0 release:


https://bugs.launchpad.net/nova/+bug/1459021

Since the vmware driver unit test code in nova likes to stub out 
external APIs there is probably a bug in the nova unit tests rather than 
an issue in oslo.vmware, but I'm not very familiar so I can't really say.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-26 Thread Matt Riedemann



On 5/26/2015 4:19 PM, Matt Riedemann wrote:



On 5/26/2015 9:53 AM, Davanum Srinivas wrote:

We are gleeful to announce the release of:

oslo.vmware 0.13.0: Oslo VMware library

With source available at:

 http://git.openstack.org/cgit/openstack/oslo.vmware

For more details, please see the git log history below and:

 http://launchpad.net/oslo.vmware/+milestone/0.13.0

Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.vmware

Changes in oslo.vmware 0.12.0..0.13.0
-

5df9daa Add ToolsUnavailable exception
286cb9e Add support for dynamicProperty
7758123 Remove support for Python 3.3
11e7d71 Updated from global requirements
883c441 Remove run_cross_tests.sh
1986196 Use suds-jurko on Python 2
84ab8c4 Updated from global requirements
6cbde19 Imported Translations from Transifex
8d4695e Updated from global requirements
1668fef Raise VimFaultException for unknown faults
15dbfb2 Imported Translations from Transifex
c338f19 Add NoDiskSpaceException
25ec49d Add utility function to get profiles by IDs
32c61ee Add bandit to tox for security static analysis
f140b7e Add SPBM WSDL for vSphere 6.0

Diffstat (except docs and test files)
-

bandit.yaml|  130 +++
openstack-common.conf  |2 -
.../locale/fr/LC_MESSAGES/oslo.vmware-log-error.po |9 -
.../locale/fr/LC_MESSAGES/oslo.vmware-log-info.po  |3 -
.../fr/LC_MESSAGES/oslo.vmware-log-warning.po  |   10 -
oslo.vmware/locale/fr/LC_MESSAGES/oslo.vmware.po   |   86 +-
oslo.vmware/locale/oslo.vmware.pot |   48 +-
oslo_vmware/api.py |   10 +-
oslo_vmware/exceptions.py  |   13 +-
oslo_vmware/objects/datastore.py   |6 +-
oslo_vmware/pbm.py |   18 +
oslo_vmware/service.py |2 +-
oslo_vmware/wsdl/6.0/core-types.xsd|  237 +
oslo_vmware/wsdl/6.0/pbm-messagetypes.xsd  |  186 
oslo_vmware/wsdl/6.0/pbm-types.xsd |  806 ++
oslo_vmware/wsdl/6.0/pbm.wsdl  | 1104 
oslo_vmware/wsdl/6.0/pbmService.wsdl   |   16 +
requirements-py3.txt   |   27 -
requirements.txt   |8 +-
setup.cfg  |2 +-
test-requirements-bandit.txt   |1 +
tox.ini|   14 +-
27 files changed, 2645 insertions(+), 262 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 807bcfc..dd5a1aa 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-pbr>=0.6,!=0.7,<1.0
+pbr>=0.11,<2.0
@@ -23,3 +23,3 @@ PyYAML>=3.1.0
-suds>=0.4
-eventlet>=0.16.1,!=0.17.0
-requests>=2.2.0,!=2.4.0
+suds-jurko>=0.6
+eventlet>=0.17.3
+requests>=2.5.2
diff --git a/test-requirements-bandit.txt b/test-requirements-bandit.txt
new file mode 100644
index 000..38c39e1
--- /dev/null
+++ b/test-requirements-bandit.txt
@@ -0,0 +1 @@
+bandit==0.10.1





There is now a blocking vmware unit tests bug in nova due to the
oslo.vmware 0.13.0 release:

https://bugs.launchpad.net/nova/+bug/1459021

Since the vmware driver unit test code in nova likes to stub out
external APIs there is probably a bug in the nova unit tests rather than
an issue in oslo.vmware, but I'm not very familiar so I can't really say.



I have a revert for oslo.vmware here:

https://review.openstack.org/#/c/185744/

And a block on the 0.13.0 version in global-requirements here:

https://review.openstack.org/#/c/185748/

--

Thanks,

Matt Riedemann




[openstack-dev] Thoughts on ReleaseNoteImpact git commit message tag

2015-07-07 Thread Matt Riedemann
While reviewing a change in nova [1] I mentioned that we should have 
something in the release notes for Liberty on the change.  Typically for 
this I ask that the UpgradeImpact tag is added to the commit message 
because at the end of a release I search git/gerrit for any commits that 
have UpgradeImpact in them since the last major release (Kilo in this 
case) and then we should see if those need mentioning in the release 
notes upgrade impacts section for Nova (which they usually do).
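
The search itself is nothing fancy, roughly this (assuming 2015.1.0 is
the last major release tag):

    git log --oneline --no-merges --grep='UpgradeImpact' 2015.1.0..HEAD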


The thing is, UpgradeImpact isn't always appropriate for the change, but 
DocImpact is used too broadly and as far as I can tell, it's not really 
for updating release notes [2].  It's for updating stuff found in 
docs.openstack.org.


So we kicked around the idea of a ReleaseNoteImpact tag so that we can 
search for those at the end of the release in addition to UpgradeImpact.


Are any other projects already doing something like this?  Or do we just 
stick with UpgradeImpact?  The docs [3] mention release notes, but only 
for configuration changes - yet not every upgrade-impacting change in 
the release notes requires a config change; some are behavior/usage 
changes.  In this specific case in [1], it's actually 
probably an APIImpact, albeit indirectly.


Anyway, just putting this out there to see how other projects are 
handling tagging changes for inclusion in the release notes.


[1] https://review.openstack.org/#/c/189632/
[2] https://wiki.openstack.org/wiki/Documentation/DocImpact
[3] http://docs.openstack.org/infra/manual/developers.html#peer-review

--

Thanks,

Matt Riedemann




[openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-12 Thread Matt Riedemann

Bug reported here:

https://bugs.launchpad.net/taskflow/+bug/1484267

We need a 0.6.2 release of taskflow from stable/juno with the g-r caps 
(for networkx specifically) to unblock the cinder py26 job in stable/juno.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-13 Thread Matt Riedemann



On 8/12/2015 7:04 PM, Robert Collins wrote:

On 13 August 2015 at 10:31, Mike Perez thin...@gmail.com wrote:

On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

Bug reported here:

https://bugs.launchpad.net/taskflow/+bug/1484267

We need a 0.6.2 release of taskflow from stable/juno with the g-r caps (for
networkx specifically) to unblock the cinder py26 job in stable/juno.


Josh Harlow is on vacation.

I asked in #openstack-state-management channel who else can do a
release, but haven't heard back from anyone yet.


The library releases team manages all oslo releases; submit a proposed
release to openstack/releases. I need to pop out shortly but will look
in in my evening to see about getting the release tagged. If Dims or
Doug are around now they can do it too, obviously :)

-Rob




That's the easy part.  The hard part is finding someone that can create 
the stable/juno branch for the taskflow project.  I've only ever seen 
dhellmann do that for oslo libraries.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [openstack-deb] Devstack stable/juno fails to install

2015-08-19 Thread Matt Riedemann



On 8/18/2015 5:49 PM, Tony Breeds wrote:

On Tue, Aug 18, 2015 at 10:04:56PM +, Jeremy Stanley wrote:

On 2015-08-18 15:48:08 -0500 (-0500), Matt Riedemann wrote:
[...]

You'd also have to raise the cap on swiftclient in g-r stable/juno
to python-swiftclient>=2.2.0,<2.4.0.

[...]

Followed by stable point releases of everything with a stable/juno
branch and a {test-,}requirements.txt entry for python-swiftclient.
Oh, and some of those things might _also_ have overly-strict caps in
global-requirements.txt, so iterate until clean.


Oh boy ;P

Yours Tony.






Yeah, it sucks.

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [openstack-deb] Devstack stable/juno fails to install

2015-08-18 Thread Matt Riedemann



On 8/17/2015 7:59 PM, Tony Breeds wrote:

On Mon, Aug 17, 2015 at 03:51:46PM -0500, Matt Riedemann wrote:


What version of taskflow is installed?  Cinder 2014.2.3 requires this
version of taskflow [1]:

taskflow>=0.4,<0.7.0

Which should get you taskflow 0.6.2, and taskflow 0.6.2 has this requirement
[2] for futures:

futures<=2.2.0,>=2.1.6

What version of futures is installed?  Run 'pip show futures'.


---
stack@stack01:~/projects/openstack/openstack-dev/devstack$ pip show futures
---
Metadata-Version: 2.0
Name: futures
Version: 3.0.3
Summary: Backport of the concurrent.futures package from Python 3.2
Home-page: https://github.com/agronholm/pythonfutures
Author: Alex Gronholm
Author-email: alex.gronholm+p...@nextday.fi
License: BSD
Location: /usr/local/lib/python2.7/dist-packages
Requires:
---

I think this is being pulled in by an uncapped dependancy in swiftclient:
---
2015-08-17 23:41:42.287 | Collecting futures>=2.1.3 (from 
python-swiftclient<=2.3.1,>=2.2.0->glance==2014.2.4.dev6)
2015-08-17 23:41:42.326 |   Using cached futures-3.0.3-py2-none-any.whl
---

I know the devstack/juno install worked on Friday last week, so something 
changed over the weekend.

Ahh perhaps this https://review.openstack.org/#/c/212652/ ?

My solution would be to cap futures in swiftclient but I don't know that is 
correct.

Yours Tony.






Yeah, g-r for stable/juno caps python-swiftclient at 2.3.1 [1] and that 
version has an uncapped dependency on futures [2].


You'd have to get a stable/juno branch created for python-swiftclient 
from the 2.3.1 tag probably, then cap futures and release that as 2.3.2. 
 You'd also have to raise the cap on swiftclient in g-r stable/juno to 
python-swiftclient>=2.2.0,<2.4.0.


[1] 
https://github.com/openstack/requirements/blob/stable/juno/global-requirements.txt#L121
[2] 
https://github.com/openstack/python-swiftclient/blob/2.3.1/requirements.txt#L1
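
Concretely, the two edits would look something like this (the exact
futures cap is a guess - anything that keeps 3.0.x out would do):

    # python-swiftclient stable/juno requirements.txt, released as 2.3.2:
    futures>=2.1.3,<3.0.0

    # stable/juno global-requirements.txt:
    python-swiftclient>=2.2.0,<2.4.0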


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] compute - conductor and version compatibility

2015-08-18 Thread Matt Riedemann



On 8/17/2015 10:19 AM, Dan Smith wrote:

Is this documented somewhere?

I did a bit of digging and couldn't find anywhere that explicitly
required that for the J-K upgrade.  Certainly it was documented for the
I-J upgrade.


It's our model, so I don't think we need to document it for each cycle
since we don't expect it to change. We may need more general coverage
for this topic, but I don't expect the release notes to always mention it.

This isn't formal documentation, but it's relevant:

http://www.danplanet.com/blog/2015/06/26/upgrading-nova-to-kilo-with-minimal-downtime/

--Dan




We have some very loose upgrade docs in the devref [1].  Under the 
Process section, steps 4 and 5 talk about upgrading services in order 
and say conductor (implied controller) first.  Granted we need to clean 
up this page and merge with Dan's more specific blog post, but there is 
*something* in tree docs.


[1] http://docs.openstack.org/developer/nova/upgrade.html

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] should we combine a set of minor microversion bump needed fix into one microversoin bump?

2015-08-19 Thread Matt Riedemann



On 8/19/2015 2:16 PM, Matt Riedemann wrote:



On 8/19/2015 1:33 PM, Matt Riedemann wrote:



On 8/19/2015 12:18 PM, Chen CH Ji wrote:

In doing [1] [2], some suggestions were raised that those kinds of changes
need a microversion bump, which is fine.
However, another concern was raised on whether we need to combine a set of
those kinds of changes (which may only change some error code) into one
bump?

Apparently there are pros and cons to doing so: combining makes API
version bumps less frequent for minor changes,
but makes them harder to review and backport ... so any suggestions on how
to handle this? Thanks


[1]https://review.openstack.org/#/c/198753/
[2]https://review.openstack.org/#/c/173985/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC





I don't see why https://review.openstack.org/#/c/198753/ would require a
microversion bump.  We've always allowed handling 500s and turning them
into more appropriate error codes, like a 400 in this case.

As noted:

http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html



"Changing an error response code to be more accurate" is generally
acceptable.



https://review.openstack.org/#/c/173985/ doesn't require a version bump
for the same reasons, IMO.  If people are hung up on 400 vs 403 in that
change, just make it a 400, we do it both ways in the compute API.



I guess the problems are in the doc:

http://git.openstack.org/cgit/openstack/nova/tree/doc/source/api_microversion_dev.rst#n63

  - the list of status codes allowed for a particular request

Example: an API previously could return 200, 400, 403, 404 and the
change would make the API now also be allowed to return 409.

  - changing a status code on a particular response

Example: changing the return code of an API from 501 to 400.

So in the one change, just return 400.  In the service_get change, where 
you want to return a 400 but it's only returning a 404 today, I guess 
according to the doc you'd need a microversion bump.  But what do 
we do about fixing that bug in the v2 API?  Do we not fix it?  Do we 
return 404 but v2.1 would return 400 with a microversion bump?  That's 
equally inconsistent and gross IMO.
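
For the simple case, the more accurate error code pattern is just
catching the exception in the API controller and mapping it - a sketch,
where do_something and InvalidInput are stand-ins:

    from webob import exc

    try:
        self.compute_api.do_something(context, server_id)
    except exception.InvalidInput as e:
        # Previously this leaked out as a 500; map it to a 400 instead.
        raise exc.HTTPBadRequest(explanation=e.format_message())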


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] should we combine a set of minor microversion bump needed fix into one microversoin bump?

2015-08-19 Thread Matt Riedemann



On 8/19/2015 12:18 PM, Chen CH Ji wrote:

In doing [1] [2], some suggestions were raised that those kinds of changes
need a microversion bump, which is fine.
However, another concern was raised on whether we need to combine a set of
those kinds of changes (which may only change some error code) into one
bump?

Apparently there are pros and cons to doing so: combining makes API
version bumps less frequent for minor changes,
but makes them harder to review and backport ... so any suggestions on how
to handle this? Thanks


[1]https://review.openstack.org/#/c/198753/
[2]https://review.openstack.org/#/c/173985/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC





I don't see why https://review.openstack.org/#/c/198753/ would require a 
microversion bump.  We've always allowed handling 500s and turning them 
into more appropriate error codes, like a 400 in this case.


As noted:

http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html

"Changing an error response code to be more accurate" is generally 
acceptable.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] should we combine a set of minor microversion bump needed fix into one microversoin bump?

2015-08-19 Thread Matt Riedemann



On 8/19/2015 1:33 PM, Matt Riedemann wrote:



On 8/19/2015 12:18 PM, Chen CH Ji wrote:

In doing [1] [2], some suggestions were raised that those kinds of changes
need a microversion bump, which is fine.
However, another concern was raised on whether we need to combine a set of
those kinds of changes (which may only change some error code) into one
bump?

Apparently there are pros and cons to doing so: combining makes API
version bumps less frequent for minor changes,
but makes them harder to review and backport ... so any suggestions on how
to handle this? Thanks


[1]https://review.openstack.org/#/c/198753/
[2]https://review.openstack.org/#/c/173985/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC





I don't see why https://review.openstack.org/#/c/198753/ would require a
microversion bump.  We've always allowed handling 500s and turning them
into more appropriate error codes, like a 400 in this case.

As noted:

http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html


"Changing an error response code to be more accurate" is generally
acceptable.



https://review.openstack.org/#/c/173985/ doesn't require a version bump 
for the same reasons, IMO.  If people are hung up on 400 vs 403 in that 
change, just make it a 400, we do it both ways in the compute API.


--

Thanks,

Matt Riedemann




[openstack-dev] [nova][qa] libvirt + LXC CI - where's the beef?

2015-08-19 Thread Matt Riedemann
After spending a few hours on 
https://bugs.launchpad.net/nova/+bug/1370590 I'm annoyed by the fact we 
don't yet have a CI system for testing libvirt + LXC.


At the Juno midcycle in Portland I thought I remember some guy(s) from 
Rackspace talking about getting a CI job running, whatever happened with 
that?


It seems like we should be able to get this going using community infra, 
right?  Just need some warm bodies to get the parts together and figure 
out which Tempest tests can't be run with that setup - but we have the 
hypervisor support matrix to help us out as a starter.


It also seems unfair to require third party CI for libvirt + parallels 
(virtuozzo) but we don't have the same requirement for LXC.


What gives?!

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][qa] libvirt + LXC CI - where's the beef?

2015-08-20 Thread Matt Riedemann



On 8/20/2015 5:33 AM, John Garbutt wrote:

On 20 August 2015 at 03:08, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

After spending a few hours on https://bugs.launchpad.net/nova/+bug/1370590
I'm annoyed by the fact we don't yet have a CI system for testing libvirt +
LXC.


Big thank you for raising this one.


At the Juno midcycle in Portland I thought I remember some guy(s) from
Rackspace talking about getting a CI job running, whatever happened with
that?


Now you mention it, I remember that.
I haven't heard any news about that, let me poke some people.


It seems like we should be able to get this going using community infra,
right?  Just need some warm bodies to get the parts together and figure out
which Tempest tests can't be run with that setup - but we have the
hypervisor support matrix to help us out as a starter.


+1


It also seems unfair to require third party CI for libvirt + parallels
(virtuozzo) but we don't have the same requirement for LXC.


The original excuse was that it didn't bring much value, as most of
the LXC differences were in libvirt.
But given the recent bugs that have cropped up, that is totally the wrong call.

I think we need to add a log message saying:
LXC support is untested, and will be removed during Mitaka if we do
not get a CI in place.

Following the rules here:
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements

Does that make sense?


There should at least be a quality warning that it's untested.  I can 
push that up today.
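
Something along these lines in the libvirt driver (a sketch - the exact
hook point and wording are up for review; LOG and CONF are the usual nova
module-level logger and config):

    # e.g. in LibvirtDriver.init_host()
    if CONF.libvirt.virt_type == 'lxc':
        LOG.warning('The libvirt LXC driver is not tested by the '
                    'OpenStack project and thus its quality can not '
                    'be ensured.')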




John

PS
I must kick off the feature classification push, so we can
discuss that for real at the summit.

Really I am looking for folks to help with that, help monitor what
bits of the support matrix are actually tested.




--

Thanks,

Matt Riedemann



