Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-03 Thread Alex Xu
I agree with Andrew that microversions may be overused; I had a similar thought
before. Microversions should be the version control of the API 'core'
protocol.

So I can say the protocol for sending an action to a server is: 'POST
/servers/{uuid}/action'. We won't bump the microversion for any action that is
added or removed; we only bump the microversion when the protocol itself
changes, like a new way of sending actions: 'POST /servers/{uuid}/{action_name}'.
Whether the current cloud supports a specific action is answered by the
capability discovery API. Up to this point, I think my view is the same as
Andrew's (if I understand his email correctly...). But here I get stuck: if I
use microversions to define the protocol and the capabilities API for action
discovery, what should I use to version the body of an action? That sounds like
a nested protocol. Another version just for the action body? Ooh, that is too
complex for the user, too many layers of control.
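As a purely illustrative sketch of that separation: the action envelope below matches the Nova API's 'POST /servers/{uuid}/action' shape, but the capabilities payload and helper names are invented assumptions, not an agreed design.

```python
# Hypothetical sketch of the two-layer idea above. The action envelope
# is the microversioned "protocol"; the capabilities payload and the
# helper names are invented for illustration only.

def build_action_request(server_uuid, action_name, action_body):
    """Every action uses the same wire envelope, so adding or removing
    an action never changes this 'protocol' and needs no version bump."""
    return {
        "method": "POST",
        "path": "/servers/%s/action" % server_uuid,
        "body": {action_name: action_body},
    }

# Whether a given cloud supports an action would come from a separate
# capability discovery API (this response shape is an assumption).
capabilities = {"supported_actions": ["reboot", "resize", "revert_resize"]}

def can_perform(action_name):
    return action_name in capabilities["supported_actions"]
```

The open question in the paragraph above is the third layer: neither of these two mechanisms versions the body passed for a single action.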

Another example is scheduler_hints, where we discussed whether we should write
each hint into the schema. Here the protocol would be: you can put scheduler
hints into the 'scheduler_hints' field of the server boot body, each hint being
a key and a value in a dict. Which scheduler hints the cloud supports, and what
kind of value each hint accepts, should be controlled by something else.
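A minimal sketch of that split (illustrative only, not Nova's actual schema code): the boot-body protocol only constrains the shape of the hints dict, while which hint names a cloud understands would be advertised elsewhere. The 'os:scheduler_hints' key follows the compute API boot body; the validator itself is an assumption.

```python
# Illustrative sketch, not Nova's JSON-schema code: the "protocol"
# only says scheduler hints are a dict of string keys to values;
# the supported hint names are a per-cloud discovery question.

def validate_boot_body(body):
    """Validate only the *shape* of scheduler hints, not their names."""
    hints = body.get("os:scheduler_hints", {})
    if not isinstance(hints, dict):
        raise TypeError("scheduler_hints must be a dict")
    if not all(isinstance(k, str) for k in hints):
        raise TypeError("hint keys must be strings")
    return True

boot_body = {
    "server": {"name": "vm1", "imageRef": "cirros", "flavorRef": "1"},
    # arbitrary hints pass shape validation; their semantics are
    # cloud-specific and would be discovered separately
    "os:scheduler_hints": {"same_host": "abc-123", "group": "grp-1"},
}
```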

So I think we should design our API as a protocol; maybe that changes the way
we design our API.

But anyway, my thinking stops at this initial idea; I don't have a whole
solution yet.

Thanks
Alex

2016-08-04 8:54 GMT+08:00 Andrew Laski :

> I've brought some of these thoughts up a few times in conversations
> where the Nova team is trying to decide if a particular change warrants
> a microversion. I'm sure I've annoyed some people by this point because
> it wasn't germane to those discussions. So I'll lay this out in its own
> thread.
>
> I am a fan of microversions. I think they work wonderfully to express
> when a resource representation changes, or when different data is
> required in a request. This allows clients to make the same request
> across multiple clouds and expect the exact same response format,
> assuming those clouds support that particular microversion. I also think
> they work well to express that a new resource is available. However I do
> think they have some shortcomings in expressing that a resource
> has been removed. But in short I think microversions work great for
> expressing that there have been changes to the structure and format of
> the API.
>
> I think microversions are being overused as a signal for other types of
> changes in the API because they are the only tool we have available. The
> most recent example is a proposal to allow the revert_resize API call to
> work when a resizing instance ends up in an error state. I consider
> microversions to be problematic for changes like that because we end up
> in one of two situations:
>
> 1. The microversion is a signal that the API now supports this action,
> but users can perform the action at any microversion. What this really
> indicates is that the deployment being queried has upgraded to a certain
> point and has a new capability. The structure and format of the API have
> not changed so an API microversion is the wrong tool here. And the
> expected use of a microversion, in my opinion, is to demarcate that the
> API is now different at this particular point.
>
> 2. The microversion is a signal that the API now supports this action,
> and users are restricted to using it only on or after that microversion.
> In many cases this is an artificial constraint placed just to satisfy
> the expectation that the API does not change before the microversion.
> But the reality is that if the API change was exposed to every
> microversion it does not affect the ability I lauded above of a client
> being able to send the same request and receive the same response from
> disparate clouds. In other words exposing the new action for all
> microversions does not affect the interoperability story of Nova which
> is the real use case for microversions. I do recognize that the
> situation may be more nuanced and constraining the action to specific
> microversions may be necessary, but that's not always true.
>
> In case 1 above I think we could find a better way to do this. And I
> don't think we should do case 2, though there may be special cases that
> warrant it.
>
> As possible alternate signalling methods I would like to propose the
> following for consideration:
>
> Exposing capabilities that a user is allowed to use. This has been
> discussed before and there is general agreement that this is something
> we would like in Nova. Capabilities will programmatically inform users
> that a new action has been added or an existing action can be performed
> in more cases, like revert_resize. With that in place we can avoid the
> ambiguous use of microversions to do that. In the meantime I would like

Re: [openstack-dev] [gnocchi] Support for other drivers - influxdb

2016-08-03 Thread Sam Morrison
OK thanks Julien,

I’m about to go on holiday for a month so I’ll pick this up when I return. One 
of our devs is playing with this and thinking of ways to support the things 
currently not implemented/working.

Cheers,
Sam


> On 2 Aug 2016, at 8:35 PM, Julien Danjou  wrote:
> 
> On Tue, Aug 02 2016, Sam Morrison wrote:
> 
> Hi Sam!
> 
>> We have been using gnocchi for a while now with the influxDB driver
>> and are keen to get the influxdb driver back into upstream.
>> 
>> However looking into the code and how it’s arranged it looks like
>> there are a lot of assumptions that the backend storage driver is
>> carbonara based.
> 
> More or less. There is a separation layer (index/storage) and a full
> abstraction layer so it's possible to write a driver for any TSDB.
> Proof, we had an InfluxDB driver.
> Now the separation layer is not optimal for some TSDBs like InfluxDB,
> unfortunately nobody ever stepped up to enhance it.
> 
>> Is gnocchi an API for time series DBs or is it a time series DB
>> itself?
> 
> Both. It's an API over TSDBs, and it also has its own TSDB based on
> Carbonara+{Ceph,File,Swift}.
> 
>> The tests that are failing are due to the way carbonara and influx handle the
>> retention and multiple granularities differently. (which we can work around
>> outside of gnocchi for now)
>> 
>> So I guess I’m wondering if there will be support for other drivers apart 
>> from carbonara?
> 
> Sure. We dropped the InfluxDB driver because nobody was maintaining it
> and it was not passing the tests anymore. But we'd be glad to have it
> in-tree I'd say.
> 
>> We use influx because we already use it for other stuff within our 
>> organisation
>> and don’t want to set up ceph or swift (which is quite an endeavour) to 
>> support
>> another time series DB.
> 
> That makes sense. If you don't need scaling, I can only encourage you
> taking a look at using Carbonara+file rather than InfluxDB in the
> future, which I think is still a better choice.
> 
> But in the meantime, feel free to send a patch to include back InfluxDB
> in Gnocchi. As long as you're ready to help us maintain it, we're all
> open to that. :)
> 
> Cheers,
> -- 
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] nova resize on shared storage

2016-08-03 Thread Marcus Furlong
On 1 August 2016 at 14:02, Blair Bethwaite  wrote:
> On 1 August 2016 at 13:30, Marcus Furlong  wrote:
>> Looks like there is a bug open which suggests that it should be using
>> RPC calls, rather than commands executed over ssh:
>>
>> https://bugs.launchpad.net/nova/+bug/1459782
>
> I agree, no operator in their right mind wants to turn this on for a
> production cloud, but it's a capability that a lot of useful higher
> level tooling wants to exploit (e.g. right-sizing solutions). IIRC
> this was discussed some time ago and I thought there was something in
> the dev pipeline to address it. Looking at the bug it does mention the
> related live-migration cleanup work that happened ~Havana or so, I
> guess the cold-migrate/resize pathway was missed or did it get stuck
> in review?

Good question. CC:ing openstack-dev in the hope someone might know.

> On this point in the bug report:
> ==
> There's a complication though. In virt.libvirt.utils.copy_image() we
> also rely on passwordless authentication to do either "rsync" or "scp"
> to copy the image file over when doing cold migration with local
> storage. So for the case of local storage we'd still need to set up
> passwordless ssh between compute nodes to handle cold migration.
> ==
>
> Passwordless ssh for services need not be so scary, it just needs to
> be managed right... Fortunately OpenSSH has a rather cool feature
> (that lots of people seem not to know about) - it supports auth by
> certificate, by which I mean an appropriately configured sshd can
> authenticate a client's cert based on the fact that it was signed by a
> trusted SSH CA without any need to have a record of the client's
> public key. Signed certs are valid for a limited time, so you can
> imagine building some automation that created a short-lived cert on
> demand that was valid just long enough to establish the scp connection
> needed to complete a cold-migration or resize. See "man ssh-keygen" ->
> CERTIFICATES.
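The short-lived-certificate idea described above might look like the following; the ssh-keygen options are standard, but every path, identity and principal name below is a hypothetical example, not anything Nova ships.

```python
# Sketch of signing a short-lived SSH certificate with a trusted CA key.
# The ssh-keygen flags are standard OpenSSH; the key paths, identity and
# principals are hypothetical examples.

def build_sign_command(ca_key, pubkey, identity, principals, validity="+5m"):
    """Build the ssh-keygen call that certifies `pubkey`, producing
    pubkey-cert.pub valid only for `validity` -- long enough for one
    scp during a cold-migration or resize."""
    return [
        "ssh-keygen",
        "-s", ca_key,                 # CA private key doing the signing
        "-I", identity,               # certificate identity (shows in logs)
        "-n", ",".join(principals),   # principals the cert is valid for
        "-V", validity,               # validity interval, e.g. "+5m"
        pubkey,                       # public key to certify
    ]

cmd = build_sign_command("/etc/nova/ssh_ca", "/var/lib/nova/.ssh/id_rsa.pub",
                         "cold-migrate-abc123", ["nova"])
```

The sshd on the destination then only needs TrustedUserCAKeys pointing at the CA public key; it never has to know any client key in advance.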

Would it also be possible to use glance to store the image for the
local storage scenario? That would remove the ssh access requirement
from the equation completely.  Upload from source, download to
destination, then delete?
-- 
Marcus Furlong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Be prepared for bumps in the road -- tenant_id columns have been renamed to project_id

2016-08-03 Thread Henry Gessau
Hello neutrinos and especially maintainers of consuming projects.

The patch [1] to rename all tenant_id columns to project_id in the Neutron DB
has merged. Although we have tried our best to check for dependencies and side
effects, this is a very fundamental change and there may be consequences we
did not predict.

Most consuming subprojects have been pre-emptively updated to absorb the
change, while others will need to adjust reactively. See [2].

Note that sqlalchemy models can still access the tenant_id property as a
synonym for project_id, but in the database schema the column name is
project_id in all tables that have it.
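The synonym behaviour can be pictured with a dependency-free sketch (Neutron's actual models do this through SQLAlchemy's synonym machinery; the class below is only an illustration of the aliasing):

```python
# Dependency-free sketch of the synonym described above: the real
# (renamed) attribute is project_id, and tenant_id is just an alias
# that reads and writes it. Neutron does this via SQLAlchemy, not
# with a plain property as shown here.

class HasProject:
    def __init__(self, project_id):
        self.project_id = project_id   # the real, renamed column

    @property
    def tenant_id(self):               # legacy alias for project_id
        return self.project_id

    @tenant_id.setter
    def tenant_id(self, value):
        self.project_id = value

net = HasProject("p-123")
```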

If you encounter a problem due to this change, please reach out to us. Reply
here or use the #openstack-neutron IRC channel.

[1] https://review.openstack.org/335786
[2] https://etherpad.openstack.org/p/neutron-stadium-tenant2project

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][cinder] os-brick 1.5.0 release (newton)

2016-08-03 Thread no-reply
We are frolicsome to announce the release of:

os-brick 1.5.0: OpenStack Cinder brick library for managing local
volume attaches

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-brick

With package available at:

https://pypi.python.org/pypi/os-brick

Please report issues through launchpad:

http://bugs.launchpad.net/os-brick

For more details, please see below.

1.5.0
^^^^^

New Features
************

* Add Windows iSCSI connector support.

* Local attach feature in RBD connector. We use RBD kernel module to
  attach and detach volumes locally without Nova.

Changes in os-brick 1.4.0..1.5.0
--------------------------------

920880e Updated from global requirements
921ca67 Mock write and read operations to filesystem
7a3b796 Local attach feature in RBD connector
9aee7c2 Remove useless info logging in check_valid_device
1b4aee1 ScaleIO to get volume name from connection properties
94eae18 Add ignore for . directories
c5f31a1 Upgrade tox to 2.0
64fbe4d Add trace facility
e2506d1 Fix string interpolation to delayed to be handled by the logging code
2fa8f65 Replace assertEqual(None, *) with assertIsNone in tests
30f4fc7 Fix wrong path used in iscsi "multipath -l"
3979749 Updated from global requirements
14df0c7 Remove unused LOG to keep code clean
9d2bb5e Fix multipath iSCSI encrypted volume attach failure
02eda08 Updated from global requirements
528472d release note for windows iSCSI
585445e Add Windows iSCSI connector
5343bf0 Make code line length less than 79 characters
76c979c Updated from global requirements
5049929 Replace ip with portal to express more accurately
a177ec0 Fix argument order for assertEqual to (expected, observed)
0dcb996 Add fast8 to quickly test pep8 changes
d47508b Make RBDImageMetadata and RBDVolumeIOWrapper re-usable
54d4525 Disconnect multipath iscsi may logout session
bce886e Add support for processutils.execute
dc9bbb9 LVM: Create thin pool with 100%FREE


Diffstat (except docs and test files)
-------------------------------------

.gitignore |   6 +
os_brick/encryptors/base.py|   3 -
os_brick/encryptors/cryptsetup.py  |  40 +++-
os_brick/encryptors/nop.py |   4 -
os_brick/executor.py   |   9 +-
os_brick/initiator/connector.py| 221 +-
os_brick/initiator/linuxrbd.py |  38 +++-
os_brick/initiator/linuxscsi.py|   9 +-
os_brick/initiator/linuxsheepdog.py|   4 -
os_brick/initiator/windows/__init__.py |  43 
os_brick/initiator/windows/base.py | 110 +
os_brick/initiator/windows/iscsi.py| 159 +
os_brick/local_dev/lvm.py  |  35 ++-
os_brick/utils.py  |  61 +
.../notes/add-windows-iscsi-15d6b1392695f978.yaml  |   3 +
...l-attach-in-rbd-connector-c06347fb164b084a.yaml |   5 +
requirements.txt   |   5 +-
test-requirements.txt  |   5 +-
tools/fast8.sh |  15 ++
tox.ini|   9 +-
36 files changed, 1422 insertions(+), 188 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 2344258..0a7ce32 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12 +12 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.privsep>=1.5.0 # Apache-2.0
+oslo.privsep>=1.9.0 # Apache-2.0
@@ -14 +14 @@ oslo.service>=1.10.0 # Apache-2.0
-oslo.utils>=3.11.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
@@ -18,0 +19 @@ castellan>=0.4.0 # Apache-2.0
+os-win>=0.2.3 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index cb2a1a5..b0c1dd2 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,0 +7 @@ coverage>=3.6 # Apache-2.0
+ddt>=1.0.1 # MIT
@@ -8,2 +9,2 @@ python-subunit>=0.0.18 # Apache-2.0/BSD
-reno>=1.6.2 # Apache2
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.8.0 # Apache2
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Zhongjun (A)
+1  Tom will be a great addition to the core team.


From: Dustin Schoenbrun [mailto:dscho...@redhat.com]
Sent: August 4, 2016 4:55
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

+1

Tom will be a marvelous resource for us to learn from!

Dustin Schoenbrun
OpenStack Quality Engineer
Red Hat, Inc.
dscho...@redhat.com

On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton wrote:
+1

Tom will be a great asset for Manila.
Clinton

_
From: Ravi, Goutham
Sent: Wednesday, August 3, 2016 3:01 PM
Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team
To: OpenStack Development Mailing List (not for usage questions)

(Not a core member, so plus 0.02)

I’ve learned a ton of things from Tom and continue to do so!

From: Rodrigo Barbieri
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, August 3, 2016 at 2:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team


+1

Tom contributes a lot to the Manila project.

--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Aug 3, 2016 15:42, "Ben Swartzlander" wrote:
Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila) for 
more than 2 years and has spent a great deal of time on Manila reviews in the 
last release. Tom brings another package/distro point of view to the community 
as well as former storage vendor experience.

-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][openstack] os-client-config 1.19.0 release (newton)

2016-08-03 Thread no-reply
We are excited to announce the release of:

os-client-config 1.19.0: OpenStack Client Configuration Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

1.19.0
^^^^^^


New Features
************


* Add a field to vendor cloud profiles to indicate active,
  deprecated and shutdown status. A message to the user is triggered
  when attempting to use a cloud with either deprecated or shutdown
  status.


Bug Fixes
*********

* Refactor "OpenStackConfig._fix_backward_madness()" into
  "OpenStackConfig.magic_fixes()", which allows subclasses to inject
  more fixup magic into the flow during "get_one_cloud()" processing.

* Reverse the order of option selection in
  "OpenStackConfig._validate_auth()" to prefer auth options passed in
  (from argparse) over those found in clouds.yaml. This allows the
  application to override config profile auth settings.


Other Notes
***********

* Add citycloud regions for Buffalo, Frankfurt, Karlskrona and Los
  Angeles

* Add new DreamCompute cloud and deprecate DreamHost cloud

Changes in os-client-config 1.18.0..1.19.0
--

37dcc7e Add release notes for 1.19.0 release
9c699ed Add the new DreamCompute cloud
05b3c93 Fix precedence for pass-in options
1f7ecbc Update citycloud to list new regions
d9e9bb7 Add support for listing a cloud as shut down
481be16 Add support for deprecating cloud profiles
891fa1c Refactor fix magic in get_one_cloud()


Diffstat (except docs and test files)
-------------------------------------

os_client_config/config.py | 92 +-
os_client_config/defaults.json |  2 +
os_client_config/vendor-schema.json| 11 +++
os_client_config/vendors/citycloud.json|  3 +
os_client_config/vendors/dreamcompute.json | 11 +++
os_client_config/vendors/dreamhost.json|  2 +
.../cloud-profile-status-e0d29b5e2f10e95c.yaml |  6 ++
.../notes/magic-fixes-dca4ae4dac2441a8.yaml|  6 ++
.../notes/option-precedence-1fecab21fdfb2c33.yaml  |  7 ++
.../notes/vendor-updates-f11184ba56bb27cf.yaml |  4 +
11 files changed, 126 insertions(+), 40 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Some thoughts on API microversions

2016-08-03 Thread Andrew Laski
I've brought some of these thoughts up a few times in conversations
where the Nova team is trying to decide if a particular change warrants
a microversion. I'm sure I've annoyed some people by this point because
it wasn't germane to those discussions. So I'll lay this out in its own
thread.

I am a fan of microversions. I think they work wonderfully to express
when a resource representation changes, or when different data is
required in a request. This allows clients to make the same request
across multiple clouds and expect the exact same response format,
assuming those clouds support that particular microversion. I also think
they work well to express that a new resource is available. However I do
think they have some shortcomings in expressing that a resource
has been removed. But in short I think microversions work great for
expressing that there have been changes to the structure and format of
the API.

I think microversions are being overused as a signal for other types of
changes in the API because they are the only tool we have available. The
most recent example is a proposal to allow the revert_resize API call to
work when a resizing instance ends up in an error state. I consider
microversions to be problematic for changes like that because we end up
in one of two situations:

1. The microversion is a signal that the API now supports this action,
but users can perform the action at any microversion. What this really
indicates is that the deployment being queried has upgraded to a certain
point and has a new capability. The structure and format of the API have
not changed so an API microversion is the wrong tool here. And the
expected use of a microversion, in my opinion, is to demarcate that the
API is now different at this particular point.

2. The microversion is a signal that the API now supports this action,
and users are restricted to using it only on or after that microversion.
In many cases this is an artificial constraint placed just to satisfy
the expectation that the API does not change before the microversion.
But the reality is that if the API change was exposed to every
microversion it does not affect the ability I lauded above of a client
being able to send the same request and receive the same response from
disparate clouds. In other words exposing the new action for all
microversions does not affect the interoperability story of Nova which
is the real use case for microversions. I do recognize that the
situation may be more nuanced and constraining the action to specific
microversions may be necessary, but that's not always true.

In case 1 above I think we could find a better way to do this. And I
don't think we should do case 2, though there may be special cases that
warrant it.

As possible alternate signalling methods I would like to propose the
following for consideration:

Exposing capabilities that a user is allowed to use. This has been
discussed before and there is general agreement that this is something
we would like in Nova. Capabilities will programmatically inform users
that a new action has been added or an existing action can be performed
in more cases, like revert_resize. With that in place we can avoid the
ambiguous use of microversions to do that. In the meantime I would like
the team to consider not using microversions for this case. We have
enough of them being added that I think for now we could just wait for
the next microversion after a capability is added and document the new
capability there.
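To make the proposal concrete, a client-side check against such a capabilities document might look like this; the payload shape and the helper are hypothetical sketches, not an agreed Nova API:

```python
# Hypothetical capability check, as contrasted with a microversion
# probe. Neither the payload shape nor the function is an agreed Nova
# API; this only illustrates the proposed signalling split.

def supports(capabilities_doc, action, state=None):
    """True if the deployment advertises `action` (optionally usable
    from instance state `state`), regardless of microversion."""
    cap = capabilities_doc.get(action)
    if cap is None:
        return False
    if state is not None:
        return state in cap.get("allowed_states", [])
    return True

# e.g. a deployment that has picked up the revert_resize-from-error
# change would advertise the extra allowed state:
caps = {"revert_resize": {"allowed_states": ["resized", "error"]}}
```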

Secondly we could consider some indicator that exposes how new the code
in a deployment is. Rather than using microversions as a proxy to
indicate that a deployment has hit a certain point perhaps there could
be a header that indicates the date of the last commit in that code.
That's not an ideal way to implement it but hopefully it makes it clear
what I'm suggesting. Some marker that a user can use to determine that a
new behavior is to be expected, but not one that's more intended to
signal structural API changes.

Thoughts?

-Andrew

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Reviewing coverage jobs set up

2016-08-03 Thread Kiall Mac Innes

On 18/07/16 20:14, Jeremy Stanley wrote:

Note that this will only be true if the change's parent commit in
Gerrit was the branch tip at the time it landed. Otherwise (and
quite frequently in fact) you will need to identify the SHA of the
merge commit which was created at the time it merged and use that
instead to find the post job.

Without wanting to diverge too much from the topic at hand, I believe this
is why those of us who only occasionally want to look at post job output
usually just give up! Keeping this in your head for the once every few
months it's needed is hard ;)

A change being merged is always (AFAIK) part and parcel with a review
being closed, so - why not publish to /post/ (with some
level of dir sharding)?

Thanks,
Kiall


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-03 Thread Matt Riedemann

On 8/3/2016 1:44 AM, Alex Xu wrote:



2016-08-03 14:40 GMT+08:00 Alex Xu:



2016-08-02 22:09 GMT+08:00 Matt Riedemann:

On 8/2/2016 2:41 AM, Alex Xu wrote:

It is a little strange that we have two API endpoints, one is
'/servers/{uuid}/os-interfaces' and another one is
'/servers/{uuid}/os-virtual-interfaces'.

I prefer to keep os-attach-interface, because I think we should
deprecate nova-network also. Actually we deprecated all the
nova-network-related APIs in 2.36, and os-attach-interface never
supported nova-network, so it is the right choice.

So we can deprecate os-virtual-interfaces in Newton, and in
Ocata we correct the implementation to get the VIF info and tag.
os-attach-interface actually accepts the server_id, and there is
a check to ensure the port belongs to the server, so it
shouldn't be very hard to get the VIF info and tag.

And sorry that I missed this when coding the patches too... let
me know if you need any help here.



Alex,

os-interface will be deprecated; that's the API to show/list
ports for a given server.

os-virtual-interfaces is not the same, and was never a proxy for
neutron since before 2.32 we never stored anything in the
virtual_interfaces table in the nova database for neutron, but
now we do because that's where we store the VIF tags.

We have to keep os-attach-interface (attach/detach interface
actions on a server).

Are you suggesting we drop os-virtual-interfaces and change the
behavior of os-interfaces to use the nova virtual_interfaces
table rather than proxying to neutron?


Yes, but I missed the point you pointed out below. The reason is
that if we only deprecate the GET of os-interface, then when a user
wants to add an interface, they need to send the request 'POST
/servers/{uuid}/os-interface', but when they want to query the
interfaces attached to the server, they need to send a request to
'GET /servers/{uuid}/os-virtual-interfaces'. That means users access
one resource through two different API endpoints.

Initially I thought we could use the virtual_interface table to
reimplement the GET of os-interface. But as you pointed out, neutron
ports won't show up if they were created before Newton. That means
we would change the os-interface behaviour in an old microversion.
Emm... I'm a little hesitant.



Note that with os-virtual-interfaces even if we start showing
VIFs for neutron ports, any ports created before Newton won't be
in there, which might be a bit confusing.


If we keep os-virtual-interfaces, we can't ensure this API works
for all instances forever, because we can't ensure there won't be
any old instances created before Newton in a user's cloud.

So... one idea: we keep the GET of os-interface and deprecate
os-virtual-interfaces in Newton. In Ocata, we use the
virtual_interface table/object instead of the neutron API proxy,
but we fall back to proxying the neutron API when the instance has
no virtual_interface entries (detected by comparing with
network_info_cache: the instance has a VIF in network_info_cache
but no entry in the virtual_interface table).


And we remove the fallback code in the future, when we believe there
aren't any old instances left.






--

Thanks,

Matt Riedemann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  

Re: [openstack-dev] How to clone a deleted branch; are we keeping the history of the last commit id of deleted stable branches?

2016-08-03 Thread Matt Riedemann

On 8/3/2016 6:34 PM, Saju M wrote:

Hi,

I want to clone deleted icehouse branch of cinder.
I know that, we can't clone it directly.
Are we keeping the history of the last commit id of deleted stable branches.
Please share the last commit id of icehouse branch of cinder if somebody
know it.
I need that to check working code of one old feature.

https://github.com/openstack/cinder

Regards
Saju Madhavan
+91 09535134654


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Clone the repo and check out the icehouse-eol tag; that will put you in a
detached-head state, at which point you can create a branch from it.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-03 Thread Fei Long Wang

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



On 03/08/16 21:15, Flavio Percoco wrote:
> On 02/08/16 15:13 +, Hayes, Graham wrote:
>> On 02/08/2016 15:42, Flavio Percoco wrote:
>>> On 01/08/16 10:19 -0400, Sean Dague wrote:
 On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
> Thierry, Ben, Doug,
>
> How can we distinguish between. "Project is doing the right thing, but
> others are not joining" vs "Project is actively trying to keep people
> out"?

 I think at some level, it's not really that different. If we treat them
 as different, everyone will always believe they did all the right
 things, but got no results. 3 cycles should be plenty of time to drop
 single entity contributions below 90%. That means prioritizing bugs /
 patches from outside groups (to drop below 90% on code commits),
 mentoring every outside member that provides feedback (to drop below
 90% on reviews), shifting development resources towards mentoring /
 docs / on ramp exercises for others in the community (to drop below
 90% on core team).

 Digging out of a single vendor status is hard, and requires making that
 your top priority. If teams aren't interested in putting that ahead of
 development work, that's fine, but that doesn't make it a sustainable
 OpenStack project.
>>>
>>>
>>> ++ to the above! I don't think they are that different either and we
might not
>>> need to differentiate them after all.
>>>
>>> Flavio
>>>
>>
>> I do have one question - how are teams getting out of
>> "team:single-vendor" and towards "team:diverse-affiliation" ?
>>
>> We have tried to get more people involved with Designate using the ways
>> we know how - doing integrations with other projects, pushing designate
>> at conferences, helping DNS Server vendors to add drivers, adding
>> drivers for DNS Servers and service providers ourselves, adding
>> features - the lot.
>>
>> We have a lot of user interest (41% of users were interested in using
>> us), and are quite widely deployed for a non tc-approved-release
>> project (17% - 5% in production). We are actually the most deployed
>> non tc-approved-release project.
>>
>> We still have 81% of the reviews done by 2 companies, and 83% by 3
>> companies.
>>
>> I know our project is not "cool", and DNS is probably one of the most
>> boring topics, but I honestly believe that it has a place in the
>> majority of OpenStack clouds - both public and private. We are a small
>> team of people dedicated to making Designate the best we can, but we are
>> still only one company's decision to drop OpenStack / DNS development
>> away from joining the single-vendor party.
>>
>> We are definitely interested in putting community development ahead of
>> development work - but what that actual work is seems too difficult to
>> nail down. I do feel sometimes that I am flailing in the dark trying to
>> improve this.
>>
>> If projects could share how they got out of single-vendor or into
>> diverse-affiliation this could really help teams progress in the
>> community, and avoid being removed.
>>
>> Making grand statements about "work harder on community" without any
>> guidance about what we need to work on does not help the community.
>
> Zaqar has had the same issue ever since the project was created. The
team has
> been actively mentoring folks from the Outreachy program and Google
Summer of
> code whenever possible.
>
> Folks from other teams have also contributed to the project but
sometimes these
> folks were also part of the same company as the majority of Zaqar's
> contributors, which doesn't help much with this.
>
> It's not until recently that Zaqar has increased its diversity but I
believe
> it's in the edge and it's also related to the amount (or lack there of) of
> adoption it's gotten.
>
As for adoption, IMHO it really depends on the service type, and I think
that is one of the main reasons some projects lack adoption. For example,
you shouldn't expect every cloud to deploy a messaging service like Zaqar,
but every cloud based on OpenStack will have Nova for sure. That brings up
an interesting point, and I think it's related to the spec Equal
Integration Chances for all Projects. In that spec, it seems (maybe I read
it the wrong way) we agreed that not all projects are equal. However,
we're always applying the same rules to every project. That's fine;
actually, my point is that for those projects which are not as *equal* as
Nova, Neutron or Cinder, it would be nice if we could give them more time,
more space, and more support to help them grow. I would like to say again
that OpenStack is a cloud ecosystem; the goal is not to create a better
VM-creator. Though it's not good if a project is single-vendor, it's worse
if we lose a useful service from the list, because we (at least I) want to
see a reasonable diversity in service coverage.

> To me, one of the most important items is 

[openstack-dev] [nova] os-capabilities library created

2016-08-03 Thread Jay Pipes
Hi Novas and anyone interested in how to represent capabilities in a 
consistent fashion.


I spent an hour creating a new os-capabilities Python library this evening:

http://github.com/jaypipes/os-capabilities

Please see the README for examples of how the library works and how I'm 
thinking of structuring these capability strings and symbols. I intend 
os-capabilities to be the place where the OpenStack community catalogs 
and collates standardized features for hardware, devices, networks, 
storage, hypervisors, etc.


Let me know what you think about the structure of the library and 
whether you would be interested in owning additions to the library of 
constants in your area of expertise.


Next steps for the library include:

* Bringing in other top-level namespaces like disk: or net: and working 
with contributors to fill in the capability strings and symbols.
* Adding constraints functionality to the library. For instance, 
building in information to the os-capabilities interface that would 
allow a set of capabilities to be cross-checked for set violations. As 
an example, a resource provider having DISK_GB inventory cannot have 
*both* the disk:ssd *and* the disk:hdd capability strings associated 
with it -- clearly the disk storage is either SSD or spinning disk.


Anyway, lemme know your initial thoughts please.

Best,
-jay



Re: [openstack-dev] [horizon] [horizon-plugin] AngularJS 1.5.8

2016-08-03 Thread Thomas Goirand
On 08/03/2016 12:19 PM, Rob Cresswell wrote:
> Hi all,
> 
> Angular 1.5.8 is now updated in its XStatic
> repo: https://github.com/openstack/xstatic-angular
> 
> I've done some manual testing of the angular content and found no issues
> so far. I'll be checking that the JS tests and integration tests pass
> too; if they do, would it be desirable to release 1.5.8 this week, or
> wait until after N is released? It'd be nice to be in sync with current
> stable, but I don't want to cause unnecessary work a few weeks before
> plugin FF.
> 
> Thoughts?
> 
> Rob

Please go ahead. Debian Sid has 1.5.5, so even if we don't want to,
Debian will be using that version. It's better to be in sync.

Cheers,

Thomas Goirand (zigo)




[openstack-dev] How to clone a deleted branch; are we keeping the history of the last commit id of deleted stable branches?

2016-08-03 Thread Saju M
Hi,

I want to clone the deleted icehouse branch of cinder.
I know that we can't clone it directly.
Are we keeping the history of the last commit id of deleted stable branches?
Please share the last commit id of cinder's icehouse branch if somebody
knows it.
I need it to check the working code of an old feature.

https://github.com/openstack/cinder

Regards
Saju Madhavan
+91 09535134654


Re: [openstack-dev] [horizon] [horizon-plugin] AngularJS 1.5.8

2016-08-03 Thread Richard Jones
Let's do it, yep.

On 3 August 2016 at 20:19, Rob Cresswell 
wrote:

> Hi all,
>
> Angular 1.5.8 is now updated in its XStatic repo:
> https://github.com/openstack/xstatic-angular
>
> I've done some manual testing of the angular content and found no issues
> so far. I'll be checking that the JS tests and integration tests pass too;
> if they do, would it be desirable to release 1.5.8 this week, or wait until
> after N is released? It'd be nice to be in sync with current stable, but I
> don't want to cause unnecessary work a few weeks before plugin FF.
>
> Thoughts?
>
> Rob
>
>
>


Re: [openstack-dev] [Manila] Migration APIs 2-phase vs. 1-phase

2016-08-03 Thread Rodrigo Barbieri
I would like to add more context to the mail so everybody is brought up to
speed before we get into more details on tomorrow's meeting.

So the main topic is what we do with the 'migration-start' API, which is
currently either a 1-phase migration (migrates and completes) or a 2-phase
one (migrates up to the point before it becomes disruptive, and the
administrator then invokes the 'migration-complete' API).

Briefly addressing a specific detail in Ben's email, there is currently no
polling in the code to perform start+complete in 1 shot.

The 'complete' parameter is what indicates if migration is 1-phase or
2-phase. True indicates it is 1-phase, False indicates it is 2-phase.

For all variations *(A)*, *(B)* and *(C)* below, if 'complete' is False,
then the user has to invoke migration-complete to finish the migration,
thus invoking the driver's migration-complete. *(D)* behaves as if
'complete' were always False.

*A) What we have today*

User invokes migration_start(complete)
__Manager invokes driver's migration_start(complete)
Driver holds the request thread to perform migration, polling the
backend until the 1st phase is completed, or all migration is completed,
depending on 'complete' parameter value.
__Manager sets the corresponding status: 'driver_phase1_done' or
'migration_success' depending on 'complete' parameter value.


*B) Driver interface improvement*

We decided that it was not good to have a driver method behavior variation
based on the parameter, and change it to:

User invokes migration_start(complete)
__Manager invokes driver's migration_start()
Driver holds the request thread to perform migration, polling the
backend until the 1st phase is completed.
__Manager invokes driver's migration_complete or sets status to
'driver_phase1_done' depending on 'complete' parameter value.

*C) Async driver interface with Manager polling, possibly better recovery*

There is this concern that if the service is restarted during a migration,
it may be hard to continue migration from where it left off. So this new
idea came up:

User invokes migration_start(complete)
__Manager invokes driver's migration_start()
Driver invokes migration in storage and ends.
__Manager stays in a loop invoking driver's migration_continue until it
returns progress = 100%
Driver performs next vendor specific steps to migrate share and return
progress.
__Manager invokes driver's migration_complete or sets status to
'driver_phase1_done' depending on 'complete' parameter value.

*D) No 1-phase migration, always manually 2-phase*

Additionally, we discussed removing the 1-phase option entirely. If the
user wants migration to complete automatically, for instance while leaving
the share migrating overnight, they can run a script outside of Manila to
monitor the status and trigger migration-complete.
This just simplifies the last Manager step in *(B)* or *(C)* to always set
the status to 'driver_phase1_done'.
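As a rough illustration of variation *(C)*, the manager-side loop might look like the sketch below. All class and method names here are simplified stand-ins, not Manila's real interfaces, and the fake driver "migrates" in two steps just to exercise the loop:

```python
# Sketch of the manager-side polling loop in variation (C); not Manila's
# actual code.
import time


class FakeDriver:
    """Stands in for a share driver that migrates in small backend steps."""

    def __init__(self):
        self.progress = 0

    def migration_start(self):
        # Kick off the migration in the storage backend and return quickly.
        self.progress = 0

    def migration_continue(self):
        # Perform the next vendor-specific step and report progress.
        self.progress = min(100, self.progress + 50)
        return self.progress

    def migration_complete(self):
        return "migration_success"


def migrate(driver, complete, poll_interval=0):
    driver.migration_start()
    # The manager, not the driver, owns the polling loop, so a restarted
    # service could in principle resume from here.
    while driver.migration_continue() < 100:
        time.sleep(poll_interval)
    # 1-phase ('complete' True) finishes right away; 2-phase stops at
    # phase 1 and waits for the user to invoke migration-complete.
    return driver.migration_complete() if complete else "driver_phase1_done"


one_phase = migrate(FakeDriver(), complete=True)
two_phase = migrate(FakeDriver(), complete=False)
```

Variation *(D)* would simply drop the `complete` flag and always end at 'driver_phase1_done'.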



Regards,

On Tue, Aug 2, 2016 at 2:03 PM, Ben Swartzlander 
wrote:

> It occurred to me that if we write the 2-phase migration APIs correctly,
> then it will be fairly trivial to implement 1-phase migration outside
> Manila (in the client, or even higher up).
>
> I would like to propose that we change the migration API to actually work
> that way, because I think it will have positive impact on the driver
> interface and it will make the internals for migration a lot simpler.
> Specifically, I'm proposing that the Manila REST API only supports
> starting/completing migrations, and querying the status of an ongoing
> migration -- there should be no automatic looping inside Manila to perform
> a start+complete in 1 shot.
>
> Additionally I think it makes sense to make all the migration driver
> interfaces more asynchronous, but that change is less urgent. Getting the
> driver interface exactly right is less important than getting the REST API
> right in Newton. Nevertheless, I think we should aim for a driver interface
> that expects all the migration calls to return quickly and for status
> polling to occur automatically on long running operations. This will enable
> much better behavior when restarting services during a migration.
>
> I'm going to put a topic on the meeting agenda for Thursday to discuss
> this in more detail, but if anyone has other feelings please chime in here.
>
> -Ben Swartzlander
>
>
>



-- 
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-08-03 Thread Matt Riedemann

On 8/3/2016 3:02 PM, Znoinski, Waldemar wrote:

Thanks
Much appreciated

Making use of the opportunity here... what's the next big thing a CI (like one 
testing NFV) should be doing? (multinode or there's something else more 
important?)



Moshe Levi might have more input here, but I think multinode is the 
biggest one so that we can test resize/migrate of servers with NFV features.


--

Thanks,

Matt Riedemann




[openstack-dev] Project mascots update

2016-08-03 Thread Heidi Joy Tretheway
Steve Hardy wrote:
"I have one question regarding the review process for the logos when they are 
drafted? In the cases where projects have their existing community generated 
logos I can imagine there will be a preference to stick with something that’s 
somehow derived from their existing logo…" 

HJ: You’re right, Steve. That’s why every project that had an existing 
community-illustrated logo had first option to keep that mascot in the revised 
logo, and in most cases they chose to do this. So Oslo’s moose, Senlin’s 
forest, Tacker’s squid, and Cloudkitty’s cat (among others) will still be the 
prominent feature in their logo.

Steve Hardy wrote:
“In cases where a new logo is produced I'm sure community enthusiasm and 
acceptance will be greater if team members have played a part in the logo 
design process or at least provided some feedback prior to the designs being 
declared final?”

HJ: Absolutely. That’s why we encouraged project teams to work together to 
select their mascot. I received dozens of team etherpads and Condorcet polls 
from PTLs to show how the team decided their mascot candidates. The PTLs 
confirmed their winners for the list you see on 
http://www.openstack.org/project-mascots 
. You can also see an example of an 
illustration style there, and we expect to have the first five logos (with the 
final illustration style) in hand shortly. 

It’s going to be a major effort to complete 50 illustrations x 3 logo 
variations prior to Barcelona, but I think we can make it. That said, it’s not 
possible to do several rounds of revisions with each project team and the 
illustrators. What I’ve been doing instead is listening carefully to project 
team requests and pulling photos to share with the illustrators that best show 
what the teams intend. I’m happy to share that with anyone who asks.
  
Heidi Joy
__
Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769  |  skype: heidi.tretheway







Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Dustin Schoenbrun
+1

Tom will be a marvelous resource for us to learn from!

Dustin Schoenbrun
OpenStack Quality Engineer
Red Hat, Inc.
dscho...@redhat.com

On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton 
wrote:

> +1
>
> Tom will be a great asset for Manila.
>
> Clinton
>
> _
> From: Ravi, Goutham 
> Sent: Wednesday, August 3, 2016 3:01 PM
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> reviewer team
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
>
> (Not a core member, so plus 0.02)
>
>
>
> I’ve learned a ton of things from Tom and continue to do so!
>
>
>
> *From: *Rodrigo Barbieri 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, August 3, 2016 at 2:48 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> reviewer team
>
>
>
> +1
>
> Tom contributes a lot to the Manila project.
>
> --
> Rodrigo Barbieri
> Computer Scientist
> OpenStack Manila Core Contributor
> Federal University of São Carlos
>
>
>
> On Aug 3, 2016 15:42, "Ben Swartzlander"  wrote:
>
> Tom (tbarron on IRC) has been working on OpenStack (both cinder and
> manila) for more than 2 years and has spent a great deal of time on Manila
> reviews in the last release. Tom brings another package/distro point of
> view to the community as well as former storage vendor experience.
>
> -Ben Swartzlander
> Manila PTL
>
>
>
>
>
>
>


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread yang, xing
+1.  Tom will be an excellent addition to the core team!

Thanks,
Xing



From: Ben Swartzlander [b...@swartzlander.org]
Sent: Wednesday, August 3, 2016 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

Tom (tbarron on IRC) has been working on OpenStack (both cinder and
manila) for more than 2 years and has spent a great deal of time on
Manila reviews in the last release. Tom brings another package/distro
point of view to the community as well as former storage vendor experience.

-Ben Swartzlander
Manila PTL



Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Knight, Clinton
+1

Tom will be a great asset for Manila.

Clinton

_
From: Ravi, Goutham
Sent: Wednesday, August 3, 2016 3:01 PM
Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team
To: OpenStack Development Mailing List (not for usage questions)


(Not a core member, so plus 0.02)

I've learned a ton of things from Tom and continue to do so!

From: Rodrigo Barbieri
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, August 3, 2016 at 2:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team


+1

Tom contributes a lot to the Manila project.

--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Aug 3, 2016 15:42, "Ben Swartzlander" wrote:
Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila) for 
more than 2 years and has spent a great deal of time on Manila reviews in the 
last release. Tom brings another package/distro point of view to the community 
as well as former storage vendor experience.

-Ben Swartzlander
Manila PTL



Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-08-03 Thread Znoinski, Waldemar
Thanks
Much appreciated

Making use of the opportunity here... what's the next big thing a CI (like one 
testing NFV) should be doing? (multinode or there's something else more 
important?)

 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Wednesday, August 3, 2016 8:28 PM
 >To: openstack-dev@lists.openstack.org
 >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >
 >On 8/3/2016 8:18 AM, Znoinski, Waldemar wrote:
 >> [WZ] Hi Matt, do we need, a Gerrit-style, +2 on that from someone?
 >> Infra Team doesn't keep these settings in their puppet/config files on git -
 >all the Gerrit changes are done via Gerrit GUI so they rely on Cores to add 
 >CIs
 >to the appropriate ci group, nova-ci in this case.
 >>
 >>
 >
 >Sorry, I wasn't sure what the next step here was, I guess it was the nova-ci
 >membership change, which is done now:
 >
 >https://review.openstack.org/#/admin/groups/511,members
 >
 >--
 >
 >Thanks,
 >
 >Matt Riedemann
 >
 >
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.




[openstack-dev] [meghdwar] meghdwar API and infra and repo integration issues and BOF help sought

2016-08-03 Thread prakash RAMCHANDRAN

Summary of the Aug 3 call and follow-ups for the next IRC call on Aug 10 in
#openstack-meghdwar.

Conclusions from today's call:

1. Need to get the Cloudlet python client working on the CLI on Ubuntu 16.04.
The Cloudlet failure on Ubuntu 16.04 is to be reviewed after testing a xenial
VM on an OpenStack install:
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
This uses an ssh keypair from the testing Linux terminal to the OpenStack host,
either with
nova keypair-add --pub-key ~/.ssh/id_rsa.pub default
or by uploading a key via the Horizon dashboard, then creating an image and an
instance to log into 16.04 xenial for fab testing. Earlier, "fab install"
showed "useradd stack kvm" failing; it could be a python3 issue or a missing
library, see
http://forum.openedgecomputing.org/t/fab-module-support-for-ubuntu-16-04/92
If you find a solution or answers, post them at the above link or reply back
and I will post them.

2. Update the README.rst file explaining the "Edge Cloud Gateway Services" and
update using the cookiecutter template from openstack to get the master repo
onto infra.
Original master: https://review.openstack.org/#/c/319466/
New patch: https://review.openstack.org/#/c/350714/
Merge or rebase them as suggested by the infra team; we should merge into the
original master 319466 before or after fixing errors.

3. Errors in the new patch:
ERROR: py34-constraints: InterpreterNotFound: python3.4
ERROR: py27-constraints: commands failed
ERROR: pypy-constraints: InterpreterNotFound: pypy
pep8-constraints: commands succeeded
But "python3 -V" shows Python 3.5.1+, so it is installed on the submitting
host? Do we need to apply tox constraints as in
https://pypi.python.org/pypi/tox-travis
and add the py27/34 constraints to tox.ini to help pass the tox tests?
If you have any idea on this, please respond.

4. doc & specs for Meghdwar development (C-Create, R-Read, U-Update, D-Delete,
X-Execute).
Considering adding specs for Edge Cloud Gateway services:
CRUDX [Edge Cloud Gateway] = All-in-one devstack controller
CRUDX [Edge Server] = Compute nodes
CLI = Cloudlet lib (fab) installed as on KVM on Ubuntu 16.04 (see item 1).
(Should this be a separate python-cloudlet-client project?)
CRUDX [Cloudlet] = OEC Cloudlet
Register/Deregister [ECloud-CCloud], where ECloud is on the Edge Cloud Gateway
and CCloud on the NSP (ASP registers to CSP, which registers to NSP).
CRUDX Catalog [ECloud / CCloud]. For the Catalog, should we use Magnum or its
Horizon application-catalog-ui with Image and Orchestration for both
components and Applications (Cloudlet), or build our own? For managing
Cloudlets, should we ask for Senlin to be modified for Cloudlet distribution
as a new requirement, for which Meghdwar will provide a two-node cluster with
binding requirements (one application per VM on a compute node), or build our
own simple Catalog Binder to serve applications from the edge, provisioning
mobility with handoff?

Finally, these specs for the API will be debated and pushed under doc specs.

5. Any volunteers to support a BOF at Barcelona for Meghdwar? Please help by
setting up an appropriate etherpad entry and letting everyone know. We do
intend a BOF session; how do we register for it?

Thanks,
Prakash



[openstack-dev] [Neutron] Can we recheck mindfully?

2016-08-03 Thread Armando M.
Folks,

I have been noticing that some of us use 'recheck' as a knee-jerk reaction.
Please STOP!

Please take the time to look into the failure mode, see if there's a bug
reported already, help the triage process, and ultimately, once you're sure
that the issue is not introduced by your patch, consider typing 'recheck
bug #' mindfully. If the gate queue is hovering at over 400 jobs [0],
we need to be conscious of the fact that the gate is busy and we should
back off.

Besides, there is no point in jamming stuff through the gate if there's a
burning issue that needs to be flushed out.

For checking gate instabilities, tools [1,2,3] are your friends. If you
don't know how to use them, please reach out to me and I'll be happy to
walk you through.

Cheers,
Armando

[0] http://status.openstack.org/zuul/
[1] http://grafana.openstack.org/dashboard/db/neutron-failure-rate
[2]
http://status.openstack.org/openstack-health/#/g/project/openstack~2Fneutron
[3] http://status.openstack.org/elastic-recheck/


Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-08-03 Thread Matt Riedemann

On 8/3/2016 8:18 AM, Znoinski, Waldemar wrote:

[WZ] Hi Matt, do we need, a Gerrit-style, +2 on that from someone?
Infra Team doesn't keep these settings in their puppet/config files on git - 
all the Gerrit changes are done via Gerrit GUI so they rely on Cores to add CIs 
to the appropriate ci group, nova-ci in this case.




Sorry, I wasn't sure what the next step here was, I guess it was the 
nova-ci membership change, which is done now:


https://review.openstack.org/#/admin/groups/511,members

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Andrew Laski


On Wed, Aug 3, 2016, at 02:24 PM, Chris Friesen wrote:
> On 08/03/2016 10:53 AM, Andrew Laski wrote:
> > I think the discussion about whether or not this needs a microversion is 
> > missing
> > the bigger question of whether or not this should be in the API to begin 
> > with.
> 
> I don't think it's actually in the API.

It's allowing the revert_resize API call to be used in more situations
by relaxing the vm_state checks that occur. I didn't see anything that
calls that code in the event of an error, it just leaves it open for
users to call it.

> 
> > If it's safe to rollback from this error state why not just do that
> > automatically in Nova?
> 
> I think we do for other cases, my understanding is that the issue is
> whether we 
> would need a new microversion for the API because we're changing the 
> user-visible behaviour of the migrate/resize operation when something
> goes 
> wrong.  (IE rolling back to an active state on the source rather than
> staying in 
> an ERROR state.)
> 
> > To me it seems like adding a microversion because a policy rule was
> > changed.
> 
> I believe this is exactly what it is.
> 
>  > I know we should have some sort of signal here for users, but I think
> > we need to look at different ways to signal this type of change.
> 
> How?  The microversion is (currently) the only way the client has of
> determining 
> server behaviour.

Yes, currently this is the only mechanism we have. My preference would
be to be a bit lax on signaling while we get other mechanisms in place.
For this case I think it would make sense to wait until we have
capability exposure in the API to present a strong signal that a user
could revert from a resize with an instance in an ERROR state. In the
meantime we could use documentation and piggyback on the next
microversion to say "If you're using an API with at least version 2.x
then this capability is available." 
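A client-side check combining those two signals might look like the sketch below. Everything here is invented for illustration: the capability string, the function name, and the (2, 38) stand-in for "the next microversion" are not real Nova identifiers.

```python
# Hypothetical sketch: prefer explicit capability discovery when the
# cloud exposes it, otherwise fall back to the documented minimum
# microversion. All names and the (2, 38) version are made up.

DOCUMENTED_MIN_VERSION = (2, 38)  # "the next microversion" in the text


def can_revert_error_resize(server_version, capabilities=None):
    """Decide whether revert-resize from ERROR state is available."""
    if capabilities is not None:
        # Strong signal: the cloud tells us directly what it supports.
        return "revert-resize-from-error" in capabilities
    # Weak signal: documentation piggybacking on a microversion bump.
    return tuple(server_version) >= DOCUMENTED_MIN_VERSION


strong = can_revert_error_resize((2, 30),
                                 capabilities={"revert-resize-from-error"})
weak = can_revert_error_resize((2, 40))
older = can_revert_error_resize((2, 30))
```

The point of the sketch is that once capabilities exist, the microversion branch becomes dead weight, which is the cruft being argued against.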

This is driven by my desire to avoid cruft because when we do have
capabilities exposed in the API then we're left with this odd
microversion that signals the same thing as capabilities. I think
microversions make a lot of sense for signaling different
representations that are available in the API, e.g. an instance looks
one way at version 2.3 and looks different at 2.5. The use of a
microversion to signal something that could apply across every version
of the API makes the meaning of a microversion imprecise. I would love
to be able to explain to someone that microversions mean x. But the
reality is they have no precise definition. All they signal is
"something changed, and it may or may not be visible to you and I can't
tell you what it is, please go check the docs." /rant

> 
> Chris
> 



Re: [openstack-dev] [watcher] Stepping down from core

2016-08-03 Thread Joe Cropper
Thanks Taylor… thanks so much for your contributions!

Best,
Joe

> On Aug 3, 2016, at 11:32 AM, Taylor D Peoples  wrote:
> 
> Hi all,
> 
> I'm stepping down from Watcher core and will be leaving the OpenStack 
> community to pursue other opportunities.
> 
> I wasn't able to contribute to Watcher anywhere near the amount that I was 
> hoping, but I have enjoyed the little that I was able to contribute.
> 
> Good luck to the Watcher team going forward.
> 
> 
> Best,
> Taylor Peoples


[openstack-dev] [new][keystone] keystonemiddleware 4.8.0 release (newton)

2016-08-03 Thread no-reply
We are jubilant to announce the release of:

keystonemiddleware 4.8.0: Middleware for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

https://pypi.python.org/pypi/keystonemiddleware

Please report issues through launchpad:

http://bugs.launchpad.net/keystonemiddleware

For more details, please see below.

Changes in keystonemiddleware 4.7.0..4.8.0
------------------------------------------

67dacad Updated from global requirements
9785700 Updated from global requirements
619b07d Fix description of option `cache`
41083a5 Remove oslo-incubator


Diffstat (except docs and test files)
-------------------------------------

keystonemiddleware/auth_token/_cache.py| 57 -
keystonemiddleware/auth_token/_opts.py |  6 +-
keystonemiddleware/openstack/__init__.py   |  0
keystonemiddleware/openstack/common/__init__.py|  0
keystonemiddleware/openstack/common/memorycache.py | 97 --
.../unit/auth_token/test_auth_token_middleware.py  |  6 +-
openstack-common.conf  |  7 --
requirements.txt   |  4 +-
tox.ini|  2 +-
9 files changed, 66 insertions(+), 113 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 8d21a86..73679d9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-keystoneauth1>=2.7.0 # Apache-2.0
+keystoneauth1>=2.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0





Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-03 Thread Devdatta Kulkarni
Hi,

As current PTL of one of the projects that has the team:single-vendor tag,
I have following thoughts/questions on this issue.

- Is the need for periodically deciding membership in the big tent primarily
stemming from the question of managing resources (for future design summits
and cross-project work)? If so, have we thought about alternative solutions,
such as, say, using the team:diverse-affiliation tag for making those
decisions? For instance, we could say that a project gets space at the
design summit only if it has the team:diverse-affiliation tag. That way,
projects are incentivized to pursue this tag/goal if they want to
participate in the design summit. Also, adding/removing a tag might be
simpler than applying to the big tent again (say, after a project has been
removed, then gains diverse affiliation and wants to participate in the
design summit, would it have to go through the big tent application again?).

- Another issue with using the number of vendors as a metric to decide
membership in the big tent is that it rules out any project that is started
independently in the open (not by any specific vendor, but by a team of
independent contributors) and that follows the four opens.

- About diversity: as Flavio has suggested on this thread, participating in
Outreachy is a good option. We have done it in Solum. However, that does not
necessarily help with obtaining the diverse-affiliation tag as currently
defined, since Outreachy participants are students and not necessarily
affiliated with any vendor.

Regards,
Devdatta



From: Amrith Kumar 
Sent: Wednesday, August 3, 2016 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

To Steven's specific question:

> If PTLs can weigh in on this thread and commit to participation in such a
> cross-project subgroup, I'd be happy to lead it.

I would like to participate and help get this kind of a group going.

-amrith


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Tuesday, August 02, 2016 11:45 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Responses inline:
>
> On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:
>
> >On 02/08/2016 15:42, Flavio Percoco wrote:
> >> On 01/08/16 10:19 -0400, Sean Dague wrote:
> >>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
>  Thierry, Ben, Doug,
> 
>  How can we distinguish between. "Project is doing the right thing,
> but
>  others are not joining" vs "Project is actively trying to keep people
>  out"?
> >>>
> >>> I think at some level, it's not really that different. If we treat
> them
> >>> as different, everyone will always believe they did all the right
> >>> things, but got no results. 3 cycles should be plenty of time to drop
> >>> single entity contributions below 90%. That means prioritizing bugs /
> >>> patches from outside groups (to drop below 90% on code commits),
> >>> mentoring every outside member that provides feedback (to drop below
> >>>90%
> >>> on reviews), shifting development resources towards mentoring / docs /
> >>> on ramp exercises for others in the community (to drop below 90% on
> >>>core
> >>> team).
> >>>
> >>> Digging out of a single vendor status is hard, and requires making
> that
> >>> your top priority. If teams aren't interested in putting that ahead of
> >>> development work, that's fine, but that doesn't make it a sustainable
> >>> OpenStack project.
> >>
> >>
> >> ++ to the above! I don't think they are that different either and we
> >>might not
> >> need to differentiate them after all.
> >>
> >> Flavio
> >>
> >
> >I do have one question - how are teams getting out of
> >"team:single-vendor" and towards "team:diverse-affiliation" ?
> >
> >We have tried to get more people involved with Designate using the ways
> >we know how - doing integrations with other projects, pushing designate
> >at conferences, helping DNS Server vendors to add drivers, adding
> >drivers for DNS Servers and service providers ourselves, adding
> >features - the lot.
> >
> >We have a lot of user interest (41% of users were interested in using
> >us), and are quite widely deployed for a non tc-approved-release
> >project (17% - 5% in production). We are actually the most deployed
> >non tc-approved-release project.
> >
> >We still have 81% of the reviews done by 2 companies, and 83% by 3
> >companies.
>
> By the objective criteria of team:single-vendor, Designate isn't a single
> vendor project.  By the objective criteria of team:diverse-affiliation,
> you're not a diversely affiliated project either.  This is why I had
> suggested we need a third tag which accurately 

Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Sturdevant, Mark
+1   Tom will be a great addition to the core team.



From: Rodrigo Barbieri [mailto:rodrigo.barbieri2...@gmail.com]
Sent: Wednesday, August 03, 2016 11:49 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team


+1

Tom contributes a lot to the Manila project.

--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Aug 3, 2016 15:42, "Ben Swartzlander" 
> wrote:
Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila) for 
more than 2 years and has spent a great deal of time on Manila reviews in the 
last release. Tom brings another package/distro point of view to the community 
as well as former storage vendor experience.

-Ben Swartzlander
Manila PTL



Re: [openstack-dev] Project mascots update

2016-08-03 Thread Steven Dake (stdake)


On 8/3/16, 10:37 AM, "Steven Hardy"  wrote:

>On Wed, Aug 03, 2016 at 10:19:10AM -0700, Heidi Joy Tretheway wrote:
>>Thanks to all of you for contributing unique and inspiring ideas to
>>choose
>>a team mascot. You can view a list of projects that have chosen their
>>mascots here: http://www.openstack.org/project-mascots, and more
>>will be
>>added soon as the last handful of teams finalize theirs.
>>Illustrators are
>>working now to produce a family of great logos for you that will
>>come out
>>closer to the Barcelona Summit. Feel free to drop me a note with
>>questions.
>
>I have one question regarding the review process for the logos once they
>are drafted.
>
>In the cases where projects have their existing community generated logos
>I can imagine there will be a preference to stick with something that's
>somehow derived from their existing logo, and in cases where a new logo is
>produced I'm sure community enthusiasm and acceptance will be greater if
>team members have played a part in the logo design process or at least
>provided some feedback prior to the designs being declared final.
>
>Thanks!
>
>Steve Hardy

Heidi,

I received a related question from a Kolla core reviewer: "Do the project
teams have input into the mascot design?"

If this has already been discussed on the list, I may have missed it.

Thanks
-steve



Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Ravi, Goutham
(Not a core member, so plus 0.02)

I’ve learned a ton of things from Tom and continue to do so!

From: Rodrigo Barbieri 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, August 3, 2016 at 2:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team


+1

Tom contributes a lot to the Manila project.

--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Aug 3, 2016 15:42, "Ben Swartzlander" 
> wrote:
Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila) for 
more than 2 years and has spent a great deal of time on Manila reviews in the 
last release. Tom brings another package/distro point of view to the community 
as well as former storage vendor experience.

-Ben Swartzlander
Manila PTL



Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Rodrigo Barbieri
+1

Tom contributes a lot to the Manila project.

--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Aug 3, 2016 15:42, "Ben Swartzlander"  wrote:

> Tom (tbarron on IRC) has been working on OpenStack (both cinder and
> manila) for more than 2 years and has spent a great deal of time on Manila
> reviews in the last release. Tom brings another package/distro point of
> view to the community as well as former storage vendor experience.
>
> -Ben Swartzlander
> Manila PTL
>


[openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-03 Thread Ben Swartzlander
Tom (tbarron on IRC) has been working on OpenStack (both cinder and 
manila) for more than 2 years and has spent a great deal of time on 
Manila reviews in the last release. Tom brings another package/distro 
point of view to the community as well as former storage vendor experience.


-Ben Swartzlander
Manila PTL



Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Chris Friesen

On 08/03/2016 10:53 AM, Andrew Laski wrote:

I think the discussion about whether or not this needs a microversion is missing
the bigger question of whether or not this should be in the API to begin with.


I don't think it's actually in the API.


If it's safe to rollback from this error state why not just do that
automatically in Nova?


I think we do for other cases; my understanding is that the issue is whether we 
would need a new microversion for the API because we're changing the 
user-visible behaviour of the migrate/resize operation when something goes 
wrong (i.e. rolling back to an active state on the source rather than staying 
in an ERROR state).
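(Editorial aside: the behaviour change under discussion can be sketched as follows — hypothetical names, not Nova's actual code paths:)

```python
class FinishResizeError(Exception):
    """Stand-in for a failure during the destination-node finish_resize step."""

def resize(instance, dest_fails=False):
    """Toy resize flow: on destination failure, roll the instance back to
    ACTIVE on the source rather than leaving it in ERROR."""
    try:
        if dest_fails:
            raise FinishResizeError("finish_resize failed on destination")
        instance["host"] = "destination"
        instance["state"] = "ACTIVE"
    except FinishResizeError:
        # proposed behaviour: revert to the source host in ACTIVE state
        # (previously the instance stayed in ERROR on failure here)
        instance["state"] = "ACTIVE"
        instance["host"] = "source"
    return instance
```

The question in the thread is whether making the except-branch revert automatically is a client-visible contract change needing a microversion, or just an internal policy change.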



To me it seems like adding a microversion because a policy rule was
changed.


I believe this is exactly what it is.

> I know we should have some sort of signal here for users, but I think

we need to look at different ways to signal this type of change.


How?  The microversion is (currently) the only way the client has of determining 
server behaviour.


Chris



Re: [openstack-dev] [requirements] race in keystone unit tests

2016-08-03 Thread Sean Dague
On 08/03/2016 12:26 PM, Lance Bragstad wrote:
> Sending a follow-up because I think we ended up finding something
> relevant to this discussion.
> 
> As keystone moves towards making fernet the default, one of our work
> items was to mock the system clock in tests. This allows us to advance
> the clock by one second where we need to avoid sub-second race
> conditions. To do this we used freezegun [0]. We recently landed a bunch
> of fixes to do this.
> 
> It turns out that there is a possible race between when freezegun
> patches its modules and when the test runs. This turned up in a patch I
> was working on locally and I noticed certain clock operations weren't
> using the fake time object from freezegun. As a work-around, we can
> leverage the set_time_override() method from oslo_utils.timeutils to
> make sure we are using the fake time from within the frozen time
> context. In my testing locally this worked.
> 
> If keystone requires a hybrid approach to patching
> (oslo_utils.timeutils.set_time_override() + freezegun), we should build
> it into a well-documented hybrid context manager so that it's more
> apparent why we need it.
> 
> Sean, I can start working on this to see if it starts mitigating the
> races you're seeing.
> 
> [0] https://pypi.python.org/pypi/freezegun

Lance, thanks for digging into this! I think using the oslo
set_time_override sounds like the best approach; that's what I remember
other places using to test time boundaries like this.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] App Catalog IRC meeting Thursday August 4th

2016-08-03 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for August 4th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Tomorrow we will be talking more about our review and merge policies,
and the next steps we will be taking in implementing GLARE as a
back-end for the Community App Catalog.

Hope to see you there tomorrow!



Re: [openstack-dev] [requirements] race in keystone unit tests

2016-08-03 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2016-08-03 11:26:56 -0500:
> Sending a follow-up because I think we ended up finding something relevant
> to this discussion.
> 
> As keystone moves towards making fernet the default, one of our work items
> was to mock the system clock in tests. This allows us to advance the clock
> by one second where we need to avoid sub-second race conditions. To do this
> we used freezegun [0]. We recently landed a bunch of fixes to do this.
> 
> It turns out that there is a possible race between when freezegun patches
> its modules and when the test runs. This turned up in a patch I was
> working on locally and I noticed certain clock operations weren't using the
> fake time object from freezegun. As a work-around, we can leverage the
> set_time_override() method from oslo_utils.timeutils to make sure we are
> using the fake time from within the frozen time context. In my testing
> locally this worked.

Supporting mocking time operations in tests is the primary reason some
of those functions exist at all.

Doug

> 
> If keystone requires a hybrid approach to patching
> (oslo_utils.timeutils.set_time_override() + freezegun), we should build it
> into a well-documented hybrid context manager so that it's more apparent
> why we need it.
> 
> Sean, I can start working on this to see if it starts mitigating the races
> you're seeing.
> 
> [0] https://pypi.python.org/pypi/freezegun
> 
> On Tue, Aug 2, 2016 at 9:21 AM, Lance Bragstad  wrote:
> 
> > Hi Sean,
> >
> > Thanks for the information. This obviously looks Fernet-related and I
> > would be happy to spend some cycles on it. We recently landed a bunch of
> > refactors in keystone to improve Fernet test coverage. This could be
> > related to those refactors. Just double checking - but you haven't opened a
> > bug in launchpad for this yet have you?
> >
> > Thanks for the heads up!
> >
> > On Tue, Aug 2, 2016 at 5:32 AM, Sean Dague  wrote:
> >
> >> One of my concerns about stacking up project unit tests in the
> >> requirements jobs, is the unit tests aren't as free of races as you
> >> would imagine. Because they only previously impacted the one project
> >> team, those teams are often just fast to recheck instead of get to the
> >> bottom of it. Cross testing with them in a voting way changes their
> >> impact.
> >>
> >> The keystone unit tests have an existing race condition in them, which
> >> recently failed an unrelated requirements bump -
> >>
> >> http://logs.openstack.org/50/348250/6/check/gate-cross-keystone-python27-db-ubuntu-xenial/962327d/console.html#_2016-08-02_03_52_14_537923
> >>
> >> I'm not fully sure where to go from here. But wanted to make sure that
> >> data is out there. Any keystone folks who can dive into and sort it out
> >> would be highly appreciated.
> >>
> >> -Sean
> >>
> >> --
> >> Sean Dague
> >> http://dague.net
> >>


Re: [openstack-dev] Project mascots update

2016-08-03 Thread Steven Hardy
On Wed, Aug 03, 2016 at 10:19:10AM -0700, Heidi Joy Tretheway wrote:
>Thanks to all of you for contributing unique and inspiring ideas to choose
>a team mascot. You can view a list of projects that have chosen their
>mascots here: http://www.openstack.org/project-mascots, and more will be
>added soon as the last handful of teams finalize theirs. Illustrators are
>working now to produce a family of great logos for you that will come out
>closer to the Barcelona Summit. Feel free to drop me a note with
>questions.

I have one question regarding the review process for the logos once they
are drafted.

In the cases where projects have their existing community generated logos
I can imagine there will be a preference to stick with something that's
somehow derived from their existing logo, and in cases where a new logo is
produced I'm sure community enthusiasm and acceptance will be greater if
team members have played a part in the logo design process or at least
provided some feedback prior to the designs being declared final.

Thanks!

Steve Hardy



[openstack-dev] [Fuel] New version of fuel-devops (2.9.23)

2016-08-03 Thread Alexey Stepanov
Hi all!
In the nearest future we are going to update the 'fuel-devops' framework on
our CI to the version 2.9.23.
It's bugfix-only version. As was written before, no new development is
planned for this thread.

Changes since 2.9.22:

- fix the *AddressPool._safe_create_network* method [1]

List of all changes is available on github [2].

[1] https://review.openstack.org/#/c/350400/
[2] https://github.com/openstack/fuel-devops/compare/2.9.22...release/2.9

-- 
Best regards,
Alexey Stepanov


[openstack-dev] Project mascots update

2016-08-03 Thread Heidi Joy Tretheway
Thanks to all of you for contributing unique and inspiring ideas to choose a 
team mascot. You can view a list of projects that have chosen their mascots 
here: http://www.openstack.org/project-mascots, and more will be added soon, 
as the last handful of teams finalize theirs. Illustrators are working now to 
produce a family of great logos for you that will come out closer to the 
Barcelona Summit. Feel free to drop me a note with questions.

Heidi Joy

__
Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769  |  skype: heidi.tretheway







Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Ken'ichi Ohmichi
2016-08-03 9:53 GMT-07:00 Andrew Laski :

> I think the discussion about whether or not this needs a microversion is
> missing the bigger question of whether or not this should be in the API to
> begin with. If it's safe to rollback from this error state why not just do
> that automatically in Nova? If it's proposed for the API because it's not
> considered safe I don't agree it should be in the API. This is not an API
> that's restricted to admins by default.
>
> However if this is going to be exposed in the API I lean towards this not
> needing a microversion. It's a new policy in the usage of the API, not a
> change to the API. To me it seems like adding a microversion because a
> policy rule was changed. I know we should have some sort of signal here for
> users, but I think we need to look at different ways to signal this type of
> change.
>

Yeah, I feel a new microversion in this case seems like a little overkill.
This is a failure case, and the rollback could apply across all versions.
We have implemented rollback behavior for years, and as far as I know we
haven't had any negative feedback about it from users.

Thanks
Ken Omichi

---


>
> -Andrew
>
>
> On Tue, Aug 2, 2016, at 02:11 AM, han.ro...@zte.com.cn wrote:
>
> patchset url: https://review.openstack.org/#/c/334747/
>
>
>
>
> Allow "revert_resize" to recover an instance in the error state after a
> resize/migrate.
>
> When resizing/migrating an instance, if an error occurs on the source
> compute node, the instance state can currently roll back to active. But if
> an error occurs in the "finish_resize" function on the destination compute
> node, the instance state would not roll back to active.
>
> This patch rolls back the instance state from error to active when a resize
> or migrate action fails on the destination compute node.
>
>
>
>
> Best,
> Rong Han
>


Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Andrew Laski
I think the discussion about whether or not this needs a microversion is
missing the bigger question of whether or not this should be in the API
to begin with. If it's safe to rollback from this error state why not
just do that automatically in Nova? If it's proposed for the API because
it's not considered safe I don't agree it should be in the API. This is
not an API that's restricted to admins by default.

However if this is going to be exposed in the API I lean towards this
not needing a microversion. It's a new policy in the usage of the API,
not a change to the API. To me it seems like adding a microversion
because a policy rule was changed. I know we should have some sort of
signal here for users, but I think we need to look at different ways to
signal this type of change.

-Andrew


On Tue, Aug 2, 2016, at 02:11 AM, han.ro...@zte.com.cn wrote:
> patchset url: https://review.openstack.org/#/c/334747/
>
>
>
>
> Allow "revert_resize" to recover an instance in the error state after a
> resize/migrate.
>
> When resizing/migrating an instance, if an error occurs on the source
> compute node, the instance state can currently roll back to active. But if
> an error occurs in the "finish_resize" function on the destination compute
> node, the instance state would not roll back to active.
>
> This patch rolls back the instance state from error to active when a resize
> or migrate action fails on the destination compute node.
>
>
>
>
> Best,
> Rong Han


Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-03 Thread Chris Friesen

On 08/03/2016 09:13 AM, Tuan Luong wrote:

Hi Jay, thank you for the reply. I asked this question to make sure that
everything is still going well with Nova's disk usage reporting. We hit a
problem with what I believe is a wrong report. The situation is below:

- We launch an instance whose flavor has a 600GB ephemeral disk, without
specifying an extra ephemeral disk. Before launching, the total free_disk
is 1025GB and the total disk_available_least is 1016GB.
- After a successful launch, disk_available_least is -184GB, so we cannot
launch the next instance.

It seems that disk_available_least decreases by 2*600GB when I think it
should decrease by 600GB. The value of free_disk after launching is
calculated correctly, and I would not expect disk_over_commit in this case
to be 600GB. We also checked the instance, and it indeed has only 600GB
mounted as vdb.


Assuming you're using qcow, the maximum disk space consumed by a qcow disk is 
"size of backing file" + "size of differences from backing file".  Thus, if you 
have a single instance using a given backing file, the worst-case consumption is 
actually twice the size of your disk.


I think the backing file for an ephemeral disk is actually a sparse file, but on 
a cursory examination it will appear to be consuming 600GB.
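(Editorial aside: under that worst-case accounting, the reported numbers line up. A quick sketch of the arithmetic, assuming the resource tracker charges backing file plus a fully diverged overlay:)

```python
def qcow_worst_case_gb(backing_gb, overlay_gb):
    # worst case: the backing file fully allocated, plus an overlay that
    # has diverged completely from it
    return backing_gb + overlay_gb

def disk_available_least_after(available_gb, disk_gb):
    # a single 600 GB ephemeral disk is charged twice: once for the
    # (possibly sparse) backing file and once for the instance's overlay
    return available_gb - qcow_worst_case_gb(disk_gb, disk_gb)

# 1016 GB available before boot, one 600 GB ephemeral disk:
# 1016 - (600 + 600) = -184, matching the reported value
```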


Chris



[openstack-dev] [watcher] Stepping down from core

2016-08-03 Thread Taylor D Peoples

Hi all,

I'm stepping down from Watcher core and will be leaving the OpenStack
community to pursue other opportunities.

I wasn't able to contribute to Watcher anywhere near the amount that I was
hoping, but I have enjoyed the little that I was able to contribute.

Good luck to the Watcher team going forward.


Best,
Taylor Peoples


Re: [openstack-dev] [requirements] race in keystone unit tests

2016-08-03 Thread Lance Bragstad
Sending a follow-up because I think we ended up finding something relevant
to this discussion.

As keystone moves towards making fernet the default, one of our work items
was to mock the system clock in tests. This allows us to advance the clock
by one second where we need to avoid sub-second race conditions. To do this
we used freezegun [0]. We recently landed a bunch of fixes to do this.

It turns out that there is a possible race between when freezegun patches
its modules and when the test runs. This turned up in a patch I was
working on locally and I noticed certain clock operations weren't using the
fake time object from freezegun. As a work-around, we can leverage the
set_time_override() method from oslo_utils.timeutils to make sure we are
using the fake time from within the frozen time context. In my testing
locally this worked.

If keystone requires a hybrid approach to patching
(oslo_utils.timeutils.set_time_override() + freezegun), we should build it
into a well-documented hybrid context manager so that it's more apparent
why we need it.
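As a stdlib-only illustration of the pattern such a hybrid context manager would wrap (freeze the clock, then advance it by whole seconds to step past sub-second races), here is a minimal sketch. The real fix would layer oslo_utils.timeutils.set_time_override() inside freezegun's freeze_time(); the names below (frozen_clock, advance) are hypothetical:

```python
import contextlib
import time
from unittest import mock


@contextlib.contextmanager
def frozen_clock(start):
    """Pin time.time() to `start` and yield a handle that can advance it."""
    state = {"now": float(start)}

    class Clock(object):
        def advance(self, seconds):
            # Advance by whole seconds so tests can step past
            # sub-second race windows deterministically.
            state["now"] += seconds

    with mock.patch("time.time", side_effect=lambda: state["now"]):
        yield Clock()


with frozen_clock(100.0) as clk:
    first = time.time()   # frozen at 100.0
    clk.advance(1)
    second = time.time()  # now 101.0
```

The key property is that every code path reads the same fake clock, which is exactly what breaks when freezegun's patching races with the test setup.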

Sean, I can start working on this to see if it starts mitigating the races
you're seeing.

[0] https://pypi.python.org/pypi/freezegun

On Tue, Aug 2, 2016 at 9:21 AM, Lance Bragstad  wrote:

> Hi Sean,
>
> Thanks for the information. This obviously looks Fernet-related and I
> would be happy to spend some cycles on it. We recently landed a bunch of
> refactors in keystone to improve Fernet test coverage. This could be
> related to those refactors. Just double checking - but you haven't opened a
> bug in launchpad for this yet have you?
>
> Thanks for the heads up!
>
> On Tue, Aug 2, 2016 at 5:32 AM, Sean Dague  wrote:
>
>> One of my concerns about stacking up project unit tests in the
>> requirements jobs is that the unit tests aren't as free of races as you
>> would imagine. Because they previously only impacted the one project
>> team, those teams are often quick to recheck instead of getting to the
>> bottom of it. Cross testing with them in a voting way changes their
>> impact.
>>
>> The keystone unit tests have an existing race condition in them, which
>> recently failed an unrelated requirements bump -
>>
>> http://logs.openstack.org/50/348250/6/check/gate-cross-keystone-python27-db-ubuntu-xenial/962327d/console.html#_2016-08-02_03_52_14_537923
>>
>> I'm not fully sure where to go from here. But wanted to make sure that
>> data is out there. Any keystone folks who can dive into and sort it out
>> would be highly appreciated.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>


[openstack-dev] [tripleo] Progress on undercloud-upgrade CI job

2016-08-03 Thread Emilien Macchi
Hi,

For those who missed that, I am currently working on getting an
upstream CI job that will test an undercloud upgrade (from Mitaka to
Newton).

I am trying to get this procedure done the right way so any user
outside CI will be able to out-of-the-box upgrade TripleO undercloud.

Right now, here is the work in progress:

- instack-undercloud: implement pre_upgrade hook when deploying undercloud
This patch will allow us to run custom things before (or even after) an upgrade.
Before, we tried to do it in tripleoclient, but it was wrong. Actions
should be executed by undercloud code itself.
https://review.openstack.org/#/c/350657/
For now, it's pretty basic, we just stop the service before the
upgrade, but this is a first iteration and we will work on making it
better.

- tripleoclient: update CLI when upgrading undercloud, to run "yum
update" of instack dependencies.
This patch assumes that we updated the repository previously; all
TripleO dependencies will then be updated by tripleoclient
before redeploying the undercloud.
https://review.openstack.org/#/c/331804/
Any suggestion for moving this code is welcome, but this is AFAIK the best
place for now.

- tripleo-ci: test the upgrade itself
https://review.openstack.org/#/c/346995/
It is testing the upgrade with this process:
* deploy Mitaka undercloud
* Update RDO repositories to deploy Newton
* run tripleoclient "openstack undercloud upgrade" (that will
re-install tripleo dependencies, see 331804)
  "openstack undercloud upgrade" will run instack-install-undercloud
(with pre_upgrade to True, by default, see 350657), so service will be
stopped and undercloud will be re-deployed.
* run undercloud sanity check to validate all works.

I tested this workflow and I managed to successfully upgrade an
undercloud from Mitaka to Newton.
Now I need your feedback and reviews on the patches we are working on.

Finally, this is the spec Ben has been working on:
https://review.openstack.org/#/c/349737/
And all of the process previously described fits with what we want in
the blueprint.

Thanks for your help,
-- 
Emilien Macchi



Re: [openstack-dev] pydotplus (taskflow) vs pydot-ng (fuel)

2016-08-03 Thread Joshua Harlow

If what's most alive is pydot-ng, that's fine with me.

One of the reasons taskflow went with pydotplus is that it's used in the 
larger networkx graph library (which taskflow happens to use for its 
internal workflow graph):


https://github.com/networkx/networkx/blob/master/networkx/drawing/nx_pydot.py

Which, if you look at the taskflow code, is just what's being called into:

https://github.com/openstack/taskflow/blob/master/taskflow/types/graph.py#L59

But if pydot-ng is more 'healthy' I'm fine with adjusting a few things 
and using that (and submitting a PR to networkx to get them to use 
pydot-ng).


If fuel used networkx @ https://github.com/networkx/networkx that'd be 
even better (in general, if anyone is using graphs in their app, networkx 
seems to be the best library for that kind of stuff that currently 
exists) ;)
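For context on why the pydot variants are so interchangeable: all of them (pydot, pydotplus, pydot-ng) ultimately emit plain DOT text for Graphviz, which is what networkx's nx_pydot export boils down to. A minimal stdlib sketch of that kind of export (the function name and graph are illustrative, not any library's actual API):

```python
def to_dot(name, edges):
    """Render a directed graph as Graphviz DOT source."""
    lines = ["digraph %s {" % name]
    for src, dst in edges:
        # Each edge becomes one quoted "src" -> "dst"; statement.
        lines.append('  "%s" -> "%s";' % (src, dst))
    lines.append("}")
    return "\n".join(lines)


dot_src = to_dot("workflow", [("fetch", "build"), ("build", "publish")])
```

Because the output format is this simple, swapping pydotplus for pydot-ng (or vice versa) is mostly an import rename plus minor API adjustments.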


-Josh

Igor Kalnitsky wrote:

Hi Thomas,

If I'm not mistaken, pydot-ng [1] was made by an ex-fueler in order
to overcome some limitations of pydot, and does not change much. If
pydotplus is an actively maintained project and does the same thing, I vote
for using it in Fuel.

Thanks,
Igor


[1]: https://pypi.io/project/pydot-ng/

On Tue, Aug 2, 2016 at 4:44 PM, Thomas Goirand  wrote:

Hi,

Fuel uses pydot-ng, and (at least) taskflow uses pydotplus. I believe
both aren't using pydot because that's dead upstream.

Could we have a bit of consistency here, and have one or the other
component to switch, so we could get rid of one more package that does
the same thing in downstream distros?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [api] APIs for exposing resource capabilities

2016-08-03 Thread Everett Toews

On Aug 3, 2016, at 12:59 AM, Ramakrishna, Deepti wrote:

Hi,

I would like to bring your attention to my spec [1] (already approved) on 
capability APIs and would like to get feedback from API WG.

To summarize, I propose defining a capability API for every resource in a REST 
API where it makes sense and is needed. In the context of Cinder, we would have 
a capability API at the root resource level (GET 
/v3.x/{tenant_id}/capabilities) that would return, e.g., ["volume-backup", 
"other-capability"]. Similarly, we could have a capability API on the volume 
types resource (GET /v3.x/{tenant_id}/types/{volume_type_id}/capabilities) that 
would return all the features supported by a volume type and so on.
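To make the consumption side concrete, here is a small sketch of hypothetical payloads and how a client such as Horizon might gate a UI widget on them. The envelope shape ({"capabilities": [...]}) is an assumption for illustration; the spec excerpt only gives example capability names:

```python
# Hypothetical responses from the proposed endpoints (shapes assumed).
root_capabilities = {"capabilities": ["volume-backup", "other-capability"]}
type_capabilities = {"capabilities": ["thin-provisioning"]}  # per volume type


def supports(payload, feature):
    """How a client could decide whether to enable a widget."""
    return feature in payload.get("capabilities", [])


backup_widget_enabled = supports(root_capabilities, "volume-backup")
```

The same check works unchanged against any resource's capability endpoint, which is the cross-project appeal of the pattern.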

I believe that this API pattern solves the problem of exposing capabilities and 
could be used for enabling/disabling UI widgets on Horizon and other clients. 
This pattern cleanly translates to all OpenStack projects which all face the 
general problem.

Can you please look at the spec (and the implementation for Cinder [2]) and let 
me know if you have any feedback? I would be most interested in knowing your 
thoughts about cross-project suitability of this solution.

Thanks,
Deepti

[1] https://review.openstack.org/#/c/306930/
[2] https://review.openstack.org/#/c/350310/

I added this topic to the API WG meeting tomorrow.

https://wiki.openstack.org/wiki/Meetings/API-WG

Everett



Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-03 Thread Tuan Luong
Hi Jay,
Thank you for the reply. Actually, I did ask this question to make sure that 
everything is still going well with disk usage report of Nova. We had the 
problem related to wrong report (IMHO it is wrong). The situation is below:

- We launch an instance with 600GB of ephemeral disk from a flavor, without 
specifying an extra ephemeral disk. The total free_disk before launching is 
1025GB, and the total disk_available_least is 1016GB.
- After a successful launch, the value of disk_available_least 
is -184GB, therefore we could not launch the next instance.

It seems like the value of disk_available_least decreases by 2*600GB when I think 
it should decrease by only 600GB. The value of free_disk after launching is calculated correctly. 
I would not believe that the disk_over_commit in this case is 600GB. We 
also checked the instance, and it indeed has only 600GB mounted as vdb.
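The observed value lines up exactly with the ephemeral disk being counted twice. A quick arithmetic check, using the figures reported above (in GB):

```python
# Figures from the report above.
disk_available_least_before = 1016
ephemeral_gb = 600
observed_after = -184

# If the ephemeral disk were counted once, we'd expect 416 GB remaining.
single_count = disk_available_least_before - ephemeral_gb

# Counting it twice (e.g. overlay plus backing file both charged)
# reproduces the observed value exactly.
double_count = disk_available_least_before - 2 * ephemeral_gb
```

This doesn't prove where the double counting happens, but it does rule out coincidence in the reported numbers.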

Best,

Tuan
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: August 03, 2016 14:53
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Nova compute reports the disk usage

On 08/03/2016 04:35 AM, Tuan Luong wrote:
> Hi,
>
> When we try to add ephemeral disk in booting instance, as we know that 
> it will create disk.local and the backing file in _/base. Both of them 
> are referred to the ephemeral disk. When nova reports the disk usage 
> of compute, does it count both of them for the used disk? What I see 
> in the resource/_tracker that it calculates the instance['ephemeral_gb'].

The resource tracker reports disk usage by adding up the root_gb + ephemeral_gb 
values of the *flavor* attribute of the instances assigned to that compute node.

Does that answer your question?

Best,
-jay



Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-03 Thread Amrith Kumar
To Steven's specific question:

> If PTLs can weigh in on this thread and commit to participation in such a
> cross-project subgroup, I'd be happy to lead it.

I would like to participate and help get this kind of a group going.

-amrith


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Tuesday, August 02, 2016 11:45 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
> 
> Responses inline:
> 
> On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:
> 
> >On 02/08/2016 15:42, Flavio Percoco wrote:
> >> On 01/08/16 10:19 -0400, Sean Dague wrote:
> >>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
>  Thierry, Ben, Doug,
> 
>  How can we distinguish between. "Project is doing the right thing,
> but
>  others are not joining" vs "Project is actively trying to keep people
>  out"?
> >>>
> >>> I think at some level, it's not really that different. If we treat
> them
> >>> as different, everyone will always believe they did all the right
> >>> things, but got no results. 3 cycles should be plenty of time to drop
> >>> single entity contributions below 90%. That means prioritizing bugs /
> >>> patches from outside groups (to drop below 90% on code commits),
> >>> mentoring every outside member that provides feedback (to drop below
> >>>90%
> >>> on reviews), shifting development resources towards mentoring / docs /
> >>> on ramp exercises for others in the community (to drop below 90% on
> >>>core
> >>> team).
> >>>
> >>> Digging out of a single vendor status is hard, and requires making
> that
> >>> your top priority. If teams aren't interested in putting that ahead of
> >>> development work, that's fine, but that doesn't make it a sustainable
> >>> OpenStack project.
> >>
> >>
> >> ++ to the above! I don't think they are that different either and we
> >>might not
> >> need to differentiate them after all.
> >>
> >> Flavio
> >>
> >
> >I do have one question - how are teams getting out of
> >"team:single-vendor" and towards "team:diverse-affiliation" ?
> >
> >We have tried to get more people involved with Designate using the ways
> >we know how - doing integrations with other projects, pushing designate
> >at conferences, helping DNS Server vendors to add drivers, adding
> >drivers for DNS Servers and service providers ourselves, adding
> >features - the lot.
> >
> >We have a lot of user interest (41% of users were interested in using
> >us), and are quite widely deployed for a non tc-approved-release
> >project (17% - 5% in production). We are actually the most deployed
> >non tc-approved-release project.
> >
> >We still have 81% of the reviews done by 2 companies, and 83% by 3
> >companies.
> 
> By the objective criteria of team:single-vendor, Designate isn't a single
> vendor project.  By the objective criteria of team:diverse-affiliation,
> you're not a diversely affiliated project either.  This is why I had
> suggested we need a third tag which accurately represents where Designate
> is in its community-building journey.
> >
> >I know our project is not "cool", and DNS is probably one of the most
> >boring topics, but I honestly believe that it has a place in the
> >majority of OpenStack clouds - both public and private. We are a small
> >team of people dedicated to making Designate the best we can, but are
> >still one company deciding to drop OpenStack / DNS development from
> >joining the single-vendor party.
> 
> Agree Designate is important to OpenStack.  But IMO it is not a single
> vendor project as defined by the criteria given the objective statistics
> you mentioned above.
> 
> >
> >We are definitely interested in putting community development ahead of
> >development work - but what that actual work is seems to difficult to
> >nail down. I do feel sometimes that I am flailing in the dark trying to
> >improve this.
> 
> Fantastic, it's a high-priority goal.  Sad to hear you're struggling, but
> struggling is part of the activity.
> >
> >If projects could share how that got out of single-vendor or into
> >diverse-affiliation this could really help teams progress in the
> >community, and avoid being removed.
> 
> You bring up a fantastic point here - and that is that teams need to share
> techniques for becoming multi-vendor and some day diversely affiliated.  I
> am super busy atm, or I would volunteer to lead a cross-project effort
> with PTLs to coordinate community building from our shared knowledge pool
> of expert Open Source contributors in the wider OpenStack community.
> 
> That said, I am passing the baton for Kolla PTL at the conclusion of
> Newton (assuming the leadership pipeline I've built for Kolla wants to run
> for Kolla PTL), and would be pleased to lead a cross project effort in
> Ocata on moving from single-vendor to multi-vendor and beyond if there is
> enough PTL interest.  I take 

Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Jay Pipes

On 08/03/2016 10:03 AM, Matt Riedemann wrote:

On 8/2/2016 1:11 AM, han.ro...@zte.com.cn wrote:

patchset url: https://review.openstack.org/#/c/334747/

Allow "revert_resize" to recover error instance after resize/migrate.

When resizing/migrating an instance, if an error occurs on the source compute node,
the instance state can currently roll back to active. But if an error occurs in
the "finish_resize" function on the destination compute node, the instance state
does not roll back to active.

This patch rolls the instance state back from error to active when
a resize or migrate action fails on the destination compute node.

Best,

Rong Han


I lean toward yes for this needing a microversion as it's a behavior
change and without a microversion, how am I as an end user of the API
supposed to have any idea that I can perform this action and have a
chance of it working? We've said the same thing for other stuff like
this, like being able to rescue a volume-backed instance:

https://review.openstack.org/#/c/270288/18/nova/compute/api.py


This is a grey area for sure. Personally, I kind of view this as a bug 
fix, not really a behaviour change that needs to be communicated to the 
API end user. But I suppose I could see the case for needing a 
microversion, so I'll just go along with whatever the API WG thinks.


Best,
-jay




[openstack-dev] [deb-packaging][infra] We need help with merging important change requests

2016-08-03 Thread Ivan Udovichenko
Hello,

Recently, the change request in which Thomas Goirand (zigo) proposed a job [1] to
build generic Debian packages was finally merged. After the CR to the
openstack/deb-spice-html5 project [2] we faced an issue where the Infra
repository is not used during the package build job. Paul Belanger
has already modified the required script [3] to produce a source list for APT
which can be linked at any time. I've already linked it in the
pkgdeb-build-pkg Jenkins job and the change [4] is up for review.

We also need another change request from Thomas Goirand [5] to be merged
so that we are able to backport packages from Sid.

Please spend some time reviewing and merging CRs [4] and [5].
Monty Taylor, we also need a +1 from you as PTL of the packaging-deb team.

Once required changes are merged we can move forward with adding other
required projects [6] and make progress with packaging.

Thank you.


Already merged:
[1] https://review.openstack.org/307742
[2] https://review.openstack.org/347946
[3] https://review.openstack.org/341856

Requires your attention:
[4] https://review.openstack.org/350056
[5] https://review.openstack.org/346095

TODO:
[6] https://review.openstack.org/347047



Re: [openstack-dev] [E] [TripleO] scripts to do post deployment analysis of an overcloud

2016-08-03 Thread Joe Talerico
On Wed, Jul 27, 2016 at 2:04 AM, Hugh Brock  wrote:
> On Jul 26, 2016 8:08 PM, "Gordon, Kent" 
> wrote:
>>
>>
>>
>>
>>
>>
>> > -Original Message-
>> > From: Gonéri Le Bouder [mailto:gon...@lebouder.net]
>> > Sent: Tuesday, July 26, 2016 12:24 PM
>> > To: openstack-dev@lists.openstack.org
>> > Subject: [E] [openstack-dev] [TripleO] scripts to do post deployment
>> > analysis
>> > of an overcloud
>> >
>> > Hi all,
>> >
>> > For the Distributed-CI[0] project, we did two scripts[1] that we use to
>> > extract
>>
>> Links not included in message
>>
>> > information from an overcloud.
>> > We use this information to improve the readability of the deployment
>> > logs.
>> > I attached an example to show how we use the extracted stack
>> > information.
>> >
>> > Now my question, do you know some other tools that we can use to do this
>> > kind of anaylsis?
>> > --
>> > Gonéri Le Bouder
>>
>> Kent S. Gordon
>
> Joe, any overlap with Browbeat here?
>
> -Hugh

Hey Hugh- Not from what I can tell... Any reason this tool couldn't be
built into the openstack-api?

This seems to be looking at the heat information? Browbeat will create
an ansible inventory, log in to an Overcloud deployment, and check on
settings and the cluster, vs. looking at heat.

Joe

>


Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Alex Xu
2016-08-03 22:03 GMT+08:00 Matt Riedemann :

> On 8/2/2016 1:11 AM, han.ro...@zte.com.cn wrote:
>
>> patchset url: https://review.openstack.org/#/c/334747/
>>
>>
>>
>> Allow "revert_resize" to recover error instance after resize/migrate.
>>
>> When resize/migrate instance, if error occurs on source compute node,
>> instance state can rollback to active currently. But if error occurs in
>> "finish_resize" function on destination compute node, the instance state
>> would not rollback to active.
>>
>> This patch is to rollback instance state from error to active when
>> resize or migrate action failed on destination compute node..
>>
>>
>>
>>
>>
>> Best,
>>
>> Rong Han
>>
>>
>>
>>
> I lean toward yes for this needing a microversion as it's a behavior
> change and without a microversion, how am I as an end user of the API
> supposed to have any idea that I can perform this action and have a chance
> of it working? We've said the same thing for other stuff like this, like
> being able to rescue a volume-backed instance:
>
> https://review.openstack.org/#/c/270288/18/nova/compute/api.py



At least, I didn't get a good reason for not using a microversion.


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-03 Thread Christian Schwede
Thanks Steven for your feedback! Please see my answers inline.

On 02.08.16 23:46, Steven Hardy wrote:
> On Tue, Aug 02, 2016 at 09:36:45PM +0200, Christian Schwede wrote:
>> Hello everyone,
>>
>> I'd like to improve the Swift deployments done by TripleO. There are a
>> few problems today when deployed with the current defaults:
> 
> Thanks for digging into this, I'm aware this has been something of a
> known-issue for some time, so it's great to see it getting addressed :)
> 
> Some comments inline;
> 
>> 1. Adding new nodes (or replacing existing nodes) is not possible,
>> because the rings are built locally on each host and a new node doesn't
>> know about the "history" of the rings. Therefore rings might become
>> different on the nodes, and that results in an unusable state eventually.
>>
>> 2. The rings are only using a single device, and it seems that this is
>> just a directory and not a mountpoint with a real device. Therefore data
>> is stored on the root device - even if you have 100TB disk space in the
>> background. If not fixed manually your root device will run out of space
>> eventually.
>>
>> 3. Even if a real disk is mounted in /srv/node, replacing a faulty disk
>> is much more troublesome. Normally you would simply unmount a disk, and
>> then replace the disk sometime later. But because mount_check is set to
>> False in the storage servers data will be written to the root device in
>> the meantime; and when you finally mount the disk again, you can't
>> simply cleanup.
>>
>> 4. In general, it's not possible to change cluster layout (using
>> different zones/regions/partition power/device weight, slowly adding new
>> devices to avoid 25% of the data will be moved immediately when adding
>> new nodes to a small cluster, ...). You could manually manage your
>> rings, but they will be overwritten finally when updating your overcloud.
>>
>> 5. Missing erasure coding support (or storage policies in general)
>>
>> This sounds bad, however most of the current issues can be fixed using
>> customized templates and some tooling to create the rings in advance on
>> the undercloud node.
>>
>> The information about all the devices can be collected from the
>> introspection data, and by using node placement the nodenames in the
>> rings are known in advance if the nodes are not yet powered on. This
>> ensures a consistent ring state, and an operator can modify the rings if
>> needed and to customize the cluster layout.
>>
>> Using some customized templates we can already do the following:
>> - disable ring building on the nodes
>> - create filesystems on the extra blockdevices
>> - copy ringfiles from the undercloud, using pre-built rings
>> - enable mount_check by default
>> - (define storage policies if needed)
>>
>> I started working on a POC using tripleo-quickstart, some custom
>> templates and a small Python tool to build rings based on the
>> introspection data:
>>
>> https://github.com/cschwede/tripleo-swift-ring-tool
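The core idea of building rings on the undercloud can be sketched as follows: collect each node's block devices from introspection data, then turn them into the device list a ring builder consumes. The data shape and function name below are simplified assumptions for illustration, not the tool's actual API:

```python
def ring_devices(node_disks, port=6000, weight=100):
    """Flatten per-node introspection data into ring device entries."""
    devs = []
    for host, disks in sorted(node_disks.items()):
        for disk in disks:
            devs.append({"host": host, "port": port,
                         "device": disk, "weight": weight})
    return devs


# Example introspection data (hypothetical node names and disks).
devices = ring_devices({
    "overcloud-objectstorage-0": ["sdb", "sdc"],
    "overcloud-objectstorage-1": ["sdb"],
})
```

Because node names are known in advance via node placement, this list (and therefore the rings built from it) is identical regardless of which node consumes it, which is what fixes the diverging-rings problem.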
>>
>> I'd like to get some feedback on the tool and templates.
>>
>> - Does this make sense to you?
> 
> Yes, I think the basic workflow described should work, and it's good to see
> that you're passing the ring data via swift as this is consistent with how
> we already pass some data to nodes via our DeployArtifacts interface:
> 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.yaml
> 
> Note however that there are no credentials to access the undercloud swift
> on the nodes, so you'll need to pass a tempurl reference in (which is what
> we do for deploy artifacts, obviously you will have credentials to create
> the container & tempurl on the undercloud).

Ah, that's very useful! I updated my POC; makes one less customized
template and less code to support in the Python tool. Works as expected!
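For anyone following along, the tempurl mechanism used here is just an HMAC-SHA1 signature over the request method, expiry timestamp, and object path, appended to the URL as query parameters. A minimal sketch (the key and path below are made-up examples, not real credentials):

```python
import hmac
from hashlib import sha1


def temp_url(path, key, expires, method="GET"):
    """Build a Swift-style tempurl: sign METHOD\\nEXPIRES\\nPATH with HMAC-SHA1."""
    body = "%s\n%s\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires)


url = temp_url("/v1/AUTH_uc/rings/swift-rings.tar.gz", "secret", 1470268800)
```

Anyone holding the URL can fetch the object until the expiry passes, which is why the overcloud nodes need no undercloud credentials to pull the ring data.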

> One slight concern I have is mandating the use of predictable placement -
> it'd be nice to think about ways we might avoid that but the undercloud
> centric approach seems OK for a first pass (in either case I think the
> delivery via swift will be the same).

Do you mean the predictable artifact filename? We could just add a
randomized prefix to the filename IMO.

>> - How (and where) could we integrate this upstream?
> 
> So I think the DeployArtefacts interface may work for this, and we have a
> helper script that can upload data to swift:
> 
> https://github.com/openstack/tripleo-common/blob/master/scripts/upload-swift-artifacts
> 
> This basically pushes a tarball to swift, creates a tempurl, then creates a
> file ($HOME/.tripleo/environments/deployment-artifacts.yaml) which is
> automatically read by tripleoclient on deployment.
> 
> DeployArtifactURLs is already a list, but we'll need to test and confirm we
> can pass both e.g swift ring data and updated puppet modules at the same
> time.

If I see this correctly, the artifacts are deployed just before Puppet
runs; and the Swift rings don't affect the Puppet modules, so that
should be 

Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-03 Thread Matt Riedemann

On 8/2/2016 1:11 AM, han.ro...@zte.com.cn wrote:

patchset url: https://review.openstack.org/#/c/334747/



Allow "revert_resize" to recover error instance after resize/migrate.

When resize/migrate instance, if error occurs on source compute node,
instance state can rollback to active currently. But if error occurs in
"finish_resize" function on destination compute node, the instance state
would not rollback to active.

This patch is to rollback instance state from error to active when
resize or migrate action failed on destination compute node..





Best,

Rong Han





I lean toward yes for this needing a microversion as it's a behavior 
change and without a microversion, how am I as an end user of the API 
supposed to have any idea that I can perform this action and have a 
chance of it working? We've said the same thing for other stuff like 
this, like being able to rescue a volume-backed instance:


https://review.openstack.org/#/c/270288/18/nova/compute/api.py

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [juju charms] How to configure glance charm for specific cnider backend?

2016-08-03 Thread Andrey Pavlov
Hi,

Maybe I understood this option -

Instead of adding one more relation to glance, I can relate my charm to the new
relation 'cinder-volume-service'.

So there are no more changes in the glance charm.
And the addition to my charm will be -

metadata.yaml -

subordinate: true
provides:
  image-backend:
    interface: cinder
    scope: container


and then I can relate my charm to glance (and my charm will be installed in
the same container as glance, so I can configure the OS):

juju add-relation glance:cinder-volume-service scaleio-openstack:image-backend

This option works for me - I don't need to add anything to the glance config.
I only need to add files to /etc/glance/rootwrap.d/,
and this option allows me to do that.

I've made an additional review - https://review.openstack.org/#/c/350565/
But I don't need it anymore :)
Should I abandon it or not?


On Tue, Aug 2, 2016 at 6:15 PM, James Page  wrote:

> Hi Andrey
>
> On Tue, 2 Aug 2016 at 15:59 Andrey Pavlov  wrote:
>
>> I need to add glance support via storing images in cinder instead of
>> local files.
>> (This works only from Mitaka version due to glance-store package)
>>
>
> OK
>
>
>> First step I've made here -
>> https://review.openstack.org/#/c/348336/
>> This patchset adds ability to relate glance-charm to cinder-charm
>> (it's similar to ceph/swift relations)
>>
>
> Looks like a good start, I'll comment directly on the review with any
> specific comments.
>
>
>> And also it configures glance's rootwrap - original glance package
>> doesn't have such code
>> (
>>   I think that this is a bug in glance-common package - cinder and
>> nova can do it themselves.
>>   And if someone point me to bugtracker - I will file the bug there.
>> )
>>
>
> Sounds like this should be in the glance package:
>
>   https://bugs.launchpad.net/ubuntu/+source/glance/+filebug
>
>  or use:
>
>   ubuntu-bug glance-common
>
> on an installed system.
>
>
>> But main question is about additional configurations' steps -
>> Some cinder backends need to store additional files in
>> /etc/glance/rootwrap.d/ folder.
>> I have two options to implement this -
>> 1) relate my charm to glance:juju-info (it will be run on the same
>> machine as glance)
>> and do all work in this hook in my charm.
>> 2) add one more relation to glance - like
>> 'storage-backend:cinder-backend' in cinder.
>> And write code in a same way - with ability to pass config options.
>>
>
>> I prefer option 2. It's more logical and more general. It will allow
>> to configure any cinder's backend.
>>
>
> +1 the subordinate approach in cinder (and nova) works well; let's ensure
> the semantics on the relation data mean it's easy to restart the glance
> services from the subordinate service if need be.
>
> Taking this a step further, it might also make sense to have the relation
> to cinder on the subordinate charm and pass up the data item to configure
> glance to use cinder from the sub - does that make sense in this context?
>
> Cheers
>
> James
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind regards,
Andrey Pavlov.


Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-08-03 Thread Znoinski, Waldemar


 >-Original Message-
 >From: Znoinski, Waldemar [mailto:waldemar.znoin...@intel.com]
 >Sent: Monday, August 1, 2016 6:52 PM
 >To: OpenStack Development Mailing List (not for usage questions)
 >
 >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >
 >
 > >-Original Message-
 > >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 > >Sent: Friday, July 29, 2016 6:37 PM
 > >To: openstack-dev@lists.openstack.org
 > >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >  >On 7/29/2016 10:47 AM, Znoinski, Waldemar wrote:
 > >> Hi Matt et al,
 > >> Thanks for taking the time to have a chat about it in the Nova meeting
 > >> yesterday.
 > >> In relation to your two points below...
 > >>
 > >> 1. The tempest-dsvm-ovsdpdk-nfv-networking job in our Intel NFV CI was
 > >> broken for about a day until we troubleshot the issue and found that the
 > >> merge of this [1] change had started to cause our troubles.
 > >> We set Q_USE_PROVIDERNET_FOR_PUBLIC back to False to let the job get
 > >> green again and test what it should be testing - nova/neutron changes -
 > >> rather than giving false negatives because of that devstack change.
 > >> We saw a revert [2] of the above change shortly after, as it was breaking
 > >> Jenkins neutron's linuxbridge tempest too [3].
 > >>
 > >> 2. Our aim is to have two things tested when a new change is proposed to
 > >> devstack: NFV and OVS+DPDK. For better clarity we'll run two separate
 > >> jobs instead of having NFV+OVSDPDK together.
 > >> Currently we run OVSDPDK+ODL on devstack changes to discover potential
 > >> issues with configuring these two together with each devstack change
 > >> proposed. We've discussed this internally and we can add (or replace
 > >> the OVSDPDK+ODL job with) a 'tempest-dsvm-full-nfv' job (currently
 > >> running on Nova changes) that does devstack + runs the full tempest test
 > >> suite (1100+ tests) on NFV-enabled flavors. It should properly test
 > >> proposed devstack changes with the NFV features (as per wiki [4]) we
 > >> have enabled in OpenStack.
 > >>
 > >> Let me know if there are other questions, concerns, asks or suggestions.
 > >>
 > >> Thanks
 > >> Waldek
 > >>
 > >>
 > >> [1] https://review.openstack.org/#/c/343072/
 > >> [2] https://review.openstack.org/#/c/345820/
 > >> [3] https://bugs.launchpad.net/devstack/+bug/1605423
 > >> [4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 > >>
 > >>
 > >>  >-Original Message-
 > >>  >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 > >>  >Sent: Thursday, July 28, 2016 4:14 PM
 > >>  >To: openstack-dev@lists.openstack.org
 > >>  >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 > >>  >
 > >>  >On 7/21/2016 5:38 AM, Znoinski, Waldemar wrote:
 > >>  >> Hi Nova cores et al,
 > >>  >>
 > >>  >> I would like to acquire voting (+/-1 Verified) permission for our
 > >>  >> Intel NFV CI.
 > >>  >>
 > >>  >> 1.   It's been running since Q1'2015.
 > >>  >> 2.   Wiki [1].
 > >>  >> 3.   It's using openstack-infra/puppet-openstackci with Zuul 2.1.1
 > >>  >> for the last 4 months: zuul, gearman, Jenkins, nodepool, local
 > >>  >> Openstack cloud.
 > >>  >> 4.   We have a team of 2 people + me + Nagios looking after it. Its
 > >>  >> problems are fixed promptly and rechecks triggered after non-code
 > >>  >> related issues. It's being reconciled against ci-watch [2].
 > >>  >> 5.   Reviews [3].
 > >>  >>
 > >>  >> Let me know if further questions.
 > >>  >>
 > >>  >> 1.   https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 > >>  >> 2.   http://ci-watch.tintri.com/project?project=nova
 > >>  >> 3.   https://review.openstack.org/#/q/reviewer:%22Intel+NFV-CI+%253Copenstack-nfv-ci%2540intel.com%253E%22
 > >>  >>
 > >>  >> *Waldek*
Re: [openstack-dev] [Cinder] Pending removal of Scality volume driver

2016-08-03 Thread Nicolas Trangez
Sean,

Jordan, who's managing this, is currently on holiday. Furthermore,
we're experiencing some hardware issues with the lab used by the CI
system to run the tests, which is complicating things.

Any chance we could get some more time for Jordan to look into this?

Thanks,

Nicolas

On Wed, 2016-08-03 at 08:55 -0300, Erlon Cruz wrote:
> Hi Sean, I think it would be worth CCing the contact info listed in
> the CI Wiki (openstack...@scality.com).
> 
> On Tue, Aug 2, 2016 at 4:26 PM, Sean McGinnis 
> wrote:
> 
> > 
> > Tomorrow is the one week grace period. I just ran the last comment
> > script and it still shows it's been 112 days since the Scality CI
> > has
> > reported on a patch.
> > 
> > Please let me know the status of the CI.
> > 
> > On Thu, Jul 28, 2016 at 07:28:26AM -0500, Sean McGinnis wrote:
> > > 
> > > On Thu, Jul 28, 2016 at 11:28:42AM +0200, Jordan Pittier wrote:
> > > > 
> > > > Hi Sean,
> > > > 
> > > > Thanks for the heads up.
> > > > 
> > > > On Wed, Jul 27, 2016 at 11:13 PM, Sean McGinnis  > > > gmx.com
> > > 
> > > > 
> > > > wrote:
> > > > 
> > > > > 
> > > > > The Cinder policy for driver CI requires that all volume
> > > > > drivers
> > > > > have a CI reporting on any new patchset. CI's may have some
> > > > > down
> > > > > time, but if they do not report within a two week period they
> > > > > are
> > > > > considered out of compliance with our policy.
> > > > > 
> > > > > This is a notification that the Scality OpenStack CI is out
> > > > > of
> > compliance.
> > > 
> > > > 
> > > > > 
> > > > > It has not reported since April 12th, 2016.
> > > > > 
> > > > Our CI is still running for every patchset, just that it
> > > > doesn't report
> > > > back to Gerrit. I'll see what I can do about it.
> > > 
> > > Great! I'll watch for it to start reporting again. Thanks for
> > > being
> > > responsive and looking into it.
> > > 
> > > > 
> > > > 
> > > > > 
> > > > > 
> > > > > The patch for driver removal has been posted here:
> > > > > 
> > > > > https://review.openstack.org/348032/
> > > > 
> > > > That link is about the Tegile driver, not ours.
> > > 
> > > Oops, copy/paste error. Here is the correct one:
> > > 
> > > https://review.openstack.org/#/c/348042/
> > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 

-- 
 



[openstack-dev] [trove] Weekly meeting today, canceled today

2016-08-03 Thread Amrith Kumar
I'm going to cancel today's weekly trove meeting for a couple of reasons. We
had our mid-cycle last week and went through a lot of the backlog of items, and
there are a couple of people who are out today, so attendance and the agenda
are kind of thin.

If there is something urgent that you would like to discuss, let's catch up on 
#openstack-trove.

Thanks,

-amrith




Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-03 Thread Jay Pipes

On 08/03/2016 04:35 AM, Tuan Luong wrote:

Hi,

When we try to add an ephemeral disk when booting an instance, as we know,
it will create disk.local and the backing file in _base. Both of them
refer to the ephemeral disk. When nova reports the disk usage of a
compute node, does it count both of them as used disk? What I see in the
resource tracker is that it calculates instance['ephemeral_gb'].


The resource tracker reports disk usage by adding up the root_gb + 
ephemeral_gb values of the *flavor* attribute of the instances assigned 
to that compute node.
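A minimal sketch of that accounting (hypothetical instance data; the real
logic lives in Nova's resource tracker, which this only approximates):

```python
# Disk usage is tallied from flavor attributes, not from the on-disk
# files (disk.local, _base backing files). Data below is made up.
def disk_used_gb(instances):
    """Sum root_gb + ephemeral_gb across each instance's flavor."""
    return sum(i["flavor"]["root_gb"] + i["flavor"]["ephemeral_gb"]
               for i in instances)

instances = [
    {"flavor": {"root_gb": 20, "ephemeral_gb": 10}},
    {"flavor": {"root_gb": 40, "ephemeral_gb": 0}},
]
print(disk_used_gb(instances))  # 70
```

So the backing file in _base is not double-counted: only the flavor's
advertised sizes enter the total.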


Does that answer your question?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.db 4.10.0 release (newton)

2016-08-03 Thread no-reply
We are psyched to announce the release of:

oslo.db 4.10.0: Oslo Database library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

Changes in oslo.db 4.9.0..4.10.0


3277ef3 Capture DatabaseError for deadlock check


Diffstat (except docs and test files)
-

oslo_db/sqlalchemy/exc_filters.py | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Exception registering nodes when Ironic API with apache

2016-08-03 Thread Emilien Macchi
On Wed, Aug 3, 2016 at 6:06 AM, Lucas Alvares Gomes
 wrote:
> Hi,
>
>> On Friday we landed a patch in TripleO to deploy Ironic API in WSGI with 
>> Apache.
>> Since then, our baremetal CI jobs were failing randomly, and a lot of
>> time, with an exception.
>> I decided to revert the patch to bring our CI back:
>> https://review.openstack.org/#/c/349281/ (+2 from slagle).
>>
>
> (most a FYI...)
>
> I haven't dug to see what the puppet modules are doing, but we do have
> documentation around setting up the ironic-api service to run behind
> Apache with mod_wsgi:
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configuring-ironic-api-behind-mod-wsgi
>
> I've followed the instructions and got it working in a local
> environment here (logs: http://paste.openstack.org/show/547736/)
>
> Hope that helps,

Well, not really :-) It might work in devstack, but it really breaks TripleO.
We currently configure Apache with the same template of configuration
for Nova, Cinder, Keystone, Ceilometer, Aodh, Gnocchi, Mistral, (...)
and Ironic.
Ironic is the only service that looks unstable when deploying with Apache.

Please look at Ironic vhost:
https://paste.fedoraproject.org/400663/02264641/

And Ironic configuration file:
ironic: https://paste.fedoraproject.org/400666/26543147
ironic-inspector: https://paste.fedoraproject.org/400665/70226518

Do you see something not conventional?

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] 答复: [neutron][dvr][fip] fg device allocated private ip address

2016-08-03 Thread huangdenghui
Hi Zhuna,
I think Carl means that there is no need for a dynamic routing protocol here;
proxy ARP is enough. But I have a concern here: shouldn't the upstream router
need a mechanism to accept the private gateway IP address as the ARP source
protocol address, since the fg device has a private IP address?






At 2016-08-03 15:22:15, "zhuna"  wrote:


Hi Carl,

 

IMO, if the upstream router has a route to the floating IP subnet, there is no
need to assign an additional IP address to the router.

 

For example, there are 2 subnets in external network,

Subnet1: 10.0.0.0/24 (fg ip address)

Subnet2: 9.0.0.0/24 (fip)

 

Suppose we assign fip 9.0.0.10 to vm1, and the fg IP address is 10.0.0.10, so
there are two IP addresses configured on fg: 9.0.0.10 and 10.0.0.10.

+---+
|   router ns   |
+---+
| fg (10.0.0.10, 9.0.0.10)
|
| router-if (10.0.0.1)
+---+
|  upstream router   | Internet
+---+

   

The default route of the router ns is 10.0.0.1. Add a static route "9.0.0.10/32
via 10.0.0.10" to the upstream router, or let it learn the route via a routing
protocol (neutron-dynamic-routing).
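The addressing in this example can be sanity-checked with Python's ipaddress
module (a sketch using only the subnets from this mail, not Neutron code):

```python
import ipaddress

subnet1 = ipaddress.ip_network("10.0.0.0/24")  # fg address subnet
subnet2 = ipaddress.ip_network("9.0.0.0/24")   # floating IP subnet
fg_ip = ipaddress.ip_address("10.0.0.10")
fip = ipaddress.ip_address("9.0.0.10")

# The static route proposed for the upstream router: 9.0.0.10/32 via 10.0.0.10.
host_route = ipaddress.ip_network("9.0.0.10/32")

assert fg_ip in subnet1   # the nexthop is reachable on-link via subnet1
assert fip in subnet2     # the fip comes from the second subnet
assert fip in host_route  # the /32 covers exactly the fip
print("route", host_route, "via", fg_ip)
```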

 

 

 

From: Carl Baldwin [mailto:c...@ecbaldwin.net]
Sent: August 3, 2016 6:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip
address

 

 

 

On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:

Hi John and Brian,
Thanks for your information. If we get patch [1] and patch [2] merged, then fg
can allocate a private IP address. After that, we need to consider the floating
IP dataplane. In the current DVR implementation, fg is used for reachability
testing of floating IPs. Now, with the subnet-types BP, fg has a different
subnet than the floating IP address. From the fg subnet gateway's point of
view, to reach a floating IP it needs a route entry: the destination is the
floating IP address and the fg IP address is the nexthop. This route entry
needs to be populated when the floating IP is created and deleted when the
floating IP is disassociated. Any comments?

 

The fg device will still do proxy arp for the floating ip to other devices on 
the external network. This will be part of our testing. The upstream router 
should still have an on-link route on the network to the floating ip subnet. 
IOW, you shouldn't replace the floating ip subnet with the private fg subnet on 
the upstream router. You should add the new subnet to the already existing ones 
and the router should have an additional IP address on the new subnet to be 
used as the gateway address for north-bound traffic.
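The proxy-ARP behaviour described here can be sketched as a toy predicate
(illustrative only: the hosted floating IPs are made up, and this is not the
actual Neutron l3-agent logic):

```python
# fg answers ARP requests for the floating IPs it hosts, even though its
# own address sits on a different (private) subnet. Hypothetical data.
hosted_fips = {"9.0.0.10", "9.0.0.11"}

def fg_answers_arp(target_ip):
    """Proxy ARP: reply only for floating IPs configured on this fg."""
    return target_ip in hosted_fips

print(fg_answers_arp("9.0.0.10"))  # True
print(fg_answers_arp("9.0.0.99"))  # False
```

This is why the upstream router still needs the on-link route to the floating
IP subnet: ARP resolution for each fip terminates at fg.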

 

Carl


Re: [openstack-dev] [tripleo] [ironic] Exception registering nodes when Ironic API with apache

2016-08-03 Thread Jim Rollenhagen
On Sun, Jul 31, 2016 at 07:01:16PM -0400, Emilien Macchi wrote:
> Hi,
> 
> On Friday we landed a patch in TripleO to deploy Ironic API in WSGI with 
> Apache.
> Since then, our baremetal CI jobs were failing randomly, and a lot of
> time, with an exception.
> I decided to revert the patch to bring our CI back:
> https://review.openstack.org/#/c/349281/ (+2 from slagle).
> 
> Ironic experts, please have a look at the bug report:
> https://bugs.launchpad.net/tripleo/+bug/1608252

I see that the reverted patch touches rabbit config somehow:
Class['::rabbitmq'] -> Service['httpd']

And the bug report has lots of errors about rabbit messages timing out.

Sounds related; what does that line do?

// jim

> 
> I'll propose to re-activate it so we can debug more of what it fails.
> 
> Thanks,
> -- 
> Emilien Macchi
> 



Re: [openstack-dev] [Cinder] Pending removal of Scality volume driver

2016-08-03 Thread Erlon Cruz
Hi Sean, I think it would be worth CCing the contact info listed in the CI
Wiki (openstack...@scality.com).

On Tue, Aug 2, 2016 at 4:26 PM, Sean McGinnis  wrote:

> Tomorrow is the one week grace period. I just ran the last comment
> script and it still shows it's been 112 days since the Scality CI has
> reported on a patch.
>
> Please let me know the status of the CI.
>
> On Thu, Jul 28, 2016 at 07:28:26AM -0500, Sean McGinnis wrote:
> > On Thu, Jul 28, 2016 at 11:28:42AM +0200, Jordan Pittier wrote:
> > > Hi Sean,
> > >
> > > Thanks for the heads up.
> > >
> > > On Wed, Jul 27, 2016 at 11:13 PM, Sean McGinnis  >
> > > wrote:
> > >
> > > > The Cinder policy for driver CI requires that all volume drivers
> > > > have a CI reporting on any new patchset. CI's may have some down
> > > > time, but if they do not report within a two week period they are
> > > > considered out of compliance with our policy.
> > > >
> > > > This is a notification that the Scality OpenStack CI is out of
> compliance.
> > > > It has not reported since April 12th, 2016.
> > > >
> > > Our CI is still running for every patchset, just that it doesn't report
> > > back to Gerrit. I'll see what I can do about it.
> >
> > Great! I'll watch for it to start reporting again. Thanks for being
> > responsive and looking into it.
> >
> > >
> > > >
> > > > The patch for driver removal has been posted here:
> > > >
> > > > https://review.openstack.org/348032/
> > >
> > > That link is about the Tegile driver, not ours.
> >
> > Oops, copy/paste error. Here is the correct one:
> >
> > https://review.openstack.org/#/c/348042/
> >
> > >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tc] Stepping Down.

2016-08-03 Thread Davanum Srinivas
Thanks for your guidance/help/work/time Morgan!

-- Dims

On Tue, Aug 2, 2016 at 4:18 PM, Morgan Fainberg
 wrote:
> Based upon my personal time demands among a number of other reasons I will
> be stepping down from the Technical Committee. This is planned to take
> effect with the next TC election so that my seat will be up to be filled at
> that time.
>
> For those who elected me in, thank you.
>
> Regards,
> --Morgan Fainberg
> IRC: notmorgan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [horizon] [horizon-plugin] AngularJS 1.5.8

2016-08-03 Thread Rob Cresswell
Hi all,

Angular 1.5.8 is now updated in its XStatic repo: 
https://github.com/openstack/xstatic-angular

I've done some manual testing of the angular content and found no issues so 
far. I'll be checking that the JS tests and integration tests pass too; if they 
do, would it be desirable to release 1.5.8 this week, or wait until after N is 
released? It'd be nice to be in sync with current stable, but I don't want to 
cause unnecessary work a few weeks before plugin FF.

Thoughts?

Rob


Re: [openstack-dev] [horizon] Angular panel enable/disable not overridable in local_settings

2016-08-03 Thread Rob Cresswell
Ah, I see. I think I can understand why that was done initially. But since
using UPDATE_HORIZON_CONFIG already goes beyond the scope of the enabled file
(it directly affects an exposed setting), I think we should just use the
setting (and document it appropriately in settings.rst).
Rob

On 2 August 2016 at 23:39, Richard Jones wrote:
On 3 August 2016 at 00:32, Rob Cresswell wrote:
Hi all,

So we seem to be adopting a pattern of using UPDATE_HORIZON_CONFIG in the
enabled files to add a legacy/angular toggle to the settings. I don't like
this, because in settings.py the enabled files are processed *after*
local_settings.py is imported, meaning the angular panel will always be
enabled and would require a local/enabled file change to disable it.

My suggestion would be:

- Remove current UPDATE_HORIZON_CONFIG change in the swift panel and images 
panel patch
- Add equivalents ('angular') to the settings.py HORIZON_CONFIG dict, and then 
the 'legacy' version to the test settings.

I think that should run UTs as expected, and allow the legacy/angular panel to 
be toggled via local_settings.

Was there a reason we chose to use UPDATE_HORIZON_CONFIG, rather than just 
updating the dict in settings.py? I couldn't recall a reason, and the original 
patch ( https://review.openstack.org/#/c/293168/ ) doesn't seem to indicate why.

It was an attempt to keep the change more self-contained, and since 
UPDATE_HORIZON_CONFIG existed, it seemed reasonable to use it. It meant that 
all the configuration regarding the visibility of the panel was in one place, 
and since it's expected that deployers edit enabled files, I guess your concern 
stated above didn't come into it.

I'm ambivalent about the change you propose, would be OK going either way :-)


 Richard




Re: [openstack-dev] [tripleo] [ironic] Exception registering nodes when Ironic API with apache

2016-08-03 Thread Lucas Alvares Gomes
Hi,

> On Friday we landed a patch in TripleO to deploy Ironic API in WSGI with 
> Apache.
> Since then, our baremetal CI jobs were failing randomly, and a lot of
> time, with an exception.
> I decided to revert the patch to bring our CI back:
> https://review.openstack.org/#/c/349281/ (+2 from slagle).
>

(most a FYI...)

I haven't dug to see what the puppet modules are doing, but we do have
documentation around setting up the ironic-api service to run behind
Apache with mod_wsgi:
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configuring-ironic-api-behind-mod-wsgi

I've followed the instructions and got it working in a local
environment here (logs: http://paste.openstack.org/show/547736/)

Hope that helps,
Lucas



Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-03 Thread Flavio Percoco

On 02/08/16 15:13 +, Hayes, Graham wrote:

On 02/08/2016 15:42, Flavio Percoco wrote:

On 01/08/16 10:19 -0400, Sean Dague wrote:

On 08/01/2016 09:58 AM, Davanum Srinivas wrote:

Thierry, Ben, Doug,

How can we distinguish between "Project is doing the right thing, but
others are not joining" vs "Project is actively trying to keep people
out"?


I think at some level, it's not really that different. If we treat them
as different, everyone will always believe they did all the right
things, but got no results. 3 cycles should be plenty of time to drop
single entity contributions below 90%. That means prioritizing bugs /
patches from outside groups (to drop below 90% on code commits),
mentoring every outside member that provides feedback (to drop below 90%
on reviews), shifting development resources towards mentoring / docs /
on ramp exercises for others in the community (to drop below 90% on core
team).

Digging out of a single vendor status is hard, and requires making that
your top priority. If teams aren't interested in putting that ahead of
development work, that's fine, but that doesn't make it a sustainable
OpenStack project.
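For concreteness, the 90% threshold described above amounts to a
share-of-contributions calculation; a toy sketch with made-up commit data
(the real metrics also count reviews and core-team membership):

```python
from collections import Counter

# Hypothetical commit log: (author, employer) pairs.
commits = [("alice", "VendorA")] * 46 + [("bob", "VendorA")] * 40 + \
          [("carol", "VendorB")] * 9 + [("dave", "VendorC")] * 5

by_company = Counter(company for _, company in commits)
top_share = max(by_company.values()) / len(commits)
print(f"top company share: {top_share:.0%}")
print("single-vendor" if top_share >= 0.9 else "below the 90% line")
```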



++ to the above! I don't think they are that different either and we might not
need to differentiate them after all.

Flavio



I do have one question - how are teams getting out of
"team:single-vendor" and towards "team:diverse-affiliation" ?

We have tried to get more people involved with Designate using the ways
we know how - doing integrations with other projects, pushing designate
at conferences, helping DNS Server vendors to add drivers, adding
drivers for DNS Servers and service providers ourselves, adding
features - the lot.

We have a lot of user interest (41% of users were interested in using
us), and are quite widely deployed for a non tc-approved-release
project (17% - 5% in production). We are actually the most deployed
non tc-approved-release project.

We still have 81% of the reviews done by 2 companies, and 83% by 3
companies.

I know our project is not "cool", and DNS is probably one of the most
boring topics, but I honestly believe that it has a place in the
majority of OpenStack clouds - both public and private. We are a small
team of people dedicated to making Designate the best we can, but we are
still only one company's decision to drop OpenStack / DNS development
away from joining the single-vendor party.

We are definitely interested in putting community development ahead of
development work - but what that actual work is seems to difficult to
nail down. I do feel sometimes that I am flailing in the dark trying to
improve this.

If projects could share how they got out of single-vendor or into
diverse-affiliation, it could really help teams progress in the
community and avoid being removed.

Making grand statements about "working harder on community" without any
guidance on what we need to work on does not help the community.


Zaqar has had the same issue ever since the project was created. The team has
been actively mentoring folks from the Outreachy program and Google Summer of
code whenever possible.

Folks from other teams have also contributed to the project but sometimes these
folks were also part of the same company as the majority of Zaqar's
contributors, which doesn't help much with this.

It's not until recently that Zaqar has increased its diversity, but I believe
it's on the edge, and that is also related to the amount (or lack thereof) of
adoption it's gotten.

To me, one of the most important items is engaging with mentees from other
programs. I see this also as a way to give back to the communities and the rest
of the world.

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [tricircle]agenda of weekly meeting Aug.3

2016-08-03 Thread joehuang
Hi, team,


IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on every
Wednesday starting from UTC 13:00.
Wednesday starting from UTC 13:00.



The agenda of this weekly meeting is:


# progress review on feature parity, cross-pod L2 networking, and dynamic pod
binding:
https://docs.google.com/spreadsheets/d/1yXdxGtQq_6YJUtJtjmPICyhq4SbP8E6uWyzn0zev8bI/

# open discussion


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )


Re: [openstack-dev] [neutron] [searchlight] What do we need in notification payload?

2016-08-03 Thread Hirofumi Ichihara

Thank you for your quick reply.

On 2016/08/01 23:31, McLellan, Steven wrote:

In our (Searchlight's) ideal world, every notification about a resource
would contain the full representation of that resource (for instance,
equivalent to the API response for a resource), because it means that each
notification on its own can be treated as the current state at that time
without having to potentially handle multiple incremental updates to a
resource. That isn't the case at the moment in lots of places either for
historic reasons or because the implementation would be complex or
expensive.
It seems the current Neutron implementation just adds the API response body
into the notification's payload. Therefore, in Neutron, the payload depends
on each extension's implementation and is not necessarily the full
representation of the resource.



With tags as an example, while I understand why that's the case (the API
treats tags as a separate entity and it's implemented as a separate
database table) it doesn't make a lot of logical sense to me to treat
adding a tag to a network as a separate event from (for instance) renaming
it. In both cases as far as a consumer of notifications is concerned, some
piece of information about the network changed. That said, it's obviously
up to each project how they generate notifications for events (and thanks
for taking this one on), and I understand why you don't want to add a huge
amount of complexity to the plugin code.

Thanks for summarizing main points. That's right.


One thing that would be useful is if adding a tag changes the resource's
'updated_at', and have that included in the notification. That allows us
to determine whether a notification is more up-to-date than a request at
some point in the near past to the API. I guess though that this will also
be difficult in terms of how the plugin interacts with the core code?
This is another point. I can understand your opinion. I will try to add
'updated_at' into the tag notification's payload in the future. However,
right now tag and the other extension resources cannot be used with that
feature (timestamp). I think that is a next step after implementing tag
notification.
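On the consumer side, the 'updated_at' comparison suggested above could look
like this (a sketch of Searchlight-style handling with made-up data, not
actual Searchlight code):

```python
from datetime import datetime

index = {}  # resource_id -> (updated_at, payload)

def apply_notification(resource_id, updated_at, payload):
    """Index the payload only if it is newer than what we already hold."""
    current = index.get(resource_id)
    if current is None or updated_at > current[0]:
        index[resource_id] = (updated_at, payload)
        return True
    return False  # stale notification, ignore it

t1 = datetime(2016, 8, 1, 12, 0)
t2 = datetime(2016, 8, 1, 12, 5)
apply_notification("net-1", t2, {"name": "renamed"})
print(apply_notification("net-1", t1, {"name": "old"}))  # False: stale
```

Without an 'updated_at' in the payload, the consumer cannot make this
ordering decision and must trust delivery order.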


Thanks,
Hirofumi



Thanks,

Steve

On 8/1/16, 3:33 AM, "Hirofumi Ichihara" wrote:


Hi,

I'm trying to solve an issue [1, 2] where no notification is sent when a tag
is updated. I'm worried about the payload. My solution just outputs the
added tag, the resource type, and the resource id as the payload. However,
there was a comment suggesting the payload should carry more information. I
guess that means, for instance, when we add a tag to a network, we could
include the network's name, status, description, shared flag, and so on in
the notification payload.

If the Tag plugin already had such information, I might not disagree with
that opinion, but the plugin doesn't have it now. So we would need to add a
DB read to each Tag API just for notifications. I would rather not add such
extra processing.

Does my current solution provide enough information for searchlight or other
notification systems?

[1]: https://bugs.launchpad.net/neutron/+bug/1560226
[2]: https://review.openstack.org/#/c/298133/

Thanks,
Hirofumi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev











Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-03 Thread Flavio Percoco

On 02/08/16 17:29 +0200, Thierry Carrez wrote:

Doug Hellmann wrote:

[...]

Likewise, what if the Manila project team decides they aren't interested
in supporting Python 3.5 or a particular greenlet library du jour that
has been mandated upon them? Is the only filesystem-as-a-service project
going to be booted from the tent?


I hardly think "move off of the EOL-ed version of our language" and
"use a library du jour" are in the same class.  All of the topics
discussed so far are either focused on eliminating technical debt
that project teams have not prioritized consistently or adding
features that, again for consistency, are deemed important by the
overall community (API microversioning falls in that category,
though that's an example and not in any way an approved goal right
now).


Right, the proposal is pretty clearly about setting a number of
reasonable, small goals for a release cycle that would be awesome to
collectively reach. Not really invasive top-down design mandates that we
would expect teams to want to resist.

IMHO if a team has a good reason for not wanting or not being able to
fulfill a common goal that's fine -- it just needs to get documented and
should not result in itself in getting kicked out from anything. If a
team regularly skips on common goals (and/or misses releases, and/or
doesn't fix security issues) that's a general sign that it's not really
behaving like an OpenStack project and then a case could be opened for
removal, but there is nothing new here.


I think some flexibility on this should be considered (as mentioned by Thierry
in the previous paragraph), but I'd be quite worried about extremely
special-cased projects, and I'd like us to be able to act on those cases.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc] Stepping Down.

2016-08-03 Thread Flavio Percoco

On 02/08/16 13:18 -0700, Morgan Fainberg wrote:

Based upon my personal time demands among a number of other reasons I will
be stepping down from the Technical Committee. This is planned to take
effect with the next TC election so that my seat will be up to be filled at
that time.


Thanks for the time you've served as a member of the TC and for making this
call with such a sense of responsibility. I appreciate your willingness to
recognize your shift of priorities.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Openstack] [networking-sfc] Flow classifier conflict logic

2016-08-03 Thread Artem Plakunov

100.*
$ neutron port-show 429fdb89-1bfa-4dc1-bb89-25373501ebde | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |
$ neutron port-show ca7f8fdf-a1ff-4cd7-8897-9f6ca5220be6 | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |
$ neutron port-show df8ce9a2-eddd-4b86-8d1c-705f9c96ddb6 | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |

200.*
$ neutron port-show 2c6f6f67-6241-4661-977c-3fe5da864c95 | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |
$ neutron port-show 9b20c466-f62c-4c49-a074-91a088ebb0f6 | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |
$ neutron port-show f95f2509-d27d-4b3a-b62a-b9bdb69085bf | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |

On 02.08.2016 20:00, Farhad Sunavala wrote:

Please send the tenant ids of all six neutron ports.

From admin:
neutron port-show  | grep tenant_id

Thanks,
Farhad.


On Monday, August 1, 2016 7:44 AM, Artem Plakunov wrote:



Thanks.

You said, though, that a classifier must be unique within a tenant. I tried
creating chains in two different tenants with different users and without
any RBAC rules. So there are two tenants, each with 1 network, 2 VMs
(source, service), and an admin user. I used a different openrc config for
each user, yet I still get the same conflict.


Info about the test is in the attachment
On 31.07.2016 5:25, Farhad Sunavala wrote:


Yes, this was intentionally done.
The logical-source-port is important only at the point of classification.
All successive classifications rely only on the 5 tuple and MPLS 
label (chain ID).


Consider an extension of the scenario you mention below.

Sources: (similar to your case)
a
b

Port-pairs: (added ppe and ppf)
ppc
ppd
ppe
ppf

Port-pair-groups: (added ppge and ppgf)
ppgc
ppgd
ppge
ppgf

Flow-classifiers:
fc1: logical-source-port of a && tcp
fc2: logical-source-port of b && tcp

Port-chains:
pc1: fc1 && (ppgc + ppge)
pc2: fc2 && (ppgd + ppgc + ppgf)



The flow-classifier has logical-src-port and protocol=tcp
The logical-src-port has no relevance in the middle of the chain.

In the middle of the chain, the only relevant flow-classifier is 
protocol=tcp.


If we allow it, we cannot distinguish TCP traffic coming out of ppgc 
(and subsequently ppc)

as to whether to mark it with the label for pc1 or the label for pc2.

In other words, within a tenant the flow-classifiers need to be 
unique wrt the 5 tuples.
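
A toy sketch of the logic Farhad describes (this is not the actual
networking-sfc code, just an illustration): logical-source-port only matters
at the chain's entry point, so two classifiers that are identical once it is
stripped cannot be told apart in the middle of a chain.

```python
# Toy model of the conflict logic described above (not the actual
# networking-sfc implementation): logical-source-port only matters at
# initial classification, so two classifiers that are identical once it
# is stripped are ambiguous in the middle of a chain.

def mid_chain_view(fc):
    """Drop the fields that are only meaningful at the chain entry point."""
    view = dict(fc)
    view.pop('logical_source_port', None)
    return view

def conflicts(fc1, fc2):
    return mid_chain_view(fc1) == mid_chain_view(fc2)

fc1 = {'logical_source_port': 'port-a', 'protocol': 'tcp'}
fc2 = {'logical_source_port': 'port-b', 'protocol': 'tcp'}
print(conflicts(fc1, fc2))  # True: both reduce to {'protocol': 'tcp'}
```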


thanks,
Farhad.

Date: Fri, 29 Jul 2016 18:01:05 +0300
From: Artem Plakunov
To: openst...@lists.openstack.org
Subject: [Openstack] [networking-sfc] Flow classifier conflict logic
Message-ID: <579b6fb1.3030...@lvk.cs.msu.su>

Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hello.
We have two deployments with networking-sfc:
mirantis 8.0 (liberty) and mirantis 9.0 (mitaka).

I noticed a difference in how flow classifiers conflict with each other
which I do not understand. I'm not sure if it is a bug or not.

I did the following on mitaka:
1. Create tenant 1 and network 1
2. Launch vms A and B in network 1
3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C
in network 1
4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D
in network 1
5. Setup sfc:
create two port pairs for vm C and vm D with a bidirectional port
create two port pair groups with these pairs (one pair in one group)
create flow classifier 1: logical-source-port = vm A port, protocol
= tcp
create flow classifier 2: logical-source-port = vm B port, protocol
= tcp
create chain with group 1 and classifier 1
create chain with group 2 and classifier 2 - this step gives the
following error:

Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow
Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain
d1070955-fae9-4483-be9e-0e30f2859282.
Neutron server returns request_ids:
['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']

The only thing neutron logs have is this from server.log:
2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource
[req-9d0eecec-2724-45e8-84b4-7ccf67168b03
0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -]
create failed (client error): Flow Classifier
7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier
4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain
d1070955-fae9-4483-be9e-0e30f2859282.

I tried the same in liberty and it works and sfc successfully routes
traffic from both vms to their respective port groups

Liberty setup:
neutron version 7.0.4
neutronclient version 3.1.1
networking-sfc version 1.0.0 (from pip package)

Mitaka setup:
neutron version 8.1.1
neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
networking-sfc version 1.0.1.dev74 (from master branch commit

[openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-03 Thread Tuan Luong
Hi,

When we add an ephemeral disk while booting an instance, as we know, it will 
create disk.local and the backing file in _base. Both of them refer to the 
ephemeral disk. When nova reports the disk usage of a compute node, does it 
count both of them as used disk? What I see in the resource_tracker is that 
it calculates instance['ephemeral_gb'].

Tuan



Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-03 Thread Daniel P. Berrange
On Tue, Aug 02, 2016 at 02:36:32PM +, Koniszewski, Pawel wrote:
> In Mitaka development cycle 'live_migration_flag' and 'block_migration_flag'
> have been marked as deprecated for removal. I'm working on a patch [1] to
> remove both of them and want to ask what we should do with 
> live_migration_tunnelled
> logic.
> 
> The default configuration of both flags contain VIR_MIGRATE_TUNNELLED option. 
> It is
> there to avoid the need to configure the network to allow direct communication
> between hypervisors. However, tradeoff is that it slows down all migrations 
> by up
> to 80% due to increased number of memory copies and single-threaded encryption
> mechanism in Libvirt. By 80% here I mean that transfer between source and 
> destination
> node is around 2Gb/s on a 10Gb network. I believe that this is a 
> configuration issue
> and people deploying OpenStack are not aware that live migrations with this 
> flag will
> not work. I'm not sure that this is something we wanted to achieve. AFAIK most
> operators are turning it OFF in order to make live migration usable.

FYI, when you have post-copy migration active, live migration *will* still work.

> Moving on to the new flag that preserves the ability to turn tunneling on -
> live_migration_tunnelled [2] - which is a tri-state boolean: None, False, True:
> 
> * True - means that live migrations will be tunneled through libvirt.
> * False - no tunneling, native hypervisor transport.
> * None - nova will choose default based on, e.g., the availability of native
>   encryption support in the hypervisor. (Default value)
> 
> Right now we don't have any logic implemented for None value which is a 
> default
> value. So the question here is should I implement logic so that if
> live_migration_tunnelled=None it will still use VIR_MIGRATE_TUNNELLED if 
> native
> encryption is not available? Given the impact of this flag I'm not sure that 
> we
> really want to keep it there. Another option is to change default value of
> live_migration_tunnelled to be True. In both cases we will again end up with
> slower LM and people complaining that LM does not work at all in OpenStack.
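
The tri-state resolution Pawel asks about could be sketched roughly as
follows. This is a hypothetical helper, not the actual nova code, and
`native_encryption_available` stands in for whatever hypervisor probe nova
would actually use:

```python
# Hypothetical sketch of resolving the tri-state 'live_migration_tunnelled'
# option (True / False / None). Not the actual nova implementation; the
# 'native_encryption_available' argument stands in for a hypervisor probe.

def resolve_tunnelled(live_migration_tunnelled, native_encryption_available):
    """Return whether VIR_MIGRATE_TUNNELLED should be used."""
    if live_migration_tunnelled is not None:
        # Operator made an explicit choice; honor it.
        return live_migration_tunnelled
    # Default (None): tunnel only when the hypervisor cannot encrypt natively.
    return not native_encryption_available

print(resolve_tunnelled(None, native_encryption_available=False))  # True
print(resolve_tunnelled(None, native_encryption_available=True))   # False
```

Note this is just one possible interpretation of the None default; Daniel
argues below against tunnelling by default at all.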

FWIW, I have compared libvirt tunnelled migration with TLS against native QEMU
TLS encryption and the performance is approximately the same. In both cases the
bottleneck is how fast the CPU can perform AES and we're maxing out a single
thread for that. IOW, there's no getting away from the fact that encryption is
going to have a performance impact on migration when you get into range of
10-Gig networking.

So the real question is whether we want to default to a secure or an insecure
configuration. If we default to secure config then, in future with native QEMU
TLS, this will effectively force those deploying nova to deploy x509 certs for
QEMU before they can use live migration. This would be akin to having our 
default
deployment of the public REST API mandate HTTPS and not listen on HTTP out of 
the
box. IIUC, we default to HTTP for REST APIs out of the box, which would suggest
doing the same for migration and defaulting to non-encrypted. This would mean
we do *not* need to set TUNNELLED by default.

Second, with some versions of QEMU, it is *not* possible to use tunnelled
migration in combination with block migration. We don't want to have normal
live migration and block live migration use different settings. This strongly
suggests *not* defaulting to tunnelled.

So all three points (performance, x509 deployment requirements, and block
migration limitations) point to not having TUNNELLED in the default flags,
and leaving it as an opt-in.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] 答复: [neutron][dvr][fip] fg device allocated private ip address

2016-08-03 Thread zhuna
Hi Carl,

IMO, if the upstream router has a route to the floating IP subnet, there is 
no need to assign an additional IP address to the router.

For example, there are 2 subnets in external network,
Subnet1: 10.0.0.0/24 (fg ip address)
Subnet2: 9.0.0.0/24 (fip)

Suppose we assign fip 9.0.0.10 to vm1, and the fg IP address is 10.0.0.10; 
then there are 2 IP addresses configured on fg: 9.0.0.10 and 10.0.0.10.
+---+
|  router ns   |
+---+
| fg (10.0.0.10, 9.0.0.10)
|
|
| router-if (10.0.0.1)
+---+
|  upstream router   | Internet
+---+

The default route of the router namespace is 10.0.0.1. Add a static route 
9.0.0.10/32 via 10.0.0.10 to the upstream router, or learn the route via a 
routing protocol (neutron-dynamic-routing).



From: Carl Baldwin [mailto:c...@ecbaldwin.net]
Sent: August 3, 2016 6:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip 
address



On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui wrote:
Hi John and Brian,
   Thanks for your information. If we get patch[1] and patch[2] merged, then 
fg can allocate a private IP address. After that, we need to consider the 
floating IP dataplane. In the current DVR implementation, fg is used for 
reachability testing for floating IPs. Now, with the subnet types BP, fg has 
a different subnet than the floating IP address. From the fg subnet's 
gateway point of view, to reach a floating IP it needs a route entry whose 
destination is the floating IP address and whose nexthop is the fg IP 
address. This route entry needs to be populated when a floating IP is 
created and removed when the floating IP is disassociated. Any comments?

The fg device will still do proxy arp for the floating ip to other devices on 
the external network. This will be part of our testing. The upstream router 
should still have an on-link route on the network to the floating ip subnet. 
IOW, you shouldn't replace the floating ip subnet with the private fg subnet on 
the upstream router. You should add the new subnet to the already existing ones 
and the router should have an additional IP address on the new subnet to be 
used as the gateway address for north-bound traffic.

Carl


Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-03 Thread Alex Xu
2016-08-03 2:12 GMT+08:00 Jay Pipes :

> On 08/02/2016 08:19 AM, Alex Xu wrote:
>
>> Chris had a thought about using ResourceClass to describe capabilities
>> with an infinite inventory. In the beginning, while brainstorming the idea
>> of Tags, Tan Lin had the same thought, but we said no very quickly, because
>> ResourceClass is really about quantitative stuff. But Chris makes a very
>> good point about simplifying the ResourceProvider model and the API.
>>
>> After rethinking those ideas, I like simplifying the ResourceProvider
>> model and the API, but I think in the opposite direction. A ResourceClass
>> with an infinite inventory is really hacky. The Placement API is simple,
>> but using it isn't simple for the user: they need to create a
>> ResourceClass, then create an infinite inventory. And ResourceClass
>> isn't manageable like Tags; look at the Tags API, there are many query
>> parameters.
>>
>> But look at ResourceClass and ResourceProviderTags: they are exactly the
>> same, two columns: an integer id and a string. ResourceClass just names
>> the quantitative stuff, so what we need is a thing used for 'naming'.
>> ResourceProviderTags is the higher abstraction; Tags are a generic way to
>> name something, so we could use Tag instead of ResourceClass. Then a user
>> could create an inventory with tags, and also create a ResourceProvider
>> with tags.
>>
>
> No, this sounds like actually way more complexity than is needed and will
> make the schema less explicit.


No, it simplifies the ResourceProvider model and the API; maybe the
complexity you point to is somewhere else.

Yes, it makes the schema less explicit. Using a higher-level abstraction,
we lose some specificity. That is the price we have to pay.

Anyway, let me put this in the alternatives section...


>
>
> But yes, there may still have problem isn't resolved, one of problem is
>> pointed out when I discuss with YingXin about how to distinguish the Tag
>> is about quantitative or qualitative. He think we need attribute for Tag
>> to distinguish it. But the attribute isn't thing I like, I prefer leave
>> that alone due to the user of placement API is admin-user.
>>
>> Any thought? or I'm too crazy at here...maybe I just need put this in
>> the alternative section in the spec...
>>
>
> A resource class is not a capability, though. It's an indication of a type
> of quantitative consumable that is exposed on a resource provider.
>
> A capability is a string that indicates a feature that a resource provider
> offers. A capability isn't "consumed".
>

Agreed on the definitions of resource class and capability. I think they are
pretty clear to us.

What I want to say is that the Placement engine really doesn't know what a
ResourceClass or a Capability is; it just needs an indication of something.
You can think of ResourceClass and Capability as sub-classes, with Tag as
the base class for both. And consider this case: a user can input 'cookie'
as the name of a ResourceClass, and the Placement engine won't say no,
because it really doesn't care about the meaning of a ResourceClass's name.
The Placement engine just needs a 'tag' to distinguish the ResourceClass and
the Capability.
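
To illustrate the argument: if the placement engine treats both a resource
class and a capability as nothing more than a named row it never validates,
then from its perspective they share one base abstraction. This is a
hypothetical sketch of that claim, not the real placement data model:

```python
# Hypothetical sketch of the argument above: to the placement engine a
# resource class and a capability are both just (id, name) rows, and the
# engine never validates what the name means. Not the real data model.

class Tag:
    _next_id = 1

    def __init__(self, name):
        self.id = Tag._next_id
        Tag._next_id += 1
        self.name = name

class ResourceClass(Tag):
    """Names a quantitative consumable (paired with an inventory)."""

class Capability(Tag):
    """Names a qualitative feature (no inventory, never consumed)."""

# The engine happily accepts a nonsense name in either role:
rc = ResourceClass('cookie')
cap = Capability('fish')
print(rc.name, cap.name)  # cookie fish
```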


> BTW, this is why I continue to think that using the term "tags" in the
> placement API is wrong. The placement API should clearly indicate that a
> resource provider has a set of capabilities. Tags, in Nova at least, are
> end-user-defined simple categorization strings that have no standardization
> and no cataloguing or collation to them.
>

Yes, but we don't have standard strings for all the capabilities. For shared
storage, this is set up by the deployer, not by OpenStack, so the
capabilities of shared storage won't be defined by OpenStack; they are
defined by the deployer.


>
> Capabilities are not end-user-defined -- they can be defined by an
> operator but they are not things that a normal end-user can simply create.
> And capabilities are specifically *not* for categorization purposes. They
> are an indication of a set of features that a resource provider exposes.
>

I totally see your point. But there is one question I can't answer: if we
call them capabilities instead of tags, the user is still free to input any
string. A user can input 'fish' as a capability, and the placement API won't
say no. Is this OK? Why is it OK when a user puts a non-capability into a
capability field? Actually, the same question applies to ResourceClass.
(Ah, this gets back to ResourceClass and Capability having the same base
class, Tags.)

I think this is the only question I can't get past; otherwise I will update
the spec :)


>
> This is why I think the placement API for capabilities should use the term
> "capabilities" and not "tags".
>
> Best,
> -jay
>

Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-03 Thread Alex Xu
2016-08-03 14:40 GMT+08:00 Alex Xu :

>
>
> 2016-08-02 22:09 GMT+08:00 Matt Riedemann :
>
>> On 8/2/2016 2:41 AM, Alex Xu wrote:
>>
>>> A little strange that we have two API endpoints: one is
>>> '/servers/{uuid}/os-interfaces', the other is
>>> '/servers/{uuid}/os-virtual-interfaces'.
>>>
>>> I prefer to keep os-attach-interface, because I think we should deprecate
>>> nova-network as well. We actually deprecated all the nova-network related
>>> APIs in 2.36, and os-attach-interface didn't support nova-network, so it
>>> is the right choice.
>>>
>>> So we can deprecate os-virtual-interfaces in Newton, and in Ocata we can
>>> correct the implementation to get the VIF info and tag.
>>> os-attach-interface actually accepts the server_id, and there is a check
>>> that ensures the port belongs to the server, so it shouldn't be very hard
>>> to get the VIF info and tag.
>>>
>>> And sorry that I missed this while coding the patches too... let me know
>>> if you need any help here.
>>>
>>>
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>>
>>>
>>>
>>>
>> Alex,
>>
>> os-interface will be deprecated, that's the APIs to show/list ports for a
>> given server.
>>
>> os-virtual-interfaces is not the same, and was never a proxy for neutron
>> since before 2.32 we never stored anything in the virtual_interfaces table
>> in the nova database for neutron, but now we do because that's where we
>> store the VIF tags.
>>
>> We have to keep os-attach-interface (attach/detach interface actions on a
>> server).
>>
>> Are you suggesting we drop os-virtual-interfaces and change the behavior
>> of os-interfaces to use the nova virtual_interfaces table rather than
>> proxying to neutron?
>>
>
> Yes, but I missed the point you pointed out below. The reason is that if
> we only deprecate the GET of os-interface, then when a user wants to add an
> interface, the user sends 'POST /servers/{uuid}/os-interface', but when the
> user wants to query the interfaces attached to the server, the user sends
> 'GET /servers/{uuid}/os-virtual-interfaces'. That means the user accesses
> one resource through two different API endpoints.
>
> Initially I thought we could use the virtual_interface table to reimplement
> the GET of os-interface. But as you pointed out, neutron ports won't show
> up if they were created before Newton. That means we would change the
> os-interface behaviour in an old microversion. Emm... I'm a little hesitant.
>
>
>>
>> Note that with os-virtual-interfaces even if we start showing VIFs for
>> neutron ports, any ports created before Newton won't be in there, which
>> might be a bit confusing.
>
>
> Even if we keep os-virtual-interfaces, we can't ensure this API works for
> all instances forever, because we can't ensure there won't be any old
> instances created before Newton in a user's cloud.
>
> So... one idea: we keep the GET of os-interface and deprecate
> os-virtual-interfaces in Newton. In Ocata, we use the virtual_interface
> table/object instead of proxying the neutron API, but we fall back to
> proxying the neutron API when the instance has no virtual_interface rows
> (detected by comparing with network_info_cache: the instance has a VIF in
> network_info_cache but no entry in the virtual_interface table).
>

And then remove the fallback code in the future, when we believe there are
no old instances left.


>
>
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>
>


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-03 Thread Alex Xu
2016-08-02 22:09 GMT+08:00 Matt Riedemann :

> On 8/2/2016 2:41 AM, Alex Xu wrote:
>
>> A little strange that we have two API endpoints: one is
>> '/servers/{uuid}/os-interfaces', the other is
>> '/servers/{uuid}/os-virtual-interfaces'.
>>
>> I prefer to keep os-attach-interface, because I think we should deprecate
>> nova-network as well. We actually deprecated all the nova-network related
>> APIs in 2.36, and os-attach-interface didn't support nova-network, so it
>> is the right choice.
>>
>> So we can deprecate os-virtual-interfaces in Newton, and in Ocata we can
>> correct the implementation to get the VIF info and tag.
>> os-attach-interface actually accepts the server_id, and there is a check
>> that ensures the port belongs to the server, so it shouldn't be very hard
>> to get the VIF info and tag.
>>
>> And sorry that I missed this while coding the patches too... let me know
>> if you need any help here.
>>
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>>
>>
>>
> Alex,
>
> os-interface will be deprecated, that's the APIs to show/list ports for a
> given server.
>
> os-virtual-interfaces is not the same, and was never a proxy for neutron
> since before 2.32 we never stored anything in the virtual_interfaces table
> in the nova database for neutron, but now we do because that's where we
> store the VIF tags.
>
> We have to keep os-attach-interface (attach/detach interface actions on a
> server).
>
> Are you suggesting we drop os-virtual-interfaces and change the behavior
> of os-interfaces to use the nova virtual_interfaces table rather than
> proxying to neutron?
>

Yes, but I missed the point you pointed out below. The reason is that if we
only deprecate the GET of os-interface, then when a user wants to add an
interface, the user sends 'POST /servers/{uuid}/os-interface', but when the
user wants to query the interfaces attached to the server, the user sends
'GET /servers/{uuid}/os-virtual-interfaces'. That means the user accesses
one resource through two different API endpoints.

Initially I thought we could use the virtual_interface table to reimplement
the GET of os-interface. But as you pointed out, neutron ports won't show up
if they were created before Newton. That means we would change the
os-interface behaviour in an old microversion. Emm... I'm a little hesitant.


>
> Note that with os-virtual-interfaces even if we start showing VIFs for
> neutron ports, any ports created before Newton won't be in there, which
> might be a bit confusing.


Even if we keep os-virtual-interfaces, we can't ensure this API works for
all instances forever, because we can't ensure there won't be any old
instances created before Newton in a user's cloud.

So... one idea: we keep the GET of os-interface and deprecate
os-virtual-interfaces in Newton. In Ocata, we use the virtual_interface
table/object instead of proxying the neutron API, but we fall back to
proxying the neutron API when the instance has no virtual_interface rows
(detected by comparing with network_info_cache: the instance has a VIF in
network_info_cache but no entry in the virtual_interface table).
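
The fallback check described above might look roughly like this. This is a
hypothetical sketch, not nova code: a VIF that appears in the instance's
network info cache but has no row in the virtual_interfaces table was
attached before Newton, so we would proxy to Neutron for it.

```python
# Hypothetical sketch of the fallback detection described above (not the
# actual nova code): if the instance's network info cache knows a VIF that
# the virtual_interfaces table does not, the port predates Newton and we
# must fall back to proxying the Neutron API.

def needs_neutron_fallback(cached_vif_ids, vif_table_ids):
    """True if any cached VIF is missing from the virtual_interfaces table."""
    return bool(set(cached_vif_ids) - set(vif_table_ids))

# Instance with one pre-Newton port: cache has it, table does not.
print(needs_neutron_fallback({'vif-1', 'vif-2'}, {'vif-2'}))  # True
# All ports recorded in the table: serve from nova's own data.
print(needs_neutron_fallback({'vif-2'}, {'vif-2'}))           # False
```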


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


[openstack-dev] [api] APIs for exposing resource capabilities

2016-08-03 Thread Ramakrishna, Deepti
Hi,



I would like to bring to your attention my spec [1] (already approved) on 
capability APIs, and I would like to get feedback from the API WG.



To summarize, I propose defining a capability API for every resource in a REST 
API where it makes sense and is needed. In the context of Cinder, we would have 
a capability API at the root resource level (GET 
/v3.x/{tenant_id}/capabilities) that would return, e.g., ["volume-backup", 
"other-capability"]. Similarly, we could have a capability API on the volume 
types resource (GET /v3.x/{tenant_id}/types/{volume_type_id}/capabilities) that 
would return all the features supported by a volume type and so on.
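
For example, a client such as Horizon might gate a UI widget on the returned
list. The response shape below follows the example above; the helper
functions are hypothetical, not part of the spec:

```python
# Sketch of how a client might consume the proposed capability API. The
# response shape follows the example in this mail; the helpers and the
# endpoint handling are hypothetical, not part of the spec.

def parse_capabilities(response_body):
    """Extract the capability list from a (hypothetical) API response."""
    return set(response_body.get('capabilities', []))

def widget_enabled(capabilities, required):
    """Enable a UI widget only if the backend advertises the feature."""
    return required in capabilities

# e.g. GET /v3.x/{tenant_id}/capabilities -> {"capabilities": [...]}
body = {'capabilities': ['volume-backup', 'other-capability']}
caps = parse_capabilities(body)
print(widget_enabled(caps, 'volume-backup'))   # True
print(widget_enabled(caps, 'volume-encrypt'))  # False
```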



I believe that this API pattern solves the problem of exposing capabilities 
and could be used for enabling/disabling UI widgets in Horizon and other 
clients. The pattern translates cleanly to all OpenStack projects, which 
face the same general problem.



Can you please look at the spec (and the implementation for Cinder [2]) and let 
me know if you have any feedback? I would be most interested in knowing your 
thoughts about cross-project suitability of this solution.



Thanks,

Deepti



[1] https://review.openstack.org/#/c/306930/

[2] https://review.openstack.org/#/c/350310/

