[openstack-dev] [meghdwar] use case for meghdwar API irc call agenda

2016-08-02 Thread prakash RAMCHANDRAN
Hi all,
Please join the meeting for API and code decisions.
Meeting: Wednesday 3rd August      (14:00-15:00 UTC)   [7AM-8AM PDT]

IRC: #openstack-meghdwar
We will follow up on the same points as last meeting, with updates. Since we 
had IRC issues last week, we plan to conduct this meeting instead of skipping 
it, and discuss the following agenda based on reviews and updates later in the 
week.
Summary of the last IRC meeting (July 27th) and updates for the Aug 3rd meeting:
1. The last meeting's agenda and actions taken or pending
We looked at the Cloudlet failure on Ubuntu 16.04 and filed a request with the 
upstream Cloudlet developers for Ubuntu 16.04 library support for Fabric (FUSE) 
support in OEC 
(http://forum.openedgecomputing.org/t/fab-module-support-for-ubuntu-16-04/92)
The summary from the previous week on Senlin was that it can add nodes to a 
cluster and scale them, but not distribute them. However, if we can set up a 
YAML profile and policies, we can distribute cloudlet nodes with a special 
meghdwar driver, and we may review that at the Ocata summit once we plan it 
through etherpad entries and engage with the Senlin team (TBD)
2. Meghdwar Gateway API discussions based on the use case
The focus is on which APIs are needed for the minimum use case of two cloudlets 
on two edges, each running one app, and how to move one of the apps from the 
source edge gateway to the destination edge gateway on compute nodes through 
those gateways. A hypothetical sketch follows below.
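To make that concrete, here is a purely hypothetical sketch of what the minimal 
gateway API calls could look like; the endpoint names, port, and payloads are 
invented for illustration and are not an agreed meghdwar design:

import requests

# Hypothetical gateway endpoints -- invented for illustration only.
SRC_GW = 'http://edge-gw-1:9090/v1'
DST_GW = 'http://edge-gw-2:9090/v1'

# List the apps running on the source cloudlet.
apps = requests.get(SRC_GW + '/cloudlet/apps').json()

# Ask the source gateway to hand the app off to the destination gateway.
requests.post(SRC_GW + '/cloudlet/apps/%s/migrate' % apps[0]['id'],
              json={'destination': DST_GW})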
We reviewed other catalog modules (application-catalog-ui, murano, and 
murano-agent, to be tested) on Rackspace3.
3. What other modules are needed in OpenStack 'meghdwar'?
   a. Cloudlet (existing)
   b. Cloudlet Client (existing) - to discuss the Binder option for two 
cloudlets instead of clusters
   c. Cloudlet Gateway Management (Cluster Management)
   d. Python Cloudlet Cluster Management - to review murano-agent and API 
differences
   e. Cloudlet Agent - to review application-catalog-ui with images, 
orchestration, and Murano with Applications and Components
   f. Cloudlet Horizon plug-in to support d & e as a GUI instead of a CLI
4. How do we go about prioritization?
Consider two cloudlet Binders instead of Clusters
5. Any other missing items?
Should the directory structure start from templates in OpenStack or from code upstream?
6. Plan for Barcelona Summit (TBD)
If any of our developers have comments on the topics here or others, feel free 
to add them at the end, and we will update the wiki as we continue our efforts 
to freeze the APIs and architecture to support Edge Services.

Thanks,
pramchan
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Saha
Great! This is much needed. We will be glad to help in any way possible.

Regards,
Amit

On Wed, Aug 3, 2016 at 12:02 AM, Ihar Hrachyshka 
wrote:

> Amit Kumar Saha (amisaha)  wrote:
>
> Hi,
>>
>> We would like to introduce the community to a new Python based project
>> called DON – Diagnosing OpenStack Networking. More details about the
>> project can be found at https://github.com/openstack/python-don.
>>
>> DON, written primarily in Python and available as a dashboard in
>> OpenStack Horizon (Liberty release), is a network analysis and diagnostic
>> system that provides a completely automated service for verifying and
>> diagnosing the networking functionality provided by OVS. The genesis of
>> this idea was presented at the Vancouver summit, May 2015. Hopefully the
>> community will find this project interesting and will give us valuable
>> feedback.
>>
>
> Amit,
>
> The neutron team is currently working on defining a new diagnostics API:
> https://review.openstack.org/#/c/308973/
>
> Please work with the community on API definition, and later, on backend
> specific implementation of desired checks.
>
> Ihar
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread zhi
I still have a question about the fg device with a private IP address.

In DVR mode, there is an external IP address on the fg device, because we need
it to figure out the default route.

If the fg device has a private IP address, how do we figure out the
default route in the fip namespace?

The default route is not reachable via the private IP address, is it?


Hopes for your reply. ;-)


2016-08-03 6:38 GMT+08:00 Carl Baldwin :

>
>
> On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:
>
>> hi john and brian,
>> Thanks for your information. If we get patch[1] and patch[2] merged, then
>> fg can allocate a private IP address. After that, we need to consider the
>> floating IP dataplane. In the current DVR implementation, fg is used for
>> reachability testing for floating IPs. Now, with the subnet types BP, fg
>> has a different subnet than the floating IP address. From the fg subnet
>> gateway's point of view, to reach a floating IP it needs a route entry
>> whose destination is the floating IP address and whose nexthop is the fg
>> IP address. This route entry needs to be populated when a floating IP is
>> created, and deleted when the floating IP is disassociated. Any comments?
>>
>
> The fg device will still do proxy arp for the floating ip to other devices
> on the external network. This will be part of our testing. The upstream
> router should still have an on-link route on the network to the floating ip
> subnet. IOW, you shouldn't replace the floating ip subnet with the private
> fg subnet on the upstream router. You should add the new subnet to the
> already existing ones and the router should have an additional IP address
> on the new subnet to be used as the gateway address for north-bound traffic.
>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to create a image using http img file

2016-08-02 Thread Fei Long Wang
As the error says, you need to set disk_format and container_format.
I haven't dug into the code, but I think you should try setting the
container_format and disk_format when you create the image, like this:

image = self.glance.images.create(name="myNewImage",
                                  container_format='bare',
                                  disk_format='raw')
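
Putting the two calls together, a minimal end-to-end sketch could look like
this (the endpoint, token, and image URL below are placeholders, not values
from this thread):

from glanceclient.v2.client import Client

# Placeholders -- substitute a real glance endpoint and auth token.
glance = Client('http://glance.example.com:9292', token='TOKEN')

# Both properties must be set before adding a location, otherwise
# glance rejects the PATCH with HTTP 400 (the error quoted below).
image = glance.images.create(name="myNewImage",
                             container_format='bare',
                             disk_format='qcow2')
glance.images.add_location(image.id,
                           'http://images.example.com/ubuntu1604.qcow2', {})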

On 03/08/16 13:27, kris...@itri.org.tw wrote:
>
>
>
> refer to: glance client Python API v2
>
> http://docs.openstack.org/developer/python-glanceclient/ref/v2/images.html
>
>  
>
> add_location(image_id, url, metadata)
>
> Add a new location entry to an image’s list of locations.
>
> It is an error to add a URL that is already present in the list of
> locations.
>
> Parameters:
>   image_id – ID of image to which the location is to be added.
>   url – URL of the location to add.
>   metadata – Metadata associated with the location.
>
> Returns:
>   The updated image
>
> ---
>
> #--source code--
>
> from glanceclient.v2.client import Client
>
> ……
>
> url = 'http:///ubuntu1604.qcow2'
>
> image = self.glance.images.create(name="myNewImage")
>
> self.glance.images.add_location(image.id, url, {})
>
> #--end
>
> ---
>
>  
>
> I am sure the images.create call works.
>
> I got image.id 'be416e4a-f266-4ad5-a62f-979242d23633'.
>
> I don't know which data should be assigned to metadata.
>
> Then I got:
>
>  
>
> self.glance.images.add_location(image.id, url, {})
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py",
> line 311, in add_location
>
> self._send_image_update_request(image_id, add_patch)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py",
> line 296, in _send_image_update_request
>
> self.http_client.patch(url, headers=hdrs, data=json.dumps(patch_body))
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 284, in patch
>
> return self._request('PATCH', url, **kwargs)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 267, in _request
>
> resp, body_iter = self._handle_response(resp)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 83, in _handle_response
>
> raise exc.from_response(resp, resp.content)
>
> HTTPBadRequest: 400 Bad Request: Properties disk_format,
> container_format must be set prior to saving data. (HTTP 400)
>
>  
>
> Best Regards,
>
> Kristen
>
>  
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內
> 容,並請銷毀此信件。 This email may contain confidential information.
> Please do not use or disclose it in any way and delete it if you are
> not the intended recipient.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to create a image using http img file

2016-08-02 Thread kristen


refer to: glance client Python API v2
http://docs.openstack.org/developer/python-glanceclient/ref/v2/images.html

add_location(image_id, url, metadata)
Add a new location entry to an image’s list of locations.
It is an error to add a URL that is already present in the list of locations.
Parameters:

· image_id – ID of image to which the location is to be added.
· url – URL of the location to add.
· metadata – Metadata associated with the location.

Returns:

The updated image

---
#--source code--
from glanceclient.v2.client import Client
……
url = 'http:///ubuntu1604.qcow2'
image = self.glance.images.create(name="myNewImage")
self.glance.images.add_location(image.id, url, {})
#--end
---

I am sure the images.create call works.
I got image.id 'be416e4a-f266-4ad5-a62f-979242d23633'.
I don't know which data should be assigned to metadata.
Then I got:

self.glance.images.add_location(image.id, url, {})
  File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 311, 
in add_location
self._send_image_update_request(image_id, add_patch)
  File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 296, 
in _send_image_update_request
self.http_client.patch(url, headers=hdrs, data=json.dumps(patch_body))
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
284, in patch
return self._request('PATCH', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
267, in _request
resp, body_iter = self._handle_response(resp)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 83, 
in _handle_response
raise exc.from_response(resp, resp.content)
HTTPBadRequest: 400 Bad Request: Properties disk_format, container_format must 
be set prior to saving data. (HTTP 400)

Best Regards,
Kristen



--
本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain 
confidential information. Please do not use or disclose it in any way and 
delete it if you are not the intended recipient.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Carl Baldwin
On Aug 2, 2016 6:52 PM, "Kevin Benton"  wrote:
>
> >It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it.
>
> The reason I thought it was relevant to bring up is because it's going to
be difficult to actually fix it. If any of the following lines fail, none
of the IPAM rollback code will be called:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1195-L1215

Agreed. I was just saying why I didn't bring it up in this context to start with.

> If we decide to just fix the exception handler inside of ipam itself for
rollbacks (which would be a quick fix), I would be okay with that but we
need to be clear that any driver depending on that alone for state
synchronization is in a very dangerous position of becoming inconsistent
(i.e. I want something to point people to if we get bug reports saying that
the delete call wasn't made when the port failed to create).

I think we could fix it in steps. I do think that both issues are worth
fixing and will pursue them both. I'll file bugs.

Carl

> On Tue, Aug 2, 2016 at 3:27 PM, Carl Baldwin  wrote:
>>
>> On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:
>>>
>>> >Given that it shares the session, it wouldn't have to do anything.
But, again, it wouldn't behave like an external driver.
>>>
>>> Why not? The only additional thing an external driver would be doing at
this step is calling an external system. Any local accounting in the DB
that driver would do would automatically be rolled back just like the
in-tree system.
>>
>> See below.
>>>
>>> Keep in mind that anything else can fail later in the transaction
outside of IPAM (e.g. ML2 driver precommit validation) and none of this
IPAM rollback logic will be called. Maybe the right answer is to get rid of
the IPAM rollback logic completely because it gives the wrong impression
that it is going to be called on all failures to commit when it's really
only called in failures inside of IPAM's module. Essentially every instance
of _safe_rollback in [1] is in an exception handler that isn't triggered if
there are exceptions anywhere in the core plugin after the initial base DB
calls.
>>
>> I noticed that there are failures which will not call rollback. I
started thinking about it when I wrote this note [1] (I did realize that
rollback might not even happen with the flush under the likes of galera and
that there are other failures that can happen outside of this and would
fail to rollback). I didn't bring it up here because I thought it was a
separate issue.
>>
>> It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it. If we eliminate the idea, it isn't
clear to me yet how drivers will handle leaked allocations (or
deallocations). We could talk about some alternatives. I've got a few
knocking around the back of my head but nothing that seems like a complete
picture yet.
>>
>> If one only cares about the in-tree driver which doesn't need an
explicit rollback call then one probably wouldn't care about having one at
all. This is the kind of thing I'd like to avoid by having the in-tree
driver work more like any other external driver. When the in-tree driver
works differently than others because it has a closer relationship to the
rest of the system, we quickly forget the needs of other drivers.
>>
>> Carl
>>
>> [1]
https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793
>>
>>> 1.
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>>>
>>>
>>> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:



 On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>
> >We still want the exception to rollback the entire API operation and
stopping it with a nested operation I think would mess that up.
>
> Well I think you would want to start a nested transaction, capture
the duplicate, call the ipam delete methods, then throw a retryrequest. The
exception will still trigger a rollback of the entire operation.


 This is kind of where I was headed when I decided to solicit some
feedback. It is a possibility that should still be considered.

>
> >Second, I've been throwing around the idea of not sharing the
session with the IPAM driver.
>
> If the IPAM driver does not have access to the session, it can't see
any of the uncommitted data. Would that be a problem? In particular,
doesn't the IPAM driver's DB table have foreign key constraints with the
data waiting to be committed in the other session? I'm hesitant to take
this approach because it means other (if the in-tree doesn't already) IPAM
drivers cannot have any relational integrity with the objects in 

Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Kevin Benton
>It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it.

The reason I thought it was relevant to bring up is because it's going to
be difficult to actually fix it. If any of the following lines fail, none
of the IPAM rollback code will be called:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1195-L1215

If we decide to just fix the exception handler inside of ipam itself for
rollbacks (which would be a quick fix), I would be okay with that but we
need to be clear that any driver depending on that alone for state
synchronization is in a very dangerous position of becoming inconsistent
(i.e. I want something to point people to if we get bug reports saying that
the delete call wasn't made when the port failed to create).
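
For the record, the nested-transaction approach mentioned earlier in the
thread amounts to roughly this sketch (_create_port_db and the ipam_driver
calls are hypothetical stand-ins, not actual Neutron code):

from oslo_db import exception as db_exc

def create_port_with_ipam_cleanup(context, port_data, ipam_driver):
    try:
        # Contain the failure so the outer transaction survives.
        with context.session.begin(nested=True):
            return _create_port_db(context, port_data)  # hypothetical
    except db_exc.DBDuplicateEntry as dup:
        # Undo external IPAM state before asking the API layer to
        # retry the whole operation.
        ipam_driver.deallocate(context, port_data)  # hypothetical
        raise db_exc.RetryRequest(dup)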

On Tue, Aug 2, 2016 at 3:27 PM, Carl Baldwin  wrote:

> On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:
>
>> >Given that it shares the session, it wouldn't have to do anything. But,
>> again, it wouldn't behave like an external driver.
>>
> Why not? The only additional thing an external driver would be doing at
>> this step is calling an external system. Any local accounting in the DB
>> that driver would do would automatically be rolled back just like the
>> in-tree system.
>>
> See below.
>
>> Keep in mind that anything else can fail later in the transaction outside
>> of IPAM (e.g. ML2 driver precommit validation) and none of this IPAM
>> rollback logic will be called. Maybe the right answer is to get rid of the
>> IPAM rollback logic completely because it gives the wrong impression that
>> it is going to be called on all failures to commit when it's really only
>> called in failures inside of IPAM's module. Essentially every instance of
>> _safe_rollback in [1] is in an exception handler that isn't triggered if
>> there are exceptions anywhere in the core plugin after the initial base DB
>> calls.
>>
> I noticed that there are failures which will not call rollback. I started
> thinking about it when I wrote this note [1] (I did realize that rollback
> might not even happen with the flush under the likes of galera and that
> there are other failures that can happen outside of this and would fail to
> rollback). I didn't bring it up here because I thought it was a separate
> issue.
>
> It might be the wrong impression, but it was already given and there are
> drivers which have been written under it. That's why I tend toward fixing
> rollback instead of eliminating it. If we eliminate the idea, it isn't
> clear to me yet how drivers will handle leaked allocations (or
> deallocations). We could talk about some alternatives. I've got a few
> knocking around the back of my head but nothing that seems like a complete
> picture yet.
>
> If one only cares about the in-tree driver which doesn't need an explicit
> rollback call then one probably wouldn't care about having one at all. This
> is the kind of thing I'd like to avoid by having the in-tree driver work
> more like any other external driver. When the in-tree driver works
> differently than others because it has a closer relationship to the rest of
> the system, we quickly forget the needs of other drivers.
>
> Carl
>
> [1]
> https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793
>
> 1.
>> https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>>
>> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:
>>
>>>
>>>
>>> On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>>>
 >We still want the exception to rollback the entire API operation and
 stopping it with a nested operation I think would mess that up.

 Well I think you would want to start a nested transaction, capture the
 duplicate, call the ipam delete methods, then throw a retryrequest. The
 exception will still trigger a rollback of the entire operation.

>>>
>>> This is kind of where I was headed when I decided to solicit some
>>> feedback. It is a possibility that should still be considered.
>>>
>>>
 >Second, I've been throwing around the idea of not sharing the session
 with the IPAM driver.

 If the IPAM driver does not have access to the session, it can't see
 any of the uncommitted data. Would that be a problem? In particular,
 doesn't the IPAM driver's DB table have foreign key constraints with the
 data waiting to be committed in the other session? I'm hesitant to take
 this approach because it means other (if the in-tree doesn't already) IPAM
 drivers cannot have any relational integrity with the objects in question.

>>>
>>> The in-tree driver doesn't have any FK constraints back to the neutron
>>> db schema for IPAM [1]. I don't think that would make sense since it is
>>> supposed to work like an external 

Re: [openstack-dev] [horizon] Angular panel enable/disable not overridable in local_settings

2016-08-02 Thread Richard Jones
On 3 August 2016 at 00:32, Rob Cresswell 
wrote:

> Hi all,
>
> So we seem to be adopting a pattern of using UPDATE_HORIZON_CONFIG in the
> enabled files to add a legacy/angular toggle to the settings. I don't like
> this, because in settings.py the enabled files are processed *after*
> local_settings.py imports, meaning the angular panel will always be
> enabled, and would require a local/enabled file change to disable it.
>
> My suggestion would be:
>
> - Remove current UPDATE_HORIZON_CONFIG change in the swift panel and
> images panel patch
> - Add equivalents ('angular') to the settings.py HORIZON_CONFIG dict, and
> then the 'legacy' version to the test settings.
>
> I think that should run UTs as expected, and allow the legacy/angular
> panel to be toggled via local_settings.
>
> Was there a reason we chose to use UPDATE_HORIZON_CONFIG, rather than just
> updating the dict in settings.py? I couldn't recall a reason, and the
> original patch ( https://review.openstack.org/#/c/293168/ ) doesn't seem
> to indicate why.
>

It was an attempt to keep the change more self-contained, and since
UPDATE_HORIZON_CONFIG existed, it seemed reasonable to use it. It meant
that all the configuration regarding the visibility of the panel was in one
place, and since it's expected that deployers edit enabled files, I guess
your concern stated above didn't come into it.

I'm ambivalent about the change you propose, would be OK going either way
:-)
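
For anyone following along, the suggestion amounts to roughly the following;
the 'swift_panel' key is illustrative only, and the exact override mechanics
depend on how local_settings is loaded:

# openstack_dashboard/settings.py -- define the default before
# local_settings is imported:
HORIZON_CONFIG = {
    # ... existing entries ...
    'swift_panel': 'angular',
}

# local_settings.py -- a deployer can then override the default:
HORIZON_CONFIG['swift_panel'] = 'legacy'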


 Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread Carl Baldwin
On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:

> hi john and brian,
> Thanks for your information. If we get patch[1] and patch[2] merged, then fg
> can allocate a private IP address. After that, we need to consider the
> floating IP dataplane. In the current DVR implementation, fg is used for
> reachability testing for floating IPs. Now, with the subnet types BP, fg has
> a different subnet than the floating IP address. From the fg subnet gateway's
> point of view, to reach a floating IP it needs a route entry whose
> destination is the floating IP address and whose nexthop is the fg IP
> address. This route entry needs to be populated when a floating IP is
> created, and deleted when the floating IP is disassociated. Any comments?
>

The fg device will still do proxy arp for the floating ip to other devices
on the external network. This will be part of our testing. The upstream
router should still have an on-link route on the network to the floating ip
subnet. IOW, you shouldn't replace the floating ip subnet with the private
fg subnet on the upstream router. You should add the new subnet to the
already existing ones and the router should have an additional IP address
on the new subnet to be used as the gateway address for north-bound traffic.

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Carl Baldwin
On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:

> >Given that it shares the session, it wouldn't have to do anything. But,
> again, it wouldn't behave like an external driver.
>
Why not? The only additional thing an external driver would be doing at
> this step is calling an external system. Any local accounting in the DB
> that driver would do would automatically be rolled back just like the
> in-tree system.
>
See below.

> Keep in mind that anything else can fail later in the transaction outside
> of IPAM (e.g. ML2 driver precommit validation) and none of this IPAM
> rollback logic will be called. Maybe the right answer is to get rid of the
> IPAM rollback logic completely because it gives the wrong impression that
> it is going to be called on all failures to commit when it's really only
> called in failures inside of IPAM's module. Essentially every instance of
> _safe_rollback in [1] is in an exception handler that isn't triggered if
> there are exceptions anywhere in the core plugin after the initial base DB
> calls.
>
I noticed that there are failures which will not call rollback. I started
thinking about it when I wrote this note [1] (I did realize that rollback
might not even happen with the flush under the likes of galera and that
there are other failures that can happen outside of this and would fail to
rollback). I didn't bring it up here because I thought it was a separate
issue.

It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it. If we eliminate the idea, it isn't
clear to me yet how drivers will handle leaked allocations (or
deallocations). We could talk about some alternatives. I've got a few
knocking around the back of my head but nothing that seems like a complete
picture yet.

If one only cares about the in-tree driver which doesn't need an explicit
rollback call then one probably wouldn't care about having one at all. This
is the kind of thing I'd like to avoid by having the in-tree driver work
more like any other external driver. When the in-tree driver works
differently than others because it has a closer relationship to the rest of
the system, we quickly forget the needs of other drivers.

Carl

[1]
https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793

1.
> https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>
> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:
>
>>
>>
>> On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>>
>>> >We still want the exception to rollback the entire API operation and
>>> stopping it with a nested operation I think would mess that up.
>>>
>>> Well I think you would want to start a nested transaction, capture the
>>> duplicate, call the ipam delete methods, then throw a retryrequest. The
>>> exception will still trigger a rollback of the entire operation.
>>>
>>
>> This is kind of where I was headed when I decided to solicit some
>> feedback. It is a possibility that should still be considered.
>>
>>
>>> >Second, I've been throwing around the idea of not sharing the session
>>> with the IPAM driver.
>>>
>>> If the IPAM driver does not have access to the session, it can't see any
>>> of the uncommitted data. Would that be a problem? In particular, doesn't
>>> the IPAM driver's DB table have foreign key constraints with the data
>>> waiting to be committed in the other session? I'm hesitant to take this
>>> approach because it means other (if the in-tree doesn't already) IPAM
>>> drivers cannot have any relational integrity with the objects in question.
>>>
>>
>> The in-tree driver doesn't have any FK constraints back to the neutron db
>> schema for IPAM [1]. I don't think that would make sense since it is
>> supposed to work like an external driver.
>>
>>
>>> A related question is, why does the in-tree IPAM driver have to do
>>> anything at all on a rollback? It currently does share a session which is
>>> automatically going to rollback all of it's DB operations for it. If it's
>>> because the driver cannot distinguish a delete call from a rollback and a
>>> normal delete, I suggest we change the delete call to pass a flag
>>> indicating that it's for a rollback. That would allow any DB-based drivers
>>> to just do nothing at this step.
>>>
>>
>> Given that it shares the session, it wouldn't have to do anything. But,
>> again, it wouldn't behave like an external driver. I'd like to not have
>> special drivers that behave differently than drivers that are really
>> external; we end up finding things that the in-tree driver does in our
>> testing that doesn't work right for other drivers.
>>
>> Drivers might need to access uncommitted data from the neutron DB. I
>> think even external drivers do this. However, there is a hard line between
>> the Neutron tables (even IPAM related ones) and the pluggable 

Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-02 Thread Steven Hardy
On Tue, Aug 02, 2016 at 09:36:45PM +0200, Christian Schwede wrote:
> Hello everyone,
> 
> I'd like to improve the Swift deployments done by TripleO. There are a
> few problems today when deployed with the current defaults:

Thanks for digging into this. I'm aware this has been something of a
known issue for some time, so it's great to see it getting addressed :)

Some comments inline;

> 1. Adding new nodes (or replacing existing nodes) is not possible,
> because the rings are built locally on each host and a new node doesn't
> know about the "history" of the rings. Therefore rings might become
> different on the nodes, and that results in an unusable state eventually.
> 
> 2. The rings are only using a single device, and it seems that this is
> just a directory and not a mountpoint with a real device. Therefore data
> is stored on the root device - even if you have 100TB disk space in the
> background. If not fixed manually your root device will run out of space
> eventually.
> 
> 3. Even if a real disk is mounted in /srv/node, replacing a faulty disk
> is much more troublesome. Normally you would simply unmount a disk, and
> then replace the disk sometime later. But because mount_check is set to
> False in the storage servers data will be written to the root device in
> the meantime; and when you finally mount the disk again, you can't
> simply cleanup.
> 
> 4. In general, it's not possible to change the cluster layout (using
> different zones/regions/partition power/device weight, slowly adding new
> devices to avoid 25% of the data being moved immediately when adding
> new nodes to a small cluster, ...). You could manually manage your
> rings, but they will eventually be overwritten when updating your overcloud.
> 
> 5. Missing erasure coding support (or storage policies in general)
> 
> This sounds bad, however most of the current issues can be fixed using
> customized templates and some tooling to create the rings in advance on
> the undercloud node.
> 
> The information about all the devices can be collected from the
> introspection data, and by using node placement the nodenames in the
> rings are known in advance if the nodes are not yet powered on. This
> ensures a consistent ring state, and an operator can modify the rings if
> needed and customize the cluster layout.
> 
> Using some customized templates we can already do the following:
> - disable ringbuilding on the nodes
> - create filesystems on the extra blockdevices
> - copy ringfiles from the undercloud, using pre-built rings
> - enable mount_check by default
> - (define storage policies if needed)
> 
> I started working on a POC using tripleo-quickstart, some custom
> templates and a small Python tool to build rings based on the
> introspection data:
> 
> https://github.com/cschwede/tripleo-swift-ring-tool
> 
> I'd like to get some feedback on the tool and templates.
> 
> - Does this make sense to you?

Yes, I think the basic workflow described should work, and it's good to see
that you're passing the ring data via swift as this is consistent with how
we already pass some data to nodes via our DeployArtifacts interface:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.yaml

Note however that there are no credentials to access the undercloud swift
on the nodes, so you'll need to pass a tempurl reference in (which is what
we do for deploy artifacts, obviously you will have credentials to create
the container & tempurl on the undercloud).

One slight concern I have is mandating the use of predictable placement -
it'd be nice to think about ways we might avoid that but the undercloud
centric approach seems OK for a first pass (in either case I think the
delivery via swift will be the same).

> - How (and where) could we integrate this upstream?

So I think the DeployArtefacts interface may work for this, and we have a
helper script that can upload data to swift:

https://github.com/openstack/tripleo-common/blob/master/scripts/upload-swift-artifacts

This basically pushes a tarball to swift, creates a tempurl, then creates a
file ($HOME/.tripleo/environments/deployment-artifacts.yaml) which is
automatically read by tripleoclient on deployment.
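
For reference, the tempurl step boils down to something like the following
sketch (the container name, key, and project id are made up):

from swiftclient.utils import generate_temp_url

# Assumes a temp-url key is already set on the undercloud swift
# account and the rings tarball has been uploaded.
project_id = 'abc123'        # placeholder
temp_url_key = 'secret'      # placeholder
path = '/v1/AUTH_%s/overcloud-artifacts/swift-rings.tar.gz' % project_id
tempurl = generate_temp_url(path, seconds=86400,
                            key=temp_url_key, method='GET')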

DeployArtifactURLs is already a list, but we'll need to test and confirm we
can pass both e.g swift ring data and updated puppet modules at the same
time.

The part that actually builds the rings on the undercloud will probably
need to be created as a custom mistral action:

https://github.com/openstack/tripleo-common/tree/master/tripleo_common/actions
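
i.e. something shaped roughly like the following sketch (the actual base
class and run() signature used by tripleo-common/mistral may differ):

from mistral.actions import base

class BuildSwiftRingsAction(base.Action):
    """Hypothetical action that builds rings from introspection data."""

    def __init__(self, introspection_data):
        self.introspection_data = introspection_data

    def run(self):
        # Build the rings here and upload the tarball to the
        # undercloud swift for the nodes to fetch.
        return {'status': 'rings built'}

    def test(self):
        # Dry-run hook required by the mistral Action interface.
        return {'status': 'test'}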

These are then driven as part of the deployment workflow (although the
final workflow where this will wire in hasn't yet landed):

https://review.openstack.org/#/c/298732/

> - Templates might be included in tripleo-heat-templates?

Yes, although by the look of it there may be a few template changes required.

If you want to remove the current ringbuilder puppet step completely, you
can simply remove 

Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Timofei Durakov
If an operator hasn't explicitly defined the live_migration_tunnelled param in
nova.conf, after the upgrade is done its default value will be set to False.
If an operator set this param explicitly, everything will be unchanged. To
announce this change I'm proposing to use release notes, as is usually done.
So there will be no upgrade impact related to this change.
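
For clarity, the option in question is a plain oslo.config boolean, so the
proposal only changes its default; a rough sketch (not nova's exact option
definition):

from oslo_config import cfg

live_migration_tunnelled = cfg.BoolOpt(
    'live_migration_tunnelled',
    default=False,  # the new default proposed in this thread
    help='Tunnel migration traffic over the libvirtd connection '
         '(corresponds to VIR_MIGRATE_TUNNELLED).')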

On Tue, Aug 2, 2016 at 10:51 PM, Chris Friesen 
wrote:

> On 08/02/2016 09:14 AM, Timofei Durakov wrote:
>
>> Hi,
>>
>> Taking into account everything above I'd prefer to see
>> live_migration_tunnelled(that corresponds to VIR_MIGRATE_TUNNELLED)
>> defaulted to
>> False. We just need to make a release note for this change, and on the
>> host
>> startup do LOG.warning to notify the operator that there are no tunnels
>> for
>> live-migration. For me, it will be enough. Then just put [1] on top of it.
>>
>
> How would upgrades work?  Presumably you'd have to get all the existing
> compute nodes able to handle un-tunnelled live migrations, then you'd
> live-migrate from the old compute nodes to the new ones using tunnelled
> migrations (where live migration is possible), but after that everything
> would be un-tunnelled?
>
> Chris
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Priorities for the rest of the cycle

2016-08-02 Thread Jim Rollenhagen
Hey all,

There are some deadlines coming up:

* non-client library freeze in 3 weeks
* client library freeze in 4 weeks
* final releases in 8 weeks

http://releases.openstack.org/newton/schedule.html

As usual, we don't do a hard feature freeze at the normal feature freeze
date (4 weeks from now); however, we do want to stop merging big or risky
things around that time. We also need to keep in mind that features
which need client support should obviously be done with enough time to
get the client side done before client freeze.

So with that in mind, I've recently shuffled things around a bit on our
trello board:
https://trello.com/b/ROTxmGIc/ironic-newton-priorities

There are now two lists for code patches: must-have and nice-to-have.
Both lists are in the order that I think things should be prioritized.
Please do stand up if you disagree on any of it. :)

Please do keep the priorities in mind when writing code or doing
reviews; we have a lot of things that are getting close, and I'd like to
finish many of these so that we don't need to carry them over to Ocata.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-02 Thread Matt Riedemann

On 8/2/2016 9:09 AM, Matt Riedemann wrote:

On 8/2/2016 2:41 AM, Alex Xu wrote:

It is a little strange that we have two API endpoints: one is
'/servers/{uuid}/os-interfaces', and another is
'/servers/{uuid}/os-virtual-interfaces'.

I prefer to keep os-attach-interface, because I think we should deprecate
nova-network also. Actually, we deprecated all the nova-network related
APIs in 2.36 as well, and os-attach-interface doesn't support
nova-network, so it is the right choice.

So we can deprecate os-virtual-interfaces in Newton, and in Ocata we can
correct the implementation to get the VIF info and tag.
os-attach-interface actually accepts the server_id, and there is a check
ensuring the port belongs to the server, so it shouldn't be very hard to
get the VIF info and tag.

And sorry that I missed that when coding the patches... let me know if you
need any help here.




Alex,

os-interface will be deprecated; those are the APIs to show/list ports for
a given server.

os-virtual-interfaces is not the same, and was never a proxy for neutron
since before 2.32 we never stored anything in the virtual_interfaces
table in the nova database for neutron, but now we do because that's
where we store the VIF tags.

We have to keep os-attach-interface (attach/detach interface actions on
a server).

Are you suggesting we drop os-virtual-interfaces and change the behavior
of os-interfaces to use the nova virtual_interfaces table rather than
proxying to neutron?

Note that with os-virtual-interfaces even if we start showing VIFs for
neutron ports, any ports created before Newton won't be in there, which
might be a bit confusing.



Here is the draft spec: https://review.openstack.org/#/c/350277/
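
For anyone skimming, the two endpoints under discussion map roughly to these
client calls (a sketch; manager names as in python-novaclient, availability
depends on the microversion in use):

from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0')  # placeholders
server = nova.servers.get('my-server-uuid')  # placeholder

# os-interface: the neutron port proxy that 2.36 deprecates.
ports = nova.servers.interface_list(server)

# os-virtual-interfaces: backed by nova's own virtual_interfaces
# table, where VIF tags are stored since 2.32.
vifs = nova.virtual_interfaces.list(server.id)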

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-08-02 Thread Julien Danjou
On Tue, Aug 02 2016, gordon chung wrote:

> so from very rough testing, we can choose to lower it to 3600 points, which
> offers better split opportunities with negligible improvement/degradation, or
> even further to 900 points with potentially a small write degradation (massive
> batching).

3600 points sounds nice. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Stepping Down.

2016-08-02 Thread Morgan Fainberg
Based upon my personal time demands, among a number of other reasons, I will
be stepping down from the Technical Committee. This is planned to take
effect with the next TC election, so that my seat will be open to be filled at
that time.

For those who elected me in, thank you.

Regards,
--Morgan Fainberg
IRC: notmorgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Matt Riedemann

On 8/2/2016 12:25 PM, Jim Rollenhagen wrote:

On Mon, Aug 01, 2016 at 09:15:46PM -0500, Matt Riedemann wrote:




* Placement API for resource providers

Jay's personal goal for Newton is for the resource tracker to be writing
inventory and allocation data via the placement API. We want to get the data
writing into the placement API in Newton so we can start using it in Ocata.

There are some spec amendments up for resource providers, at least one has
merged, and the initial placement API change merged today:

https://review.openstack.org/#/c/329149/

We talked about supporting dynamic resource classes for Ironic use cases
which is a stretch goal for Nova in Newton. Jay has a spec for that here:

https://review.openstack.org/#/c/312696/

There is a lot more detail in the etherpad and honestly Jay Pipes or Jim
Rollenhagen would be better to summarize what came out of this at the
midcycle and what's being worked on for dynamic resource classes right now.


I actually wrote a bit about this last week:
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099922.html

I'm not sure it covers everything, but it's the important pieces I got
from it.

// jim


We talked about a separate placement API database but decided this should be
optional to avoid forcing yet another nova database on deployers in a couple
of releases. This would be available for deployers to use to avoid some
future upgrade pain when the placement service is split out from Nova, but
if not configured it will default to the API database for the placement API.
There are a bunch more details and discussion on that in this thread that
Chris Dent started after the midcycle:

http://lists.openstack.org/pipermail/openstack-dev/2016-July/100302.html



--

Thanks,

Matt Riedemann





Perfect, thanks! I totally missed that.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Chris Friesen

On 08/02/2016 09:14 AM, Timofei Durakov wrote:

Hi,

Taking into account everything above I'd prefer to see
live_migration_tunnelled(that corresponds to VIR_MIGRATE_TUNNELLED) defaulted to
False. We just need to make a release note for this change, and on the host
startup do LOG.warning to notify the operator that there are no tunnels for
live-migration. For me, it will be enough. Then just put [1] on top of it.


How would upgrades work?  Presumably you'd have to get all the existing compute 
nodes able to handle un-tunnelled live migrations, then you'd live-migrate from 
the old compute nodes to the new ones using tunnelled migrations (where live 
migration is possible), but after that everything would be un-tunnelled?


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][neutron][ipam][networking-infoblox] networking-infoblox 2.0.2

2016-08-02 Thread John Belamaric
I am happy to announce that we have released version 2.0.2 of the Infoblox IPAM 
driver for OpenStack. This driver uses the pluggable IPAM framework delivered 
in Neutron's Liberty release, enabling the use of Infoblox for allocating 
subnets and IP addresses, and automatically creating DNS zones and records in 
Infoblox.

The driver is compatible with both Liberty and Mitaka.

This version contains important bug fixes and some feature enhancements and is 
recommended for all users.

More information and the code may be found at the networking-infoblox Launchpad 
page [1], or PyPi [2]. Bugs may also be reported on the same page.

[1] https://launchpad.net/networking-infoblox
[2] https://pypi.python.org/pypi/networking-infoblox

We are continuing to develop this driver to offer additional functionality, so 
please do provide any feedback you may have.

Thank you,

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] v2 API - request for feedback on "problem description"

2016-08-02 Thread Devananda van der Veen
Hi all,

Today's ironic-v2-api meeting was pretty empty, so I am posting a summary of our
subteam's activity here.

I have taken the midcycle notes about our API's current pain points / usability
gaps, and written them up into the format we would use for a spec's "Problem
Description", and posted them into an etherpad:

  https://etherpad.openstack.org/p/ironic-v2-api

As context for folks that may not have been in the midcycle discussion, my goal
in this work is to, by Barcelona, have a concrete outline of the problems with
our API -- and a proposal of how we might solve them -- written up as a design
specification.

I would like to invite feedback on this very early draft ahead of our weekly
meeting next week, and then discuss it for a portion of the meeting on Monday.


Thanks for your time,
Devananda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-02 Thread Christian Schwede
Hello everyone,

I'd like to improve the Swift deployments done by TripleO. There are a
few problems today when deployed with the current defaults:

1. Adding new nodes (or replacing existing nodes) is not possible,
because the rings are built locally on each host and a new node doesn't
know about the "history" of the rings. Therefore rings might become
different on the nodes, and that results in an unusable state eventually.

2. The rings are only using a single device, and it seems that this is
just a directory and not a mountpoint with a real device. Therefore data
is stored on the root device - even if you have 100TB disk space in the
background. If not fixed manually your root device will run out of space
eventually.

3. Even if a real disk is mounted in /srv/node, replacing a faulty disk
is much more troublesome. Normally you would simply unmount a disk, and
then replace the disk sometime later. But because mount_check is set to
False in the storage servers data will be written to the root device in
the meantime; and when you finally mount the disk again, you can't
simply cleanup.

4. In general, it's not possible to change the cluster layout (using
different zones/regions/partition power/device weight, slowly adding new
devices to avoid 25% of the data being moved immediately when adding
new nodes to a small cluster, ...). You could manually manage your
rings, but they will eventually be overwritten when updating your overcloud.

5. Missing erasure coding support (or storage policies in general)

This sounds bad, however most of the current issues can be fixed using
customized templates and some tooling to create the rings in advance on
the undercloud node.

The information about all the devices can be collected from the
introspection data, and by using node placement the nodenames in the
rings are known in advance if the nodes are not yet powered on. This
ensures a consistent ring state, and an operator can modify the rings if
needed and customize the cluster layout.

Using some customized templates we can already do the following:
- disable ringbuilding on the nodes
- create filesystems on the extra blockdevices
- copy ringfiles from the undercloud, using pre-built rings
- enable mount_check by default
- (define storage policies if needed)

I started working on a POC using tripleo-quickstart, some custom
templates and a small Python tool to build rings based on the
introspection data:

https://github.com/cschwede/tripleo-swift-ring-tool
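
For anyone who hasn't looked at ring internals, building rings
programmatically boils down to something like the following sketch (device
values are made up):

from swift.common.ring import RingBuilder

# part_power=10, replicas=3, min_part_hours=1 -- illustrative values.
rb = RingBuilder(10, 3, 1)
for i, ip in enumerate(('192.0.2.10', '192.0.2.11', '192.0.2.12')):
    rb.add_dev({'id': i, 'region': 1, 'zone': i, 'weight': 100.0,
                'ip': ip, 'port': 6000, 'device': 'sdb'})
rb.rebalance()
rb.save('object.builder')
rb.get_ring().save('object.ring.gz')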

I'd like to get some feedback on the tool and templates.

- Does this make sense to you?
- How (and where) could we integrate this upstream?
- Templates might be included in tripleo-heat-templates?

IMO the most important change would be to avoid overwriting rings on the
overcloud. There is a good chance to mess up your cluster if the
template to disable ring building isn't used and you already have
working rings in place. Same for the mount_check option.

I'm curious about your thoughts!

Thanks,

Christian


-- 
Christian Schwede
_

Red Hat GmbH
Technopark II, Haus C, Werner-von-Siemens-Ring 11-15, 85630 Grasbrunn,
Handelsregister: Amtsgericht Muenchen HRB 153243
Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham,
Charles Cachera

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-02 Thread Devananda van der Veen
On 08/01/2016 05:10 AM, Jim Rollenhagen wrote:
> Hey all,
> 
> Our nova patch for networking[0] got stuck for a bit, because Nova needs
> to know which network interface is in use for the node, in order to
> properly set up the port.
> 
> The code landed for network_interface follows the following order for
> what is actually used for the node:
> 1) node.network_interface, if that is None:
> 2) CONF.default_network_interface, if that isNone:
> 3) flat, if using neutron DHCP
> 4) noop, if not using neutron DHCP
> 
> The API will return None for node.network_interface in the API (GET
> /v1/nodes/uuid). This won't work for Nova, because Nova can't know what
> CONF.default_network_interface is.
> 
> I propose that if a network_interface is not sent in the node-create
> call, we write whatever the current default is, so that it is always set
> and not using an implicit value that could change.
> 
> For nodes that exist before the upgrade, we do a database migration to
> set network_interface to CONF.default_network_interface (or if that's
> None, set to flat/noop depending on the DHCP provider).

Sounds quite reasonable to me.
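
For clarity, the lookup order described above amounts to roughly this sketch
(not ironic's actual code; the dhcp_provider check stands in for "using
neutron DHCP"):

def effective_network_interface(node):
    # 1) the node's own setting wins
    if node.network_interface is not None:
        return node.network_interface
    # 2) then the configured default
    if CONF.default_network_interface is not None:
        return CONF.default_network_interface
    # 3)/4) fall back based on the DHCP provider in use
    return 'flat' if CONF.dhcp.dhcp_provider == 'neutron' else 'noop'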

--deva


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Pending removal of Scality volume driver

2016-08-02 Thread Sean McGinnis
Tomorrow marks the end of the one-week grace period. I just ran the
last-comment script and it still shows it has been 112 days since the Scality
CI last reported on a patch.

Please let me know the status of the CI.

On Thu, Jul 28, 2016 at 07:28:26AM -0500, Sean McGinnis wrote:
> On Thu, Jul 28, 2016 at 11:28:42AM +0200, Jordan Pittier wrote:
> > Hi Sean,
> > 
> > Thanks for the heads up.
> > 
> > On Wed, Jul 27, 2016 at 11:13 PM, Sean McGinnis 
> > wrote:
> > 
> > > The Cinder policy for driver CI requires that all volume drivers
> > > have a CI reporting on any new patchset. CI's may have some down
> > > time, but if they do not report within a two week period they are
> > > considered out of compliance with our policy.
> > >
> > > This is a notification that the Scality OpenStack CI is out of compliance.
> > > It has not reported since April 12th, 2016.
> > >
> > Our CI is still running for every patchset, just that it doesn't report
> > back to Gerrit. I'll see what I can do about it.
> 
> Great! I'll watch for it to start reporting again. Thanks for being
> responsive and looking into it.
> 
> > 
> > >
> > > The patch for driver removal has been posted here:
> > >
> > > https://review.openstack.org/348032/
> > 
> > That link is about the Tegile driver, not ours.
> 
> Oops, copy/paste error. Here is the correct one:
> 
> https://review.openstack.org/#/c/348042/
> 
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver

2016-08-02 Thread Sean McGinnis
Thanks Richard. The removal patch has been abandoned.

On Tue, Aug 02, 2016 at 03:20:41PM +, Hedlind, Richard wrote:
> Status update. Our CI is back up and has been passing tests successfully for 
> ~18h now. I will keep a close eye on it to make sure it stays up. Sorry about 
> the down time.
> 
> Richard
> 
> -Original Message-
> From: Hedlind, Richard [mailto:richard.hedl...@x-io.com] 
> Sent: Thursday, July 28, 2016 9:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver
> 
> Hi Sean,
> Thanks for the heads up. I have been busy on other projects and not been 
> involved in maintaining the CI. I will look into it and get it back up and 
> running.
> I will keep you posted on the progress.
> 
> Thanks,
> Richard
> 
> -Original Message-
> From: Sean McGinnis [mailto:sean.mcgin...@gmx.com] 
> Sent: Wednesday, July 27, 2016 2:26 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Cinder] Pending removal of X-IO volume driver
> 
> The Cinder policy for driver CI requires that all volume drivers have a CI 
> reporting on any new patchset. CI's may have some down time, but if they do 
> not report within a two week period they are considered out of compliance 
> with our policy.
> 
> This is a notification that the X-IO OpenStack CI is out of compliance.
> It has not reported since March 18th, 2016.
> 
> The patch for driver removal has been posted here:
> 
> https://review.openstack.org/348022
> 
> If this CI is not brought into compliance, the patch to remove the driver 
> will be approved one week from now.
> 
> Thanks,
> Sean McGinnis (smcginnis)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread Brian Haley

On 08/02/2016 08:15 AM, huangdenghui wrote:

hi John and Brian,
   Thanks for the information. Once patch [1] and patch [2] are merged, the fg
device can allocate a private IP address. After that, we need to consider the
floating IP data plane. In the current DVR implementation, the fg device is
used for reachability testing of floating IPs. Now, with the subnet types BP,
the fg device sits on a different subnet than the floating IP addresses, so
from the fg subnet gateway's point of view, reaching a floating IP requires a
route entry: the destination is the floating IP address and the fg device's IP
is the nexthop. This route entry needs to be populated when a floating IP is
created, and removed when the floating IP is disassociated. Any comments?


Yes, there could be a small change necessary to the l3-agent in order to route 
packets with DVR enabled, but I don't see it being a blocker.


-Brian
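
(To make the data-plane change concrete: the route entry described above
is just a /32 host route per floating IP via the fg device's private
address. A minimal sketch, with illustrative addresses and namespace
name; the real change would live inside the l3-agent, not a standalone
script:)

import subprocess

def add_fip_route(fip, fg_ip, namespace):
    # e.g. add_fip_route('203.0.113.10', '10.0.0.5', 'fip-<ext-net-id>')
    subprocess.check_call(
        ['ip', 'netns', 'exec', namespace,
         'ip', 'route', 'replace', fip + '/32', 'via', fg_ip])

def del_fip_route(fip, namespace):
    # Called when the floating IP is disassociated.
    subprocess.check_call(
        ['ip', 'netns', 'exec', namespace,
         'ip', 'route', 'del', fip + '/32'])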



On 2016-08-01 23:38 , John Davidge  Wrote:

Yes, as Brian says this will be covered by the follow-up patch to [2]
which I'm currently working on. Thanks for the question.

John


On 8/1/16, 3:17 PM, "Brian Haley"  wrote:

>On 07/31/2016 06:27 AM, huangdenghui wrote:
>> Hi
>> Now we have a spec named subnet service types, which provides the
>> capability of allowing different ports on a network to allocate IP
>> addresses from different subnets. In the current implementation of DVR,
>> the fip namespace is also distributed on every compute node, and both
>> the floating IPs and the fg devices' IPs are allocated from the external
>> network's subnets. In a large public cloud deployment, the current
>> implementation will consume a lot of public IP addresses. Do we need an
>> RFE to apply the subnet service types spec to resolve this problem? Any
>> thoughts?
>
>Hi,
>
>This is going to be covered in the existing RFE for subnet service types
>[1].
>We currently have two reviews in progress for CRUD [2] and CLI [3], the
>IPAM
>changes are next.
>
>-Brian
>
>[1] https://review.openstack.org/#/c/300207/
>[2] https://review.openstack.org/#/c/337851/
>[3] https://review.openstack.org/#/c/342976/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
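
(For readers following along, a hedged sketch of what creating such a
subnet could look like once the CRUD work in [2] above lands. The
'service_types' field and the device-owner string come from the spec
under review, so treat the exact names as provisional; credentials and
UUIDs are placeholders:)

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# A private subnet on the external network, reserved for fg devices so
# they stop consuming public addresses.
neutron.create_subnet({'subnet': {
    'network_id': 'EXT_NET_UUID',
    'ip_version': 4,
    'cidr': '10.0.0.0/24',
    'service_types': ['network:floatingip_agent_gateway'],
}})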



Rackspace Limited is a company registered in England & Wales (company
registered number 03897010) whose registered office is at 5 Millington Road,
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may
contain confidential or privileged information intended for the recipient.
Any dissemination, distribution or copying of the enclosed material is
prohibited. If you receive this transmission in error, please notify us
immediately by e-mail at ab...@rackspace.com and delete the original
message. Your cooperation is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-08-01 10:23:57 -0400:
> On 08/01/2016 08:33 AM, Sean Dague wrote:
> > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> >> One of the outcomes of the discussion at the leadership training
> >> session earlier this year was the idea that the TC should set some
> >> community-wide goals for accomplishing specific technical tasks to
> >> get the projects synced up and moving in the same direction.
> >>
> >> After several drafts via etherpad and input from other TC and SWG
> >> members, I've prepared the change for the governance repo [1] and
> >> am ready to open this discussion up to the broader community. Please
> >> read through the patch carefully, especially the "goals/index.rst"
> >> document which tries to lay out the expectations for what makes a
> >> good goal for this purpose and for how teams are meant to approach
> >> working on these goals.
> >>
> >> I've also prepared two patches proposing specific goals for Ocata
> >> [2][3].  I've tried to keep these suggested goals for the first
> >> iteration limited to "finish what we've started" type items, so
> >> they are small and straightforward enough to be able to be completed.
> >> That will let us experiment with the process of managing goals this
> >> time around, and set us up for discussions that may need to happen
> >> at the Ocata summit about implementation.
> >>
> >> For future cycles, we can iterate on making the goals "harder", and
> >> collecting suggestions for goals from the community during the forum
> >> discussions that will happen at summits starting in Boston.
> >>
> >> Doug
> >>
> >> [1] https://review.openstack.org/349068 describe a process for managing 
> >> community-wide goals
> >> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> >> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >> libraries"
> >
> > I like the direction this is headed. And I think for the test items, it
> > works pretty well.
> 
> I commented on the reviews, but I disagree with both the direction and 
> the proposed implementation of this.
> 
> In short, I think there's too much stick and not enough carrot. We 
> should create natural incentives for projects to achieve desired 
> alignment in certain areas, but placing mandates on project teams in a 
> diverse community like OpenStack is not useful.
> 
> The consequences of a project team *not* meeting these proposed mandates 
> has yet to be decided (and I made that point on the governance patch 
> review). But let's say that the consequences are that a project is 
> removed from the OpenStack big tent if they fail to complete these 
> "shared objectives".
> 
> What will we do when Swift decides that they have no intention of using 
> oslo.messaging or oslo.config because they can't stand fundamentals 
> about those libraries? Are we going to kick Swift, a founding project of 
> OpenStack, out of the OpenStack big tent?

Membership in the tent is the carrot, and ejection is the stick. The
big tent was an acknowledgement that giving out carrots makes everyone
stronger (all these well fed projects have led to a bigger supply of
carrots in general).

I think this proposal is an attempt to manage the ensuing chaos. We've
all seen carrot farmers abandon their farms, as well as duplicated effort
leading to a confusing experience for consumers of OpenStack's products.

I think there's room to build consensus around diversity in implementation
and even culture. We don't need to be a monolith. Our Swift development
community is bringing strong, powerful insight to the overall effort,
and strengthens the OpenStack brand considerably.  Certainly we can
support projects doing things their own way in some instances if they
so choose. What we don't want, however, is projects that operate in
relative isolation, without any cohesion, even loose cohesion, with the
rest.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominate Vladimir Khlyunev for fuel-qa core

2016-08-02 Thread Alexey Stepanov
+1

On Tue, Aug 2, 2016 at 2:56 PM, Artem Panchenko 
wrote:

> +1
>
> On Tue, Aug 2, 2016 at 1:52 PM, Dmitry Tyzhnenko 
> wrote:
>
>> +1
>>
>> On Tue, Aug 2, 2016 at 12:51 PM, Artur Svechnikov <
>> asvechni...@mirantis.com> wrote:
>>
>>> +1
>>>
>>> Best regards,
>>> Svechnikov Artur
>>>
>>> On Tue, Aug 2, 2016 at 12:40 PM, Andrey Sledzinskiy <
>>> asledzins...@mirantis.com> wrote:
>>>
 Hi,
 I'd like to nominate Vladimir Khlyunev for fuel-qa [0] core.

 Vladimir has become a valuable member of fuel-qa project in quite short
 period of time. His solid expertise and constant contribution gives me no
 choice but to nominate him for fuel-qa core.

 If anyone has any objections, speak now or forever hold your peace

 [0]
http://stackalytics.com/?company=mirantis&metric=all&module=fuel-qa&user_id=vkhlyunev

 --
 Thanks,
 Andrey Sledzinskiy
 QA Engineer,
 Mirantis, Kharkiv


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> WBR,
>> Dmitry T.
>> Fuel QA Engineer
>> http://www.mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Alexey Stepanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Ihar Hrachyshka

Amit Kumar Saha (amisaha)  wrote:


Hi,

We would like to introduce the community to a new Python based project  
called DON – Diagnosing OpenStack Networking. More details about the  
project can be found at https://github.com/openstack/python-don.


DON, written primarily in Python and available as a dashboard in  
OpenStack Horizon (Liberty release), is a network analysis and diagnostic  
system that provides a completely automated service for verifying and  
diagnosing the networking functionality provided by OVS. The genesis of  
this idea was presented at the Vancouver summit in May 2015. Hopefully the  
community will find this project interesting and will give us valuable  
feedback.


Amit,

The Neutron team is currently working on defining a new diagnostics API:  
https://review.openstack.org/#/c/308973/


Please work with the community on the API definition and, later, on the  
backend-specific implementation of the desired checks.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslotest 2.8.0 release (newton)

2016-08-02 Thread no-reply
We are excited to announce the release of:

oslotest 2.8.0: Oslo test framework

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.7.0..2.8.0


425d465 Import mock so that it works on Python 3.x
9c03983 Fix parameters of assertEqual are misplaced
9779729 Updated from global requirements
7bff0fc Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg   |  1 +
test-requirements.txt   |  2 +-
tox.ini |  2 +-
6 files changed, 14 insertions(+), 13 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index cecb61e..8282002 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.12.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] taskflow 2.4.0 release (newton)

2016-08-02 Thread no-reply
We are exuberant to announce the release of:

taskflow 2.4.0: Taskflow structured state management library.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 2.3.0..2.4.0


8eed13c Updated from global requirements
a643e92 Updated from global requirements
c290741 Remove white space between print and ()
7fa93b9 Updated from global requirements
18024a6 Add Python 3.5 classifier and venv
38c5812 Replace assertEqual(None, *) with assertIsNone in tests
35a9305 Ensure the fetching jobs does not fetch anything when in bad state
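
(For context on what the library does, a minimal sketch of a linear flow
run synchronously; engines.run() should return the named results:)

from taskflow import engines, task
from taskflow.patterns import linear_flow

class Greet(task.Task):
    def execute(self):
        return 'hello'

# One task; 'provides' names the task's result in the returned storage.
flow = linear_flow.Flow('demo').add(Greet(provides='greeting'))
results = engines.run(flow)
print(results['greeting'])  # -> hello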


Diffstat (except docs and test files)
-

requirements.txt  |  6 +-
setup.cfg |  1 +
taskflow/examples/retry_flow.py   |  6 +-
taskflow/jobs/backends/impl_zookeeper.py  | 74 ---
test-requirements.txt |  2 +-
tox.ini   |  1 +
10 files changed, 83 insertions(+), 23 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 73819aa..6e54d79 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17 +17 @@ enum34;python_version=='2.7' or python_version=='2.6' or 
python_version=='3.3' #
-futurist>=0.11.0 # Apache-2.0
+futurist!=0.15.0,>=0.11.0 # Apache-2.0
@@ -29 +29 @@ contextlib2>=0.4.0 # PSF License
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -44 +44 @@ automaton>=0.5.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 6606911..172b449 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -31 +31 @@ psycopg2>=2.5 # LGPL/ZPL
-sqlalchemy-utils # BSD License
+SQLAlchemy-Utils # BSD License



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslosphinx 4.7.0 release (newton)

2016-08-02 Thread no-reply
We are content to announce the release of:

oslosphinx 4.7.0: OpenStack Sphinx Extensions and Theme

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslosphinx

With package available at:

https://pypi.python.org/pypi/oslosphinx

Please report issues through launchpad:

http://bugs.launchpad.net/oslosphinx

For more details, please see below.

Changes in oslosphinx 4.6.0..4.7.0
--

3bcdfc6 Allow "Other Versions" section to be configurable
3fc15a5 fix other versions sidebar links


Diffstat (except docs and test files)
-

oslosphinx/__init__.py | 18 ++
oslosphinx/theme/openstack/layout.html |  7 ---
oslosphinx/theme/openstack/theme.conf  |  1 +
4 files changed, 30 insertions(+), 7 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] tooz 1.42.0 release (newton)

2016-08-02 Thread no-reply
We are mirthful to announce the release of:

tooz 1.42.0: Coordination library for distributed systems.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.41.0..1.42.0
--

e5c530a etcd: don't run heartbeat() concurrently
f296519 etcd: properly block when using 'wait'
0d2bd80 Share _get_random_uuid() among all tests
b322024 Updated from global requirements
c09b20b Clean leave group hooks when unwatching.
fcc7ea1 Fix the test test_unwatch_elected_as_leader.
324482f Updated from global requirements
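
(Several of the changes above touch heartbeats and the group watch
hooks; for orientation, a minimal usage sketch — the backend URL and
ids are placeholders:)

from tooz import coordination

coord = coordination.get_coordinator('etcd://127.0.0.1:2379', b'member-1')
coord.start(start_heart=True)   # heartbeats no longer run concurrently

group = b'workers'
coord.create_group(group).get()
coord.join_group(group).get()

def on_leave(event):
    print('%s left %s' % (event.member_id, event.group_id))

# The unwatch fix above ensures removing the hook really cleans it up.
coord.watch_leave_group(group, on_leave)
coord.unwatch_leave_group(group, on_leave)

coord.leave_group(group).get()
coord.stop()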


Diffstat (except docs and test files)
-

requirements.txt|  6 +--
tooz/coordination.py|  3 ++
tooz/drivers/etcd.py| 14 +-
tox.ini |  2 +-
11 files changed, 137 insertions(+), 67 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a9cecef..0513da4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -13,2 +13,2 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6' # 
BSD
-futurist>=0.11.0 # Apache-2.0
-oslo.utils>=3.14.0 # Apache-2.0
+futurist!=0.15.0,>=0.11.0 # Apache-2.0
+oslo.utils>=3.15.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] stevedore 1.17.0 release (newton)

2016-08-02 Thread no-reply
We are mirthful to announce the release of:

stevedore 1.17.0: Manage dynamic plugins for Python applications

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/stevedore

With package available at:

https://pypi.python.org/pypi/stevedore

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

For more details, please see below.

Changes in stevedore 1.16.0..1.17.0
---

0c6b78c Remove discover from test-requirements
4ec5022 make error reporting for extension loading quieter
76c14b1 Add Python 3.5 classifier and venv
c8a3964 Replace assertEquals() with assertEqual()
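
(The error-reporting change above concerns plugin load failures; the
callback hook it quiets down is used like this — the entry-point
namespace is hypothetical:)

from stevedore import extension

def on_load_failure(manager, entrypoint, exc):
    # With 1.17.0 the manager itself logs failures less noisily; this
    # callback lets an application decide how loudly to report them.
    print('failed to load %s: %s' % (entrypoint, exc))

mgr = extension.ExtensionManager(
    namespace='example.plugins',
    invoke_on_load=True,
    on_load_failure_callback=on_load_failure,
)
print([ext.name for ext in mgr])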


Diffstat (except docs and test files)
-

setup.cfg  |  1 +
stevedore/extension.py | 11 +--
test-requirements.txt  |  1 -
tox.ini|  2 +-
5 files changed, 12 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 977194a..6b0ae8d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +9,0 @@ testrepository>=0.0.18 # Apache-2.0/BSD
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.service 1.14.0 release (newton)

2016-08-02 Thread no-reply
We are happy to announce the release of:

oslo.service 1.14.0: oslo.service library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.13.0..1.14.0
--

0c3a29d Updated from global requirements
aac1d89 Fix parameters of assertEqual are misplaced


Diffstat (except docs and test files)
-

requirements.txt |  2 +-
6 files changed, 50 insertions(+), 50 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 83d22d4..8df3c2b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ monotonic>=0.6 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.vmware 2.12.0 release (newton)

2016-08-02 Thread no-reply
We are happy to announce the release of:

oslo.vmware 2.12.0: Oslo VMware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

Changes in oslo.vmware 2.11.0..2.12.0
-

37283c8 Updated from global requirements
0258fe0 Add http_method to download_stream_optimized_data
2f9af24 Refactor the image transfer
7c893ca Remove discover from test-requirements
170d6b7 Updated from global requirements


Diffstat (except docs and test files)
-

oslo_vmware/image_transfer.py| 424 ---
requirements.txt |   4 +-
test-requirements.txt|   1 -
4 files changed, 67 insertions(+), 669 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4f4f3a6..0637801 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ pbr>=1.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -12 +12 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b8c2e46..e9eac53 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +7,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.6.0 release (newton)

2016-08-02 Thread no-reply
We are excited to announce the release of:

oslo.messaging 5.6.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

5.6.0
^^^^^


New Features


   * Idle connections in the pool will now be expired and closed. The
     default TTL is 1200s. The following configuration parameters were
     added:


      * *conn_pool_ttl* (default 1200)

      * *conn_pool_min_size* (default 2)


Deprecation Notes
*****************

* The rabbitmq driver option "DEFAULT/max_retries" has been
  deprecated for removal (at a later point in the future) as it did
  not make logical sense for notifications and for RPC.

Changes in oslo.messaging 5.5.0..5.6.0
--

d946fb1 Fix pika functional tests
7576497 fix a typo in impl_rabbit.py
1288621 Updated from global requirements
317641c Fix syntax error on notification listener docs
a6f0aae Delete fanout queues on gracefully shutdown
564e423 Properly cleanup listener and driver on simulator exit
18c8bc9 [zmq] Let proxy serve on a static port numbers
162f6e9 Introduce TTL for idle connections
9ed95bb Fix parameters of assertEqual are misplaced
95d0402 Fix misstyping issue
d1cbca8 Updated from global requirements
73b3286 Updated from global requirements
ff9b4bb notify: add a CLI tool to manually send notifications
538c84b Add deprecated relnote for max_retries rabbit configuration option
ae1123e [zmq] Add py34 configuration for functional tests
07187f9 [zmq] Merge publishers
8e77865 Add Python 3.5 classifier and venv
689ba08 Replace assertEqual(None, *) with assertIsNone in tests
c6c70ab Updated from global requirements
66ded1f [zmq] Use json/msgpack instead of pickle
ac484f6 [zmq] Refactor publishers
96438a3 Add Python 3.4 functional tests for AMQP 1.0 driver
3514638 tests: allow to override the functionnal tests suite args
2b50ea5 [zmq] Additional configurations for f-tests
6967bd7 Remove discover from test-requirements
865bfec tests: rabbitmq failover tests
df9a009 Imported Translations from Zanata
6945323 Updated from global requirements
861a3ac Remove rabbitmq max_retries
61aae0f Config: no need to set default=None
dc1309a Improve the impl_rabbit logging


Diffstat (except docs and test files)
-

oslo_messaging/_cmd/zmq_proxy.py   |  34 +++-
oslo_messaging/_drivers/amqp1_driver/opts.py   |   2 -
oslo_messaging/_drivers/base.py|   7 +-
oslo_messaging/_drivers/impl_kafka.py  |  13 +-
oslo_messaging/_drivers/impl_rabbit.py |  82 ++---
oslo_messaging/_drivers/impl_zmq.py|   7 +-
.../pika_driver/pika_connection_factory.py |   8 +-
oslo_messaging/_drivers/pool.py|  65 ---
.../_drivers/zmq_driver/broker/__init__.py |   0
.../_drivers/zmq_driver/broker/zmq_proxy.py|  80 -
.../_drivers/zmq_driver/broker/zmq_queue_proxy.py  | 140 ---
.../publishers/dealer/zmq_dealer_call_publisher.py | 106 ---
.../publishers/dealer/zmq_dealer_publisher.py  |  89 -
.../publishers/dealer/zmq_dealer_publisher_base.py | 110 
.../dealer/zmq_dealer_publisher_direct.py  |  53 ++
.../dealer/zmq_dealer_publisher_proxy.py   | 199 +
.../client/publishers/dealer/zmq_reply_waiter.py   |  66 ---
.../client/publishers/zmq_pub_publisher.py |  71 
.../client/publishers/zmq_publisher_base.py| 158 +++-
.../client/publishers/zmq_push_publisher.py|  52 --
.../_drivers/zmq_driver/client/zmq_client.py   |  54 ++
.../_drivers/zmq_driver/client/zmq_client_base.py  |  25 ++-
.../_drivers/zmq_driver/client/zmq_receivers.py| 145 +++
.../_drivers/zmq_driver/client/zmq_response.py |  18 +-
.../zmq_driver/client/zmq_routing_table.py |  65 +++
.../_drivers/zmq_driver/client/zmq_senders.py  | 105 +++
.../zmq_driver/client/zmq_sockets_manager.py   |  96 ++
.../_drivers/zmq_driver/proxy/__init__.py  |   0
.../_drivers/zmq_driver/proxy/zmq_proxy.py |  98 ++
.../zmq_driver/proxy/zmq_publisher_proxy.py|  74 
.../_drivers/zmq_driver/proxy/zmq_queue_proxy.py   | 150 
.../server/consumers/zmq_dealer_consumer.py|  78 ++--
.../server/consumers/zmq_pull_consumer.py  |  69 ---
.../server/consumers/zmq_router_consumer.py|  66 +++
.../server/consumers/zmq_sub_consumer.py   |  26 +--
.../zmq_driver/server/zmq_incoming_message.py  |  51 +++---
oslo_messaging/_drivers/zmq_driver/zmq_names.py|  18 +-
oslo_messaging/_drivers/zmq_driver/zmq_socket.py   |  80 +++--

[openstack-dev] [new][oslo] oslo.versionedobjects 1.14.0 release (newton)

2016-08-02 Thread no-reply
We are satisfied to announce the release of:

oslo.versionedobjects 1.14.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.13.0..1.14.0
---

67ba3a0 Updated from global requirements
def295f Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
setup.cfg| 1 +
tox.ini  | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7813946..2e55edb 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.serialization 2.12.0 release (newton)

2016-08-02 Thread no-reply
We are jazzed to announce the release of:

oslo.serialization 2.12.0: Oslo Serialization library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.serialization

With package available at:

https://pypi.python.org/pypi/oslo.serialization

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

For more details, please see below.

Changes in oslo.serialization 2.11.0..2.12.0


afb5332 Updated from global requirements
5ae0432 Fix parameters of assertEqual are misplaced
aa0e480 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt  |  2 +-
setup.cfg |  1 +
tox.ini   |  2 +-
6 files changed, 71 insertions(+), 70 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 54901dd..33ada12 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13 +13 @@ msgpack-python>=0.4.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.reports 1.13.0 release (newton)

2016-08-02 Thread no-reply
We are pleased to announce the release of:

oslo.reports 1.13.0: oslo.reports library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.reports

With package available at:

https://pypi.python.org/pypi/oslo.reports

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.reports

For more details, please see below.

Changes in oslo.reports 1.12.0..1.13.0
--

329eb7c Updated from global requirements
0565210 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt | 2 +-
setup.cfg| 2 +-
tox.ini  | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 67f73ae..56641ec 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11 +11 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.privsep 1.11.0 release (newton)

2016-08-02 Thread no-reply
We are chuffed to announce the release of:

oslo.privsep 1.11.0: OpenStack library for privilege separation

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.10.0..1.11.0
--

108b201 Updated from global requirements
9510ac0 Drop python3.3 support in classifier


Diffstat (except docs and test files)
-

requirements.txt | 2 +-
setup.cfg| 1 -
2 files changed, 1 insertion(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1397b11..34304cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.db 4.9.0 release (newton)

2016-08-02 Thread no-reply
We are delighted to announce the release of:

oslo.db 4.9.0: Oslo Database library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

4.9.0
^^^^^

Upgrade Notes

* The allowed values for the "connection_debug" option are now
  restricted to the range between 0 and 100 (inclusive). Previously a
  number lower than 0 or higher than 100 could be given without error.
  But now, a "ConfigFileValueError" will be raised when the option
  value is outside this range.
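
(The bounds checking comes from oslo.config's typed options; a minimal
sketch of the same 0..100 range, independent of oslo.db:)

from oslo_config import cfg, types

# The option as oslo.db now constrains it.
opt = cfg.IntOpt('connection_debug', min=0, max=100, default=0)

bounded = types.Integer(min=0, max=100)
print(bounded('50'))        # -> 50
try:
    bounded('150')          # out of range: raises ValueError; coming
except ValueError as exc:   # from a config file this surfaces as
    print(exc)              # ConfigFileValueError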

Changes in oslo.db 4.8.0..4.9.0
---

6bdb99f Updated from global requirements
60b5b14 Memoize sys.exc_info() before attempting a savepoint rollback
1dc55b8 Updated from global requirements
a9ec13d Consolidate pifpaf commands into variables
a794790 Updated from global requirements
5da12af Updated from global requirements
abebffc Fixed unit tests running on Windows
20613c3 Remove discover from setup.cfg
7b76cdf Add dispose_pool() method to enginefacade context, factory
e0cc306 Set a min and max on the connection_debug option
d594f62 Set max pool size default to 5
72bab42 tox: add py35 envs for convenience


Diffstat (except docs and test files)
-

oslo_db/exception.py   |  7 ++-
oslo_db/options.py | 15 ++---
oslo_db/sqlalchemy/enginefacade.py | 14 +
oslo_db/sqlalchemy/engines.py  |  2 +-
oslo_db/sqlalchemy/exc_filters.py  | 46 ++-
.../connection_debug_min_max-bf6d53d49be7ca52.yaml |  7 +++
requirements.txt   |  6 +-
setup.cfg  |  4 +-
tox.ini| 37 
12 files changed, 209 insertions(+), 44 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index fbc015b..6befe2a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,2 +10,2 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
@@ -14 +14 @@ sqlalchemy-migrate>=0.9.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.config 3.14.0 release (newton)

2016-08-02 Thread no-reply
We are amped to announce the release of:

oslo.config 3.14.0: Oslo Configuration API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

3.14.0
^^^^^^

New Features

* Added minimum and maximum value limits to FloatOpt.
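
(A minimal sketch of the new FloatOpt bounds; the option name is
invented for illustration:)

from oslo_config import cfg

conf = cfg.ConfigOpts()
conf.register_opts([
    cfg.FloatOpt('allocation_ratio',
                 min=0.0, max=64.0, default=16.0),
])
conf([])                        # parse (no CLI args)
print(conf.allocation_ratio)    # -> 16.0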

Changes in oslo.config 3.13.0..3.14.0
-

b409253 disable lazy translation in sphinx extension
2fdc1cf Trivial: adjust import order to fit the import order guideline
c115719 Make error message more clear
15d3ab8 Add min and max values to Float type and Opt
61224ce Fix parameters of assertEqual are misplaced
55c2026 Updated from global requirements
8ed5f75 Add max_length to URIOpt
f48a897 Remove discover from test-requirements
6f2c57c update docs for sphinxconfiggen
9b05dc9 Add URIOpt to doced option types


Diffstat (except docs and test files)
-

oslo_config/cfg.py |  38 +-
oslo_config/sphinxext.py   |  10 +
oslo_config/types.py   | 110 +++--
.../notes/add-float-min-max-b1a2e16301c8435c.yaml  |   3 +
requirements.txt   |   2 +-
test-requirements.txt  |   1 -
12 files changed, 555 insertions(+), 297 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d1ac579..972e955 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ six>=1.9.0 # MIT
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index d444b33..a11d8f2 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +6,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.middleware 3.16.0 release (newton)

2016-08-02 Thread no-reply
We are grateful to announce the release of:

oslo.middleware 3.16.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.15.0..3.16.0
-

2697995 Updated from global requirements
3a18916 Updated from global requirements
0c00db8 Fix unit tests on Windows


Diffstat (except docs and test files)
-

oslo_middleware/healthcheck/disable_by_file.py | 4 +++-
requirements.txt   | 6 +++---
3 files changed, 11 insertions(+), 6 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index b80e5c6..fdbfbf4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
@@ -12 +12 @@ six>=1.9.0 # MIT
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.policy 1.13.0 release (newton)

2016-08-02 Thread no-reply
We are thrilled to announce the release of:

oslo.policy 1.13.0: Oslo Policy library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.policy

With package available at:

https://pypi.python.org/pypi/oslo.policy

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

For more details, please see below.

Changes in oslo.policy 1.12.0..1.13.0
-

10a81ba Updated from global requirements
cce967a Add note about not all APIs support policy enforcement by user_id
5273d2c Adds debug logging for policy file validation
09c5588 Fixed unit tests running on Windows
7204311 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

oslo_policy/policy.py| 16 ++
requirements.txt |  2 +-
setup.cfg|  1 +
tox.ini  |  2 +-
5 files changed, 56 insertions(+), 34 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 09e1525..a954394 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.log 3.13.0 release (newton)

2016-08-02 Thread no-reply
We are stoked to announce the release of:

oslo.log 3.13.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.12.0..3.13.0
--

656cef3 Updated from global requirements
92b6ff6 Fix parameters of assertEqual are misplaced
12de127 Updated from global requirements
8cb90f4 Remove discover from test-requirements
3ae0e87 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt|  4 ++--
setup.cfg   |  1 +
test-requirements.txt   |  1 -
tox.ini |  2 +-
5 files changed, 29 insertions(+), 29 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9bac65f..a288b0b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 4c9bc0c..673f993 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +6,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Brian Haley

On 08/01/2016 10:15 PM, Matt Riedemann wrote:

Starting from where I accidentally left off:





We also talked a bit about live migration with Neutron. There has been a fix up
for live migration + DVR since Mitaka:

https://review.openstack.org/#/c/275073

It's a bit of a hacky workaround but the longer term solution that we all want (
https://review.openstack.org/#/c/309416 ) is not going to be in Newton and will
need discussion at the Ocata summit in Barcelona (John Garbutt was going to work
with the Neutron team on preparing for the summit on that one). So we agreed to
go with Swami's DVR fix but it needs to be rebased (which still hasn't happened
since the midcycle).


I just pushed an update to the DVR live-migration patch, rebased to master, so 
feel free to review again.  Swami or I will answer any other comments as 
quickly as possible.


Thanks,

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.cache 1.12.0 release (newton)

2016-08-02 Thread no-reply
We are enthusiastic to announce the release of:

oslo.cache 1.12.0: Cache storage for Openstack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.11.0..1.12.0


3009e5f Updated from global requirements
e989c40 Add Python 3.5 classifier and venv
6e9091d Imported Translations from Zanata
30a7cf4 Updated from global requirements


Diffstat (except docs and test files)
-

oslo_cache/locale/en_GB/LC_MESSAGES/oslo_cache.po  | 53 ++
oslo_cache/locale/es/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/fr/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/it/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/ko_KR/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/pt_BR/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/ru/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/tr_TR/LC_MESSAGES/oslo_cache.po  | 14 +++---
oslo_cache/locale/zh_CN/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/zh_TW/LC_MESSAGES/oslo_cache.po  | 14 +++---
.../locale/en_GB/LC_MESSAGES/releasenotes.po   | 30 
requirements.txt   |  4 +-
setup.cfg  |  2 +-
tox.ini|  2 +-
14 files changed, 152 insertions(+), 51 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index b1defe2..2f4ebb9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ six>=1.9.0 # MIT
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.12.0 # Apache-2.0
@@ -10 +10 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.14.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-02 Thread Jay Pipes

On 08/02/2016 08:19 AM, Alex Xu wrote:

Chris had a thought about using ResourceClass to describe capabilities
with an infinite inventory. In the beginning, when we were brainstorming
the idea of Tags, Tan Lin had the same thought, but we said no very
quickly, because ResourceClass is really about quantitative stuff. But
Chris made a very good point about simplifying the ResourceProvider
model and the API.

After rethinking those ideas, I like simplifying the ResourceProvider
model and the API, but I think the direction is the opposite.
ResourceClass with an infinite inventory is really hacky. The Placement
API is simple, but using it that way isn't simple for the user: they
need to create a ResourceClass, then create an infinite inventory. And
ResourceClass isn't manageable like Tags; look at the Tags API, which
has many query parameters.

But if you look at ResourceClass and ResourceProviderTags, they are
exactly the same: two columns, one an integer id and the other a string.
ResourceClass is just for naming the quantitative stuff, so what we need
is something used for 'naming'. ResourceProviderTags is the higher
abstraction; a Tag is a generic thing for naming something, so we could
use Tags instead of ResourceClass. Then the user can create inventories
with tags, and also create ResourceProviders with tags.


No, this actually sounds like way more complexity than is needed, and 
it will make the schema less explicit.



But yes, there may still be unresolved problems. One of them came up
when I discussed with YingXin how to distinguish whether a Tag is
quantitative or qualitative. He thinks we need an attribute on Tag to
distinguish it. But an attribute isn't something I like; I'd prefer to
leave that alone, since the user of the placement API is an admin user.

Any thoughts? Or am I too crazy here... maybe I just need to put this in
the alternatives section of the spec...


A resource class is not a capability, though. It's an indication of a 
type of quantitative consumable that is exposed on a resource provider.


A capability is a string that indicates a feature that a resource 
provider offers. A capability isn't "consumed".


BTW, this is why I continue to think that using the term "tags" in the 
placement API is wrong. The placement API should clearly indicate that a 
resource provider has a set of capabilities. Tags, in Nova at least, are 
end-user-defined simple categorization strings that have no 
standardization and no cataloguing or collation to them.


Capabilities are not end-user-defined -- they can be defined by an 
operator but they are not things that a normal end-user can simply 
create. And capabilities are specifically *not* for categorization 
purposes. They are an indication of a set of features that a resource 
provider exposes.


This is why I think the placement API for capabilities should use the 
term "capabilities" and not "tags".


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.concurrency 3.13.0 release (newton)

2016-08-02 Thread no-reply
We are eager to announce the release of:

oslo.concurrency 3.13.0: Oslo Concurrency library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 3.12.0..3.13.0
--

2e8d548 Updated from global requirements
0c3a39e Fix parameters of assertEqual are misplaced
e9a0914 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt |  2 +-
setup.cfg|  1 +
tox.ini  |  6 +--
5 files changed, 44 insertions(+), 45 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 864afc1..81d8537 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] mox3 0.18.0 release (newton)

2016-08-02 Thread no-reply
We are glad to announce the release of:

mox3 0.18.0: Mock object framework for Python

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/mox3

With package available at:

https://pypi.python.org/pypi/mox3

Please report issues through launchpad:

http://bugs.launchpad.net/python-mox3

For more details, please see below.

Changes in mox3 0.17.0..0.18.0
--

22c02dc Remove discover from test-requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 1 -
1 file changed, 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index df05b72..b24a31f 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +11,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] debtcollector 1.7.0 release (newton)

2016-08-02 Thread no-reply
We are glowing to announce the release of:

debtcollector 1.7.0: A collection of Python deprecation patterns and
strategies that help you collect your technical debt in a non-
destructive manner.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/debtcollector

With package available at:

https://pypi.python.org/pypi/debtcollector

Please report issues through launchpad:

http://bugs.launchpad.net/debtcollector

For more details, please see below.

Changes in debtcollector 1.6.0..1.7.0
-

279bbca Remove discover from test-requirements
18f7de4 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg | 2 +-
test-requirements.txt | 1 -
tox.ini   | 6 +-
3 files changed, 6 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 16f75c6..0c903ff 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +7,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] futurist 0.17.0 release (newton)

2016-08-02 Thread no-reply
We are tickled pink to announce the release of:

futurist 0.17.0: Useful additions to futures, from the future.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/futurist

With package available at:

https://pypi.python.org/pypi/futurist

Please report issues through launchpad:

http://bugs.launchpad.net/futurist

For more details, please see below.

Changes in futurist 0.16.0..0.17.0
--

2a0d270 Remove discover from test-requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 1 -
1 file changed, 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 68ed7a0..b18f71d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +11,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] automaton 1.4.0 release (newton)

2016-08-02 Thread no-reply
We are joyful to announce the release of:

automaton 1.4.0: Friendly state machines for python.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/automaton

With package available at:

https://pypi.python.org/pypi/automaton

Please report issues through launchpad:

http://bugs.launchpad.net/automaton

For more details, please see below.

Changes in automaton 1.3.0..1.4.0
-

dab7331 Remove discover from test-requirements
e87dc55 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg | 1 +
test-requirements.txt | 1 -
tox.ini   | 2 +-
3 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 2c695bc..958c5dd 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9 +8,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-08-02 Thread gordon chung


On 29/07/16 03:29 PM, gordon chung wrote:

i'm using Ceph. but i should mention i also only have 1 thread enabled
because python+threading is... yeah.

i'll give it a try again with threads enabled.


I tried this again with 16 threads. As expected, Python (2.7.x) threads do jack 
all.

I also tried lowering the points per object to 900 (~8 KB max). This performed 
~4% worse for reads/writes. I should probably add a disclaimer that I'm batching 
75K points/metric at once, which is probably not normal.

So from very rough testing, we can either lower it to 3600 points, which 
offers better split opportunities with negligible improvement/degradation, or 
go even lower to 900 points, with a potentially small write degradation (under 
massive batching).
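
(Back-of-the-envelope, using the ~8 KB per 900 points figure above as
the only assumption:)

# ~8 KB for 900 points implies roughly 9 bytes/point.
BYTES_PER_POINT = 8 * 1024 / 900.0

for points in (900, 3600):
    print('%4d points -> ~%2.0f KB max per object'
          % (points, points * BYTES_PER_POINT / 1024.0))
# 900 -> ~8 KB, 3600 -> ~32 KB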


--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Jim Rollenhagen
On Mon, Aug 01, 2016 at 09:15:46PM -0500, Matt Riedemann wrote:
> 
> 
> 
> * Placement API for resource providers
> 
> Jay's personal goal for Newton is for the resource tracker to be writing
> inventory and allocation data via the placement API. We want to get the data
> writing into the placement API in Newton so we can start using it in Ocata.
> 
> There are some spec amendments up for resource providers, at least one has
> merged, and the initial placement API change merged today:
> 
> https://review.openstack.org/#/c/329149/
> 
> We talked about supporting dynamic resource classes for Ironic use cases
> which is a stretch goal for Nova in Newton. Jay has a spec for that here:
> 
> https://review.openstack.org/#/c/312696/
> 
> There is a lot more detail in the etherpad and honestly Jay Pipes or Jim
> Rollenhagen would be better to summarize what came out of this at the
> midcycle and what's being worked on for dynamic resource classes right now.

I actually wrote a bit about this last week:
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099922.html

I'm not sure it covers everything, but it's the important pieces I got
from it.

// jim
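
(For readers following along: this is roughly what "writing inventory via the
placement API" amounts to at the REST level. The endpoint paths and payload
shapes below are assumptions based on the placement API design under review,
not the merged Newton code.)

    # Illustrative only -- endpoints and payloads are assumed from the
    # placement API design, not taken from the merged implementation.
    import requests

    PLACEMENT = 'http://controller:8778'              # hypothetical endpoint
    HEADERS = {'x-auth-token': 'ADMIN_TOKEN'}         # hypothetical token
    rp_uuid = '4cae2ef8-30eb-4571-80c3-3289e86bd65c'  # made-up provider UUID

    # Register the compute node as a resource provider.
    requests.post(PLACEMENT + '/resource_providers',
                  json={'uuid': rp_uuid, 'name': 'compute-1'},
                  headers=HEADERS)

    # Report its inventory so the scheduler can allocate against it.
    requests.put(PLACEMENT + '/resource_providers/%s/inventories' % rp_uuid,
                 json={'resource_provider_generation': 0,
                       'inventories': {'VCPU': {'total': 16},
                                       'MEMORY_MB': {'total': 65536},
                                       'DISK_GB': {'total': 1024}}},
                 headers=HEADERS)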

> We talked about a separate placement API database but decided this should be
> optional to avoid forcing yet another nova database on deployers in a couple
> of releases. This would be available for deployers to use to avoid some
> future upgrade pain when the placement service is split out from Nova, but
> if not configured it will default to the API database for the placement API.
> There are a bunch more details and discussion on that in this thread that
> Chris Dent started after the midcycle:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-July/100302.html
> 
> 
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-02 16:30:06 +:
> On 02/08/2016 16:37, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
> >> On 02/08/2016 14:37, Doug Hellmann wrote:
> >>> Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
>  On 29/07/2016 21:59, Doug Hellmann wrote:
> > One of the outcomes of the discussion at the leadership training
> > session earlier this year was the idea that the TC should set some
> > community-wide goals for accomplishing specific technical tasks to
> > get the projects synced up and moving in the same direction.
> >
> > After several drafts via etherpad and input from other TC and SWG
> > members, I've prepared the change for the governance repo [1] and
> > am ready to open this discussion up to the broader community. Please
> > read through the patch carefully, especially the "goals/index.rst"
> > document which tries to lay out the expectations for what makes a
> > good goal for this purpose and for how teams are meant to approach
> > working on these goals.
> >
> > I've also prepared two patches proposing specific goals for Ocata
> > [2][3].  I've tried to keep these suggested goals for the first
> > iteration limited to "finish what we've started" type items, so
> > they are small and straightforward enough to be able to be completed.
> > That will let us experiment with the process of managing goals this
> > time around, and set us up for discussions that may need to happen
> > at the Ocata summit about implementation.
> >
> > For future cycles, we can iterate on making the goals "harder", and
> > collecting suggestions for goals from the community during the forum
> > discussions that will happen at summits starting in Boston.
> >
> > Doug
> >
> > [1] https://review.openstack.org/349068 describe a process for managing 
> > community-wide goals
> > [2] https://review.openstack.org/349069 add ocata goal "support python 
> > 3.5"
> > [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> > libraries"
> 
>  I am confused. When I proposed my patch for doing something very similar
>  (Equal Chances for all projects is basically a multiple release goal) I
>  got the following rebuttals:
> 
>   > "and it would be
>   > a mistake to try to require that because the issue is almost always
>   > lack of resources and not lack of desire. Volunteers to contribute
>   > to the work that's needed will do more to help than a
>   > one-size-fits-all policy."
> 
>   > This isn't a thing that gets fixed with policy. It gets fixed with
>   > people.
> 
>   > I am reading through the thread, and it puzzles me that I see a lot
>   > of right words about goals but not enough hints on who is going to
>   > implement that.
> 
>   > I think the right solutions here are human ones. Talk with people.
>   > Figure out how you can help lighten their load so that they have
>   > breathing space. I think hiding behind policy becomes a way to make
>   > us more separate rather than engaging folks more directly.
> 
>   > Coming at this with top down declarations of how things should work
>   > not only ignores reality of the ecosystem and the current state of
>   > these projects, but is also going about things backwards.
> 
>   > This entirely ignores that these are all open source projects,
>   > which are often very sparsely contributed to. If you have an issue
>   > with a project or the interface it provides, then contribute to it.
>   > Don't make grandiose resolutions trying to force things into what you
>   > see as an ideal state, instead contribute to help fix the problems
>   > you've identified.
> 
>  And yet, we are currently suggesting a system that will actively (in an
>  undefined way) penalise projects who do not comply with a different set
>  of proposals, done in a top down manner.
> 
>  I may be missing the point, but the two proposals seem to have
>  similarities - what is the difference?
> 
>  When I see comments like:
> 
>   > Project teams who join the big tent sign up to the rights and
>   > responsibilities that go along with membership. Those responsibilities
>   > include taking some direction from the TC, including regarding work
>   > they may not give the same priority as the broader community.
> 
>  It sounds like top down is OK, but previous ML thread / TC feedback
>  has been different.
> >>>
> >>> One difference is that these goals are not things like "the
> >>> documentation team must include every service project in the
> >>> installation guide" but rather would be phrased like "every project
> >>> must provide an installation guide". The work is distributed to the
> >>> vertical teams, and not focused in the horizontal teams.

[openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Kumar Saha (amisaha)
Hi,

We would like to introduce the community to a new Python-based project called 
DON - Diagnosing OpenStack Networking. More details about the project can be 
found at https://github.com/openstack/python-don.

DON, written primarily in Python, and available as a dashboard in OpenStack 
Horizon (Liberty release), is a network analysis and diagnostic system and 
provides a completely automated service for verifying and diagnosing the 
networking functionality provided by OVS. The genesis of this idea was 
presented at the Vancouver summit, May 2015. Hopefully the community will find 
this project interesting and will give us valuable feedback.

Regards,
Amit
Cisco Bangalore

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Migration APIs 2-phase vs. 1-phase

2016-08-02 Thread Ben Swartzlander
It occurred to me that if we write the 2-phase migration APIs correctly, 
then it will be fairly trivial to implement 1-phase migration outside 
Manila (in the client, or even higher up).


I would like to propose that we change the migration API to actually 
work that way, because I think it will have positive impact on the 
driver interface and it will make the internals for migration a lot 
simpler. Specifically, I'm proposing that the Manila REST API only 
supports starting/completing migrations, and querying the status of an 
ongoing migration -- there should be no automatic looping inside Manila 
to perform a start+complete in 1 shot.
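
To illustrate, a 1-phase flow layered on such an API could be as small as the
sketch below (the client method and state names are hypothetical, since the
API itself is exactly what's under discussion):

    import time

    def migrate_share_one_phase(client, share_id, dest_host, poll=5):
        # Hypothetical client calls mirroring the proposed REST API:
        # start the migration, poll its status, then complete it.
        client.migration_start(share_id, dest_host)
        while True:
            progress = client.migration_get_progress(share_id)
            if progress['state'] == 'ready_to_complete':  # assumed state name
                break
            time.sleep(poll)
        client.migration_complete(share_id)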


Additionally I think it makes sense to make all the migration driver 
interfaces more asynchronous, but that change is less urgent. Getting 
the driver interface exactly right is less important than getting the 
REST API right in Newton. Nevertheless, I think we should aim for a 
driver interface that expects all the migration calls to return quickly 
and for status polling to occur automatically on long running 
operations. This will enable much better behavior when restarting 
services during a migration.


I'm going to put a topic on the meeting agenda for Thursday to discuss 
this in more detail, but if anyone has other feelings please chime in here.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [networking-sfc] Flow classifier conflict logic

2016-08-02 Thread Farhad Sunavala
Please send the tenant ids of all six neutron ports.
From admin: neutron port-show  | grep tenant_id
Thanks, Farhad.

On Monday, August 1, 2016 7:44 AM, Artem Plakunov  wrote:

Thanks.

You said though that the classifier must be unique within a tenant. I tried
creating chains in two different tenants by different users without any RBAC
rules. So there are two tenants, each has 1 network, 2 vms (source, service)
and an admin user. I used different openrc configs for each user yet still get
the same conflict.

Info about the test is in the attachment.

On 31.07.2016 5:25, Farhad Sunavala wrote:

Yes, this was intentionally done. The logical-source-port is important only
at the point of classification. All successive classifications rely only on
the 5-tuple and MPLS label (chain ID).

Consider an extension of the scenario you mention below.

Sources: (similar to your case) a, b
Port-pairs: (added ppe and ppf) ppc, ppd, ppe, ppf
Port-pair-groups: (added ppge and ppgf) ppgc, ppgd, ppge, ppgf
Flow-classifiers: fc1: logical-source-port of a && tcp; fc2:
logical-source-port of b && tcp
Port-chains: pc1: fc1 && (ppgc + ppge); pc2: fc2 && (ppgd + ppgc + ppgf)

The flow-classifier has logical-src-port and protocol=tcp. The
logical-src-port has no relevance in the middle of the chain. In the middle
of the chain, the only relevant flow-classifier is protocol=tcp. If we allow
it, we cannot distinguish TCP traffic coming out of ppgc (and subsequently
ppc) as to whether to mark it with the label for pc1 or the label for pc2.
In other words, within a tenant the flow-classifiers need to be unique wrt
the 5-tuple.

thanks, Farhad.
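
(To make that example concrete, the conflicting pair can be reproduced with
the networking-sfc CLI roughly as follows; port IDs are placeholders and the
group names follow the scenario above:)

    neutron flow-classifier-create --protocol tcp \
        --logical-source-port <port-of-a> fc1
    neutron flow-classifier-create --protocol tcp \
        --logical-source-port <port-of-b> fc2
    neutron port-chain-create --port-pair-group ppgc \
        --port-pair-group ppge --flow-classifier fc1 pc1
    # Rejected: mid-chain, fc2 reduces to protocol=tcp, which is
    # indistinguishable from fc1 on traffic leaving ppgc.
    neutron port-chain-create --port-pair-group ppgd --port-pair-group ppgc \
        --port-pair-group ppgf --flow-classifier fc2 pc2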
 Date: Fri, 29 Jul 2016 18:01:05 +0300
 From: Artem Plakunov 
 To: openst...@lists.openstack.org
 Subject: [Openstack] [networking-sfc] Flow classifier conflict logic
 Message-ID: <579b6fb1.3030...@lvk.cs.msu.su>
 Content-Type: text/plain; charset="utf-8"; Format="flowed"
 
 Hello.
 We have two deployments with networking-sfc:
 mirantis 8.0 (liberty) and mirantis 9.0 (mitaka).
 
 I noticed a difference in how flow classifiers conflict with each other 
 which I do not understand. I'm not sure if it is a bug or not.
 
 I did the following on mitaka:
 1. Create tenant 1 and network 1
 2. Launch vms A and B in network 1
 3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C 
 in network 1
 4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D 
 in network 1
 5. Setup sfc:
     create two port pairs for vm C and vm D with a bidirectional port
     create two port pair groups with these pairs (one pair in one group)
     create flow classifier 1: logical-source-port = vm A port, protocol 
 = tcp
     create flow classifier 2: logical-source-port = vm B port, protocol 
 = tcp
     create chain with group 1 and classifier 1
     create chain with group 2 and classifier 2 - this step gives the 
 following error:
 
 Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow 
 Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 Neutron server returns request_ids: 
 ['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']
 
 The only thing neutron logs have is this from server.log:
 2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource 
 [req-9d0eecec-2724-45e8-84b4-7ccf67168b03 
 0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -] 
 create failed (client error): Flow Classifier 
 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier 
 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 
 I tried the same in liberty and it works and sfc successfully routes 
 traffic from both vms to their respective port groups
 
 Liberty setup:
 neutron version 7.0.4
 neutronclient version 3.1.1
 networking-sfc version 1.0.0 (from pip package)
 
 Mitaka setup:
 neutron version 8.1.1
 neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
 networking-sfc version 1.0.1.dev74 (from master branch commit 
 6730b6810355761cf55f04a40cd645f065f15752)
 
 I'll attach the output of commands neutron port-list, port-pair-list, 
 port-pair-group-list, flow-classifier-list and port-chain-list.
 
Is this an intended flow classifier behavior? If so, why? The port
chains and all their participants are different.

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Hayes, Graham
On 02/08/2016 16:37, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
>> On 02/08/2016 14:37, Doug Hellmann wrote:
>>> Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
 On 29/07/2016 21:59, Doug Hellmann wrote:
> One of the outcomes of the discussion at the leadership training
> session earlier this year was the idea that the TC should set some
> community-wide goals for accomplishing specific technical tasks to
> get the projects synced up and moving in the same direction.
>
> After several drafts via etherpad and input from other TC and SWG
> members, I've prepared the change for the governance repo [1] and
> am ready to open this discussion up to the broader community. Please
> read through the patch carefully, especially the "goals/index.rst"
> document which tries to lay out the expectations for what makes a
> good goal for this purpose and for how teams are meant to approach
> working on these goals.
>
> I've also prepared two patches proposing specific goals for Ocata
> [2][3].  I've tried to keep these suggested goals for the first
> iteration limited to "finish what we've started" type items, so
> they are small and straightforward enough to be able to be completed.
> That will let us experiment with the process of managing goals this
> time around, and set us up for discussions that may need to happen
> at the Ocata summit about implementation.
>
> For future cycles, we can iterate on making the goals "harder", and
> collecting suggestions for goals from the community during the forum
> discussions that will happen at summits starting in Boston.
>
> Doug
>
> [1] https://review.openstack.org/349068 describe a process for managing 
> community-wide goals
> [2] https://review.openstack.org/349069 add ocata goal "support python 
> 3.5"
> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> libraries"

 I am confused. When I proposed my patch for doing something very similar
 (Equal Chances for all projects is basically a multiple release goal) I
 got the following rebuttals:

  > "and it would be
  > a mistake to try to require that because the issue is almost always
  > lack of resources and not lack of desire. Volunteers to contribute
  > to the work that's needed will do more to help than a
  > one-size-fits-all policy."

  > This isn't a thing that gets fixed with policy. It gets fixed with
  > people.

  > I am reading through the thread, and it puzzles me that I see a lot
  > of right words about goals but not enough hints on who is going to
  > implement that.

  > I think the right solutions here are human ones. Talk with people.
  > Figure out how you can help lighten their load so that they have
  > breathing space. I think hiding behind policy becomes a way to make
  > us more separate rather than engaging folks more directly.

  > Coming at this with top down declarations of how things should work
  > not only ignores reality of the ecosystem and the current state of
  > these projects, but is also going about things backwards.

  > This entirely ignores that these are all open source projects,
  > which are often very sparsely contributed to. If you have an issue
  > with a project or the interface it provides, then contribute to it.
  > Don't make grandiose resolutions trying to force things into what you
  > see as an ideal state, instead contribute to help fix the problems
  > you've identified.

 And yet, we are currently suggesting a system that will actively (in an
 undefined way) penalise projects who do not comply with a different set
 of proposals, done in a top down manner.

 I may be missing the point, but the two proposals seem to have
 similarities - what is the difference?

 When I see comments like:

  > Project teams who join the big tent sign up to the rights and
  > responsibilities that go along with membership. Those responsibilities
  > include taking some direction from the TC, including regarding work
  > they may not give the same priority as the broader community.

 It sounds like top down is OK, but previous ML thread / TC feedback
 has been different.
>>>
>>> One difference is that these goals are not things like "the
>>> documentation team must include every service project in the
>>> installation guide" but rather would be phrased like "every project
>>> must provide an installation guide". The work is distributed to the
>>> vertical teams, and not focused in the horizontal teams.
>>
>> Well, the wording was actually "the documentation team must provide a
>> way for all projects to be included in the documentation guide". The
>> work was on the 

Re: [openstack-dev] [HA] RFC: High Availability track at future Design Summits

2016-08-02 Thread Adam Spiers
Hi Thierry,

Thierry Carrez  wrote:
> Adam Spiers wrote:
> > I doubt anyone would dispute that High Availability is a really
> > important topic within OpenStack, yet none of the OpenStack
> > conferences or Design Summits so far have provided an "official" track
> > or similar dedicated space for discussion on HA topics.
> > [...]
> 
> We do not provide a specific track at the "Design Summit" for HA (or for
> hot upgrades for the matter) but we have space for "cross-project
> workshops" in which HA topics would be discussed. I suspect what you
> mean here is that the one of two sessions that the current setup allows
> are far from enough to tackle that topic efficiently ?

Yes, I think that's probably true.  I get the impression cross-project
workshops are intended more for coordination of common topics between
many official big tent projects, whereas our topics typically involve
a small handful of projects, some of which are currently unofficial.

> IMHO there is dedicated space -- just not enough of it. It's one of the
> issues with the current Design Summit setup -- just not enough time and
> space to tackle everything in one week. With the new event format I
> expect we'll be able to free up more time to discuss such horizontal
> issues

Right.  I'm looking forward to the new format :-)

> but as far as Barcelona goes (where we have less space and less
> time than in Austin), I'd encourage you to still propose cross-project
> workshops (and engage on the Ops side of the Design Summit to get
> feedback from there as well).

OK thanks - I'll try to figure out the best way of following up on
these two points.  I see that

  https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads

is still empty, so I guess we're still on the early side of planning
for design summit tracks, which hopefully means there's still time
to propose a fishbowl session for Ops feedback on HA.

Thanks a lot for the advice!
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Jay Pipes

On 08/02/2016 11:29 AM, Thierry Carrez wrote:

Doug Hellmann wrote:

[...]

Likewise, what if the Manila project team decides they aren't interested
in supporting Python 3.5 or a particular greenlet library du jour that
has been mandated upon them? Is the only filesystem-as-a-service project
going to be booted from the tent?


I hardly think "move off of the EOL-ed version of our language" and
"use a library du jour" are in the same class.  All of the topics
discussed so far are either focused on eliminating technical debt
that project teams have not prioritized consistently or adding
features that, again for consistency, are deemed important by the
overall community (API microversioning falls in that category,
though that's an example and not in any way an approved goal right
now).


Right, the proposal is pretty clearly about setting a number of
reasonable, small goals for a release cycle that would be awesome to
collectively reach. Not really invasive top-down design mandates that we
would expect teams to want to resist.

IMHO if a team has a good reason for not wanting or not being able to
fulfill a common goal that's fine -- it just needs to get documented and
should not in itself result in getting kicked out of anything. If a
team regularly skips on common goals (and/or misses releases, and/or
doesn't fix security issues) that's a general sign that it's not really
behaving like an OpenStack project and then a case could be opened for
removal, but there is nothing new here.


Sure, I have no disagreement with any of the above. I just see TC 
mandates as a slippery slope. I'm practicing my OpenStack civic "duty" 
to guard against the encroachment of project technical independence.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Tim Bell

> On 02 Aug 2016, at 17:13, Hayes, Graham  wrote:
> 
> On 02/08/2016 15:42, Flavio Percoco wrote:
>> On 01/08/16 10:19 -0400, Sean Dague wrote:
>>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
 Thierry, Ben, Doug,
 
 How can we distinguish between. "Project is doing the right thing, but
 others are not joining" vs "Project is actively trying to keep people
 out"?
>>> 
>>> I think at some level, it's not really that different. If we treat them
>>> as different, everyone will always believe they did all the right
>>> things, but got no results. 3 cycles should be plenty of time to drop
>>> single entity contributions below 90%. That means prioritizing bugs /
>>> patches from outside groups (to drop below 90% on code commits),
>>> mentoring every outside member that provides feedback (to drop below 90%
>>> on reviews), shifting development resources towards mentoring / docs /
>>> on ramp exercises for others in the community (to drop below 90% on core
>>> team).
>>> 
>>> Digging out of a single vendor status is hard, and requires making that
>>> your top priority. If teams aren't interested in putting that ahead of
>>> development work, that's fine, but that doesn't make it a sustainable
>>> OpenStack project.
>> 
>> 
>> ++ to the above! I don't think they are that different either and we might 
>> not
>> need to differentiate them after all.
>> 
>> Flavio
>> 
> 
> I do have one question - how are teams getting out of
> "team:single-vendor" and towards "team:diverse-affiliation" ?
> 
> We have tried to get more people involved with Designate using the ways
> we know how - doing integrations with other projects, pushing designate
> at conferences, helping DNS Server vendors to add drivers, adding
> drivers for DNS Servers and service providers ourselves, adding
> features - the lot.
> 
> We have a lot of user interest (41% of users were interested in using
> us), and are quite widely deployed for a non tc-approved-release
> project (17% - 5% in production). We are actually the most deployed
> non tc-approved-release project.
> 
> We still have 81% of the reviews done by 2 companies, and 83% by 3
> companies.
> 
> I know our project is not "cool", and DNS is probably one of the most
> boring topics, but I honestly believe that it has a place in the
> majority of OpenStack clouds - both public and private. We are a small
> team of people dedicated to making Designate the best we can, but are
> still one company deciding to drop OpenStack / DNS development from
> joining the single-vendor party.
> 
> We are definitely interested in putting community development ahead of
> development work - but what that actual work is seems to difficult to
> nail down. I do feel sometimes that I am flailing in the dark trying to
> improve this.
> 
> If projects could share how that got out of single-vendor or into 
> diverse-affiliation this could really help teams progress in the
> community, and avoid being removed.
> 
> Making grand statements about "work harder on community" without any
> guidance about what we need to work on do not help the community.
> 
> - Graham
> 
> 

Interesting thread… it raises some questions for me

- Some projects in the big tent are inter-related. For example, if we identify 
a need for a project in our production cloud, we contribute a puppet module 
upstream into the openstack-puppet project. If the project is then evicted, 
does this mean that the puppet module would also be removed from the puppet 
openstack project? Documentation repositories? 

- Operators considering including a project in their cloud portfolio look at 
various criteria in places like the project navigator. If a project does not 
have diversity, there is a risk that it would not remain in the big tent after 
an 18 month review of diversity. An operator may therefore delay their testing 
and production deployment of that project which makes it more difficult to 
achieve the diversity given lack of adoption.

I think there is a difference between projects which are meeting a specific set 
of needs in the user community but do not need major support, and ones which 
are not meeting the 4 opens. We’ve really appreciated projects which solve 
a need for us such as EC2 API and RDO which have been open but also had 
significant support from a vendor. They could have improved their diversity by 
submitting fewer commits to get the percentages better...

Tim

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Hayes, Graham
On 02/08/2016 16:48, Steven Dake (stdake) wrote:
> Responses inline:
>
> On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:
>
>> On 02/08/2016 15:42, Flavio Percoco wrote:
>>> On 01/08/16 10:19 -0400, Sean Dague wrote:
 On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
> Thierry, Ben, Doug,
>
> How can we distinguish between. "Project is doing the right thing, but
> others are not joining" vs "Project is actively trying to keep people
> out"?

 I think at some level, it's not really that different. If we treat them
 as different, everyone will always believe they did all the right
 things, but got no results. 3 cycles should be plenty of time to drop
 single entity contributions below 90%. That means prioritizing bugs /
 patches from outside groups (to drop below 90% on code commits),
 mentoring every outside member that provides feedback (to drop below
 90%
 on reviews), shifting development resources towards mentoring / docs /
 on ramp exercises for others in the community (to drop below 90% on
 core
 team).

 Digging out of a single vendor status is hard, and requires making that
 your top priority. If teams aren't interested in putting that ahead of
 development work, that's fine, but that doesn't make it a sustainable
 OpenStack project.
>>>
>>>
>>> ++ to the above! I don't think they are that different either and we
>>> might not
>>> need to differentiate them after all.
>>>
>>> Flavio
>>>
>>
>> I do have one question - how are teams getting out of
>> "team:single-vendor" and towards "team:diverse-affiliation" ?
>>
>> We have tried to get more people involved with Designate using the ways
>> we know how - doing integrations with other projects, pushing designate
>> at conferences, helping DNS Server vendors to add drivers, adding
>> drivers for DNS Servers and service providers ourselves, adding
>> features - the lot.
>>
>> We have a lot of user interest (41% of users were interested in using
>> us), and are quite widely deployed for a non tc-approved-release
>> project (17% - 5% in production). We are actually the most deployed
>> non tc-approved-release project.
>>
>> We still have 81% of the reviews done by 2 companies, and 83% by 3
>> companies.
>
> By the objective criteria of team:single-vendor Designate isn't a single
> vendor project.  By the objective criteria of team:diverse-affiliation
> you're not a diversely affiliated project either.  This is why I had
> suggested we need a third tag which accurately represents where Designate
> is in its community building journey.
>>
>> I know our project is not "cool", and DNS is probably one of the most
>> boring topics, but I honestly believe that it has a place in the
>> majority of OpenStack clouds - both public and private. We are a small
>> team of people dedicated to making Designate the best we can, but are
>> still one company deciding to drop OpenStack / DNS development from
>> joining the single-vendor party.
>
> Agree Designate is important to OpenStack.  But IMO it is not a single
> vendor project as defined by the criteria given the objective statistics
> you mentioned above.

My point is that we are close to being single vendor - it is a real
possibility over then next few months, if a big contributor was to
leave the project, which may happen.

The obvious solution to avoid this is to increase participation - which
is what we are struggling with.

>>
>> We are definitely interested in putting community development ahead of
>> development work - but what that actual work is seems to difficult to
>> nail down. I do feel sometimes that I am flailing in the dark trying to
>> improve this.
>
> Fantastic, it's a high-priority goal.  Sad to hear you're struggling, but
> struggling is part of the activity.
>>
>> If projects could share how that got out of single-vendor or into
>> diverse-affiliation this could really help teams progress in the
>> community, and avoid being removed.
>
> You bring up a fantastic point here - and that is that teams need to share
> techniques for becoming multi-vendor and some day diversely affiliated.  I
> am a super busy atm, or I would volunteer to lead a cross-project effort
> with PTLs to coordinate community building from our shared knowledge pool
> of expert Open Source contributors in the wider OpenStack community.
>
> That said, I am passing the baton for Kolla PTL at the conclusion of
> Newton (assuming the leadership pipeline I've built for Kolla wants to run
> for Kolla PTL), and would be pleased to lead a cross project effort in
> Occata on moving from single-vendor to multi-vendor and beyond if there is
> enough PTL interest.  I take mentorship seriously and the various single
> vendor (and others) PTL's won't be disappointed in such an effort.
>
>>
>> Making grand statements about "work harder on community" without any
>> guidance about what we need to work on do not help the community.
>
> Agree - let's fix that.

Re: [openstack-dev] [juju charms] How to configure glance charm for specific cinder backend?

2016-08-02 Thread Andrey Pavlov
James, thank you for your answer.

I'll file a bug against glance - but in current releases the glance charm has
to do it itself, right?

I'm not sure that I correctly understand your question.
I assume a deployment will have glance and cinder on different machines.
There will also be one relation between cinder and glance that configures
glance to store images in cinder.
The other steps are optional:
If cinder uses a specific backend that needs additional configuration,
that can be done via the storage-backend relation (from a subordinate
charm).
If this backend needs to configure glance's filters or glance's config,
that should be done via a subordinate charm to glance (but
glance doesn't have such a relation now).
Given that, I think an additional relation needs to be added to glance to
allow connecting charms (installed in the same container as glance), as
sketched below.
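
A rough sketch of the subordinate side of that relation (the relation name
and data keys are hypothetical, since glance has no such relation yet; uses
charmhelpers):

    # Hypothetical hook for a glance backend subordinate charm (option 2
    # below). The relation name and data keys are illustrative only.
    from charmhelpers.core.hookenv import relation_set

    def cinder_backend_relation_joined():
        # Hand glance the backend-specific bits it must install/apply:
        # a rootwrap filter file plus a restart hint.
        relation_set(
            backend_name='cinder-backend-x',
            rootwrap_filters='/etc/glance/rootwrap.d/backend_x.filters',
            restart_services='glance-api',
        )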



On Tue, Aug 2, 2016 at 6:15 PM, James Page  wrote:
> Hi Andrey
>
> On Tue, 2 Aug 2016 at 15:59 Andrey Pavlov  wrote:
>>
>> I need to add glance support via storing images in cinder instead of
>> local files.
>> (This works only from Mitaka version due to glance-store package)
>
>
> OK
>
>>
>> First step I've made here -
>> https://review.openstack.org/#/c/348336/
>> This patchset adds ability to relate glance-charm to cinder-charm
>> (it's similar to ceph/swift relations)
>
>
> Looks like a good start, I'll comment directly on the review with any
> specific comments.
>
>>
>> And also it configures glance's rootwrap - original glance package
>> doesn't have such code
>> (
>>   I think that this is a bug in glance-common package - cinder and
>> nova can do it themselves.
>>   And if someone points me to the bug tracker - I will file the bug there.
>> )
>
>
> Sounds like this should be in the glance package:
>
>   https://bugs.launchpad.net/ubuntu/+source/glance/+filebug
>
>  or use:
>
>   ubuntu-bug glance-common
>
> on an installed system.
>
>>
>> But main question is about additional configurations' steps -
>> Some cinder backends need to store additional files in
>> /etc/glance/rootwrap.d/ folder.
>> I have two options to implement this -
>> 1) relate my charm to glance:juju-info (it will be run on the same
>> machine as glance)
>> and do all work in this hook in my charm.
>> 2) add one more relation to glance - like
>> 'storage-backend:cinder-backend' in cinder.
>> And write code in a same way - with ability to pass config options.
>>
>>
>> I prefer option 2. It's more logical and more general. It will allow
>> to configure any cinder's backend.
>
>
> +1 the subordinate approach in cinder (and nova) works well; let's ensure the
> semantics of the relation data mean it's easy to restart the glance services
> from the subordinate service if need be.
>
> Taking this a step further, it might also make sense to have the relation to
> cinder on the subordinate charm and pass up the data item to configure
> glance to use cinder from the sub - does that make sense in this context?
>
> Cheers
>
> James
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind regards,
Andrey Pavlov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-02 Thread Dan Smith
> Multitenant networking
> ==

I haven't reviewed this one much either, but it looks smallish and if
other people are good with it then I think it's probably something we
should do.

> Multi-compute usage via a hash ring
> ===

I'm obviously +2 on this one :)

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Steven Dake (stdake)
Responses inline:

On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:

>On 02/08/2016 15:42, Flavio Percoco wrote:
>> On 01/08/16 10:19 -0400, Sean Dague wrote:
>>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
 Thierry, Ben, Doug,

 How can we distinguish between. "Project is doing the right thing, but
 others are not joining" vs "Project is actively trying to keep people
 out"?
>>>
>>> I think at some level, it's not really that different. If we treat them
>>> as different, everyone will always believe they did all the right
>>> things, but got no results. 3 cycles should be plenty of time to drop
>>> single entity contributions below 90%. That means prioritizing bugs /
>>> patches from outside groups (to drop below 90% on code commits),
>>> mentoring every outside member that provides feedback (to drop below
>>>90%
>>> on reviews), shifting development resources towards mentoring / docs /
>>> on ramp exercises for others in the community (to drop below 90% on
>>>core
>>> team).
>>>
>>> Digging out of a single vendor status is hard, and requires making that
>>> your top priority. If teams aren't interested in putting that ahead of
>>> development work, that's fine, but that doesn't make it a sustainable
>>> OpenStack project.
>>
>>
>> ++ to the above! I don't think they are that different either and we
>>might not
>> need to differentiate them after all.
>>
>> Flavio
>>
>
>I do have one question - how are teams getting out of
>"team:single-vendor" and towards "team:diverse-affiliation" ?
>
>We have tried to get more people involved with Designate using the ways
>we know how - doing integrations with other projects, pushing designate
>at conferences, helping DNS Server vendors to add drivers, adding
>drivers for DNS Servers and service providers ourselves, adding
>features - the lot.
>
>We have a lot of user interest (41% of users were interested in using
>us), and are quite widely deployed for a non tc-approved-release
>project (17% - 5% in production). We are actually the most deployed
>non tc-approved-release project.
>
>We still have 81% of the reviews done by 2 companies, and 83% by 3
>companies.

By the objective criteria of team:single-vendor Designate isn't a single
vendor project.  By the objective criteria of team:diverse-affiliation
you're not a diversely affiliated project either.  This is why I had
suggested we need a third tag which accurately represents where Designate
is in its community building journey.
>
>I know our project is not "cool", and DNS is probably one of the most
>boring topics, but I honestly believe that it has a place in the
>majority of OpenStack clouds - both public and private. We are a small
>team of people dedicated to making Designate the best we can, but are
>still one company deciding to drop OpenStack / DNS development from
>joining the single-vendor party.

Agree Designate is important to OpenStack.  But IMO it is not a single
vendor project as defined by the criteria given the objective statistics
you mentioned above.

>
>We are definitely interested in putting community development ahead of
>development work - but what that actual work is seems to difficult to
>nail down. I do feel sometimes that I am flailing in the dark trying to
>improve this.

Fantastic, it's a high-priority goal.  Sad to hear you're struggling, but
struggling is part of the activity.
>
>If projects could share how that got out of single-vendor or into
>diverse-affiliation this could really help teams progress in the
>community, and avoid being removed.

You bring up a fantastic point here - and that is that teams need to share
techniques for becoming multi-vendor and some day diversely affiliated.  I
am super busy atm, or I would volunteer to lead a cross-project effort
with PTLs to coordinate community building from our shared knowledge pool
of expert Open Source contributors in the wider OpenStack community.

That said, I am passing the baton for Kolla PTL at the conclusion of
Newton (assuming the leadership pipeline I've built for Kolla wants to run
for Kolla PTL), and would be pleased to lead a cross project effort in
Occata on moving from single-vendor to multi-vendor and beyond if there is
enough PTL interest.  I take mentorship seriously and the various single
vendor (and others) PTL's won't be disappointed in such an effort.

>
>Making grand statements about "work harder on community" without any
>guidance about what we need to work on do not help the community.

Agree - let's fix that.  Unfortunately it can't be fixed in an email thread
- it requires a cross-project, team-based approach with at least 6 months of
activity.

If PTLs can weigh in on this thread and commit to participation in such a
cross-project subgroup, I'd be happy to lead it.

Regards
-steve


>
>- Graham
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 

Re: [openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-02 Thread Matt Riedemann

On 8/1/2016 4:20 PM, Jim Rollenhagen wrote:

Yes, I know this is stupid late for these.

I'd like to request two exceptions to the non-priority feature freeze,
for a couple of features in the Ironic driver.  These were not requested
at the normal time as I thought they were nowhere near ready.

Multitenant networking
==

Ironic's top feature request for around 2 years now has been to make
networking safe for multitenant use, as opposed to a flat network
(including control plane access!) for all tenants. We've been working on
a solution for 3 cycles now, and finally have the Ironic pieces of it
done, after a heroic effort to finish things up this cycle.

There's just one patch left to make it work, in the virt driver in Nova.
That is here: https://review.openstack.org/#/c/297895/

It's important to note that this actually fixes some dead code we pushed
on before this feature was done, and is only ~50 lines, half of which
are comments/reno.

Reviewers on this unearthed a problem on the ironic side, which I expect
to be fixed in the next couple of days:
https://review.openstack.org/#/q/topic:bug/1608511

We also have CI for this feature in ironic, and I have a depends-on
testing all of this as a whole: https://review.openstack.org/#/c/347004/

Per Matt's request, I'm also adding that job to Nova's experimental
queue: https://review.openstack.org/#/c/349595/

A couple folks from the ironic team have also done some manual testing
of this feature, with the nova code in, using real switches.

Merging this patch would bring a *huge* win for deployers and operators,
and I don't think it's very risky. It'll be ready to go sometime this
week, once that ironic chain is merged.


I've reviewed this one and it looks good to me. It's dependent on 
python-ironicclient>=1.5.0 which Jim has a g-r bump up as a dependency. 
And the gate-tempest-dsvm-ironic-multitenant-network-nv job is testing 
this and passing on the test patch in ironic (and that job is in the 
nova experimental queue now).


The upgrade procedure had some people scratching their heads in IRC this 
week so I've stated that we need clear documentation there, which will 
probably live here:


http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html

Since Ironic isn't in here:

http://docs.openstack.org/ops-guide/ops_upgrades.html#update-services

But the docs in the Ironic repo say that Nova should be upgraded first 
when going from Juno to Kilo so it's definitely important to get those 
docs updated for upgrades from Mitaka to Newton, but Jim said he'd do 
that this cycle.


Given how long people have been asking for this in Ironic and the Ironic 
team has made it a priority to get it working on their side, and there 
is CI already and a small change in Nova, I'm OK with giving a 
non-priority FFE for this.




Multi-compute usage via a hash ring
===

One of the major problems with the ironic virt driver today is that we
don't support running multiple nova-compute daemons with the ironic driver
loaded, because each compute service manages all ironic nodes and stomps
on each other.

There's currently a hack in the ironic virt driver to
kind of make this work, but instance locking still isn't done:
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py

That is also holding back removing the pluggable compute manager in nova:
https://github.com/openstack/nova/blob/master/nova/conf/service.py#L64-L69

And as someone that runs a deployment using this hack, I can tell you
first-hand that it doesn't work well.

We (the ironic and nova community) have been working on fixing this for
2-3 cycles now, trying to find a solution that isn't terrible and
doesn't break existing use cases. We've been conflating it with how we
schedule ironic instances and keep managing to find a big wedge with
each approach. The best approach we've found involves duplicating the
compute capabilities and affinity filters in ironic.

Some of us were talking at the nova midcycle and decided we should try
the hash ring approach (like ironic uses to shard nodes between
conductors) and see how it works out, even though people have said in
the past that wouldn't work. I did a proof of concept last week, and
started playing with five compute daemons in a devstack environment.
Two nerd-snipey days later and I had a fully working solution, with unit
tests, passing CI. That is here:
https://review.openstack.org/#/c/348443/
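
(For the curious, the core idea is small enough to sketch. This is a generic
consistent-hash ring, not the code in the review above: each node maps to the
first compute host clockwise from its hash, so membership changes only move
the nodes adjacent to the change.)

    # Generic consistent-hash-ring sketch -- illustrative only, not the
    # implementation in the review. Deterministically shards ironic nodes
    # across the currently-alive nova-compute hosts.
    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, hosts, replicas=32):
            self._ring = {}
            self._keys = []
            for host in hosts:
                for i in range(replicas):
                    key = self._hash('%s-%d' % (host, i))
                    self._ring[key] = host
                    bisect.insort(self._keys, key)

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

        def get_host(self, node_uuid):
            # First ring key clockwise from the node's hash owns the node.
            idx = bisect.bisect(self._keys, self._hash(node_uuid))
            return self._ring[self._keys[idx % len(self._keys)]]

    ring = HashRing(['compute1', 'compute2', 'compute3'])
    print(ring.get_host('9f8d6b2a-node-uuid'))  # stable until membership changes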

We'll need to work on CI for this with multiple compute services. That
shouldn't be crazy difficult, but I'm not sure we'll have it done this
cycle (and it might get interesting trying to test computes joining and
leaving the cluster).

It also needs some testing at scale, which is hard to do in the upstream
gate, but I'll be doing my best to ship this downstream as soon as I
can, and iterating on any problems we see there.

It's a huge win for operators, for only a few hundred lines (some of

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
> On 02/08/2016 14:37, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
> >> On 29/07/2016 21:59, Doug Hellmann wrote:
> >>> One of the outcomes of the discussion at the leadership training
> >>> session earlier this year was the idea that the TC should set some
> >>> community-wide goals for accomplishing specific technical tasks to
> >>> get the projects synced up and moving in the same direction.
> >>>
> >>> After several drafts via etherpad and input from other TC and SWG
> >>> members, I've prepared the change for the governance repo [1] and
> >>> am ready to open this discussion up to the broader community. Please
> >>> read through the patch carefully, especially the "goals/index.rst"
> >>> document which tries to lay out the expectations for what makes a
> >>> good goal for this purpose and for how teams are meant to approach
> >>> working on these goals.
> >>>
> >>> I've also prepared two patches proposing specific goals for Ocata
> >>> [2][3].  I've tried to keep these suggested goals for the first
> >>> iteration limited to "finish what we've started" type items, so
> >>> they are small and straightforward enough to be able to be completed.
> >>> That will let us experiment with the process of managing goals this
> >>> time around, and set us up for discussions that may need to happen
> >>> at the Ocata summit about implementation.
> >>>
> >>> For future cycles, we can iterate on making the goals "harder", and
> >>> collecting suggestions for goals from the community during the forum
> >>> discussions that will happen at summits starting in Boston.
> >>>
> >>> Doug
> >>>
> >>> [1] https://review.openstack.org/349068 describe a process for managing 
> >>> community-wide goals
> >>> [2] https://review.openstack.org/349069 add ocata goal "support python 
> >>> 3.5"
> >>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >>> libraries"
> >>
> >> I am confused. When I proposed my patch for doing something very similar
> >> (Equal Chances for all projects is basically a multiple release goal) I
> >> got the following rebuttals:
> >>
> >>  > "and it would be
> >>  > a mistake to try to require that because the issue is almost always
> >>  > lack of resources and not lack of desire. Volunteers to contribute
> >>  > to the work that's needed will do more to help than a
> >>  > one-size-fits-all policy."
> >>
> >>  > This isn't a thing that gets fixed with policy. It gets fixed with
> >>  > people.
> >>
> >>  > I am reading through the thread, and it puzzles me that I see a lot
> >>  > of right words about goals but not enough hints on who is going to
> >>  > implement that.
> >>
> >>  > I think the right solutions here are human ones. Talk with people.
> >>  > Figure out how you can help lighten their load so that they have
> >>  > breathing space. I think hiding behind policy becomes a way to make
> >>  > us more separate rather than engaging folks more directly.
> >>
> >>  > Coming at this with top down declarations of how things should work
> >>  > not only ignores reality of the ecosystem and the current state of
> >>  > these projects, but is also going about things backwards.
> >>
> >>  > This entirely ignores that these are all open source projects,
> >>  > which are often very sparsely contributed to. If you have an issue
> >>  > with a project or the interface it provides, then contribute to it.
> >>  > Don't make grandiose resolutions trying to force things into what you
> >>  > see as an ideal state, instead contribute to help fix the problems
> >>  > you've identified.
> >>
> >> And yet, we are currently suggesting a system that will actively (in an
> >> undefined way) penalise projects who do not comply with a different set
> >> of proposals, done in a top down manner.
> >>
> >> I may be missing the point, but the two proposals seem to have
> >> similarities - what is the difference?
> >>
> >> When I see comments like:
> >>
> >>  > Project teams who join the big tent sign up to the rights and
> >>  > responsibilities that go along with membership. Those responsibilities
> >>  > include taking some direction from the TC, including regarding work
> >>  > they may not give the same priority as the broader community.
> >>
> >> It sounds like top down is OK, but previous ML thread / TC feedback
> >> has been different.
> >
> > One difference is that these goals are not things like "the
> > documentation team must include every service project in the
> > installation guide" but rather would be phrased like "every project
> > must provide an installation guide". The work is distributed to the
> > vertical teams, and not focused in the horizontal teams.
> 
> Well, the wording was actually "the documentation team must provide a
> way for all projects to be included in the documentation guide". The
> work was on the horizontal teams to provide a method, and the 

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Thierry Carrez
Doug Hellmann wrote:
> [...]
>> Likewise, what if the Manila project team decides they aren't interested 
>> in supporting Python 3.5 or a particular greenlet library du jour that 
>> has been mandated upon them? Is the only filesystem-as-a-service project 
>> going to be booted from the tent?
> 
> I hardly think "move off of the EOL-ed version of our language" and
> "use a library du jour" are in the same class.  All of the topics
> discussed so far are either focused on eliminating technical debt
> that project teams have not prioritized consistently or adding
> features that, again for consistency, are deemed important by the
> overall community (API microversioning falls in that category,
> though that's an example and not in any way an approved goal right
> now).

Right, the proposal is pretty clearly about setting a number of
reasonable, small goals for a release cycle that would be awesome to
collectively reach. Not really invasive top-down design mandates that we
would expect teams to want to resist.

IMHO if a team has a good reason for not wanting or not being able to
fulfill a common goal that's fine -- it just needs to get documented and
should not in itself result in getting kicked out of anything. If a
team regularly skips on common goals (and/or misses releases, and/or
doesn't fix security issues) that's a general sign that it's not really
behaving like an OpenStack project and then a case could be opened for
removal, but there is nothing new here.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Timofei Durakov
Hi,

Taking into account everything above, I'd prefer to see
live_migration_tunnelled (which corresponds to VIR_MIGRATE_TUNNELLED)
defaulted to False. We just need to write a release note for this change,
and on host startup emit a LOG.warning to notify the operator that
live migrations are not tunnelled. For me, that would be enough. Then just
put [1] on top of it.
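
In nova.conf terms that would simply mean (the option lives in the [libvirt]
group, per [2]):

    [libvirt]
    # native hypervisor transport; no tunnelling through libvirtd
    live_migration_tunnelled = False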

Thanks,
Timofey


On Tue, Aug 2, 2016 at 5:36 PM, Koniszewski, Pawel <
pawel.koniszew...@intel.com> wrote:

> In Mitaka development cycle 'live_migration_flag' and
> 'block_migration_flag' have been marked as deprecated for removal. I'm
> working on a patch [1] to remove both of them and want to ask what we
> should do with live_migration_tunnelled logic.
>
> The default configuration of both flags contain VIR_MIGRATE_TUNNELLED
> option. It is there to avoid the need to configure the network to allow
> direct communication between hypervisors. However, the tradeoff is that it
> slows down all migrations by up to 80% due to increased number of memory
> copies and single-threaded encryption mechanism in Libvirt. By 80% here I
> mean that transfer between source and destination node is around 2Gb/s on a
> 10Gb network. I believe that this is a configuration issue and people
> deploying OpenStack are not aware that live migrations with this flag will
> not work. I'm not sure that this is something we wanted to achieve. AFAIK
> most operators are turning it OFF in order to make live migration usable.
>
> Going to a new flag that is there to keep possibility to turn tunneling on
> - Live_migration_tunnelled [2] which is a tri-state boolean - None, False,
> True:
>
> * True - means that live migrations will be tunneled through libvirt.
> * False - no tunneling, native hypervisor transport.
> * None - nova will choose default based on, e.g., the availability of
> native encryption support in the hypervisor. (Default value)
>
> Right now we don't have any logic implemented for None value which is a
> default value. So the question here is should I implement logic so that if
> live_migration_tunnelled=None it will still use VIR_MIGRATE_TUNNELLED if
> native encryption is not available? Given the impact of this flag I'm not
> sure that we really want to keep it there. Another option is to change
> default value of live_migration_tunnelled to be True. In both cases we will
> again end up with slower LM and people complaining that LM does not work at
> all in OpenStack.
>
> Thoughts?
>
> [1] https://review.openstack.org/#/c/334860/
> [2]
> https://github.com/openstack/nova/blob/be59c19c969acf6b25b0711f0ebfb26aaed0a171/nova/conf/libvirt.py#L107
>
> Kind Regards,
> Pawel Koniszewski
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [juju charms] How to configure glance charm for specific cinder backend?

2016-08-02 Thread James Page
Hi Andrey

On Tue, 2 Aug 2016 at 15:59 Andrey Pavlov  wrote:

> I need to add glance support via storing images in cinder instead of
> local files.
> (This works only from Mitaka version due to glance-store package)
>

OK


> First step I've made here -
> https://review.openstack.org/#/c/348336/
> This patchset adds ability to relate glance-charm to cinder-charm
> (it's similar to ceph/swift relations)
>

Looks like a good start, I'll comment directly on the review with any
specific comments.


> It also configures glance's rootwrap - the original glance package
> doesn't have such code.
> (
>   I think that this is a bug in the glance-common package - cinder and
> nova can do it themselves.
>   If someone points me to the bug tracker, I will file the bug there.
> )
>

Sounds like this should be in the glance package:

  https://bugs.launchpad.net/ubuntu/+source/glance/+filebug

 or use:

  ubuntu-bug glance-common

on an installed system.


> But the main question is about additional configuration steps -
> some cinder backends need to store additional files in the
> /etc/glance/rootwrap.d/ folder.
> I have two options for implementing this:
> 1) relate my charm to glance:juju-info (it will run on the same
> machine as glance)
> and do all the work in this hook in my charm;
> 2) add one more relation to glance, like
> 'storage-backend:cinder-backend' in cinder,
> and write the code in the same way - with the ability to pass config
> options.
>

> I prefer option 2. It's more logical and more general. It will allow
> any cinder backend to be configured.
>

+1 - the subordinate approach in cinder (and nova) works well; let's ensure
the semantics of the relation data make it easy to restart the glance
services from the subordinate service if need be.

Taking this a step further, it might also make sense to have the relation
to cinder on the subordinate charm and pass up the data item to configure
glance to use cinder from the sub - does that make sense in this context?
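
Roughly, I'd picture the principal side of that relation looking something
like this (a sketch only - the relation and key names here are assumptions,
not a settled interface):

    from charmhelpers.core.hookenv import relation_get
    from charmhelpers.core.host import service_restart

    def storage_backend_relation_changed():
        # The subordinate passes its rootwrap filter file up over the
        # relation; glance writes it out and restarts its services.
        filters = relation_get('rootwrap-filters')
        if filters:
            with open('/etc/glance/rootwrap.d/backend.filters', 'w') as f:
                f.write(filters)
            service_restart('glance-api')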

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver

2016-08-02 Thread Hedlind, Richard
Status update. Our CI is back up and has been passing tests successfully for
~18h now. I will keep a close eye on it to make sure it stays up. Sorry about
the downtime.

Richard

-Original Message-
From: Hedlind, Richard [mailto:richard.hedl...@x-io.com] 
Sent: Thursday, July 28, 2016 9:37 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver

Hi Sean,
Thanks for the heads up. I have been busy on other projects and not been 
involved in maintaining the CI. I will look into it and get it back up and 
running.
I will keep you posted on the progress.

Thanks,
Richard

-Original Message-
From: Sean McGinnis [mailto:sean.mcgin...@gmx.com] 
Sent: Wednesday, July 27, 2016 2:26 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Pending removal of X-IO volume driver

The Cinder policy for driver CI requires that all volume drivers have a CI
reporting on any new patchset. CIs may have some downtime, but if they do not
report within a two-week period they are considered out of compliance with
our policy.

This is a notification that the X-IO OpenStack CI is out of compliance.
It has not reported since March 18th, 2016.

The patch for driver removal has been posted here:

https://review.openstack.org/348022

If this CI is not brought into compliance, the patch to remove the driver will 
be approved one week from now.

Thanks,
Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-08-02 Thread Alex Xu
Hi,

We have the weekly Nova API meeting tomorrow. The meeting is held Wednesday
at 1300 UTC and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #89

2016-08-02 Thread Emilien Macchi
No items on our agenda, so we cancelled the meeting - see you next week!

On Mon, Aug 1, 2016 at 3:31 PM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4.
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160802
>
> Feel free to add topics, and any outstanding bug and patch.
>
> See you tomorrow!
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Hayes, Graham
On 02/08/2016 15:42, Flavio Percoco wrote:
> On 01/08/16 10:19 -0400, Sean Dague wrote:
>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
>>> Thierry, Ben, Doug,
>>>
>>> How can we distinguish between "Project is doing the right thing, but
>>> others are not joining" vs "Project is actively trying to keep people
>>> out"?
>>
>> I think at some level, it's not really that different. If we treat them
>> as different, everyone will always believe they did all the right
>> things, but got no results. 3 cycles should be plenty of time to drop
>> single entity contributions below 90%. That means prioritizing bugs /
>> patches from outside groups (to drop below 90% on code commits),
>> mentoring every outside member that provides feedback (to drop below 90%
>> on reviews), shifting development resources towards mentoring / docs /
>> on ramp exercises for others in the community (to drop below 90% on core
>> team).
>>
>> Digging out of a single vendor status is hard, and requires making that
>> your top priority. If teams aren't interested in putting that ahead of
>> development work, that's fine, but that doesn't make it a sustainable
>> OpenStack project.
>
>
> ++ to the above! I don't think they are that different either and we might not
> need to differentiate them after all.
>
> Flavio
>

I do have one question - how are teams getting out of
"team:single-vendor" and towards "team:diverse-affiliation"?

We have tried to get more people involved with Designate using the ways
we know how - doing integrations with other projects, pushing designate
at conferences, helping DNS Server vendors to add drivers, adding
drivers for DNS Servers and service providers ourselves, adding
features - the lot.

We have a lot of user interest (41% of users were interested in using
us), and are quite widely deployed for a non-tc-approved-release
project (17% - 5% in production). We are actually the most deployed
non-tc-approved-release project.

We still have 81% of the reviews done by 2 companies, and 83% by 3
companies.

I know our project is not "cool", and DNS is probably one of the most
boring topics, but I honestly believe that it has a place in the
majority of OpenStack clouds - both public and private. We are a small
team of people dedicated to making Designate the best we can, but we are
still just one company's decision to drop OpenStack / DNS development away
from joining the single-vendor party.

We are definitely interested in putting community development ahead of
development work - but what that actual work is seems too difficult to
nail down. I do feel sometimes that I am flailing in the dark trying to
improve this.

If projects could share how they got out of single-vendor or into
diverse-affiliation, it could really help teams progress in the
community and avoid being removed.

Making grand statements about "work harder on community" without any
guidance about what we need to work on does not help the community.

- Graham


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Steven Dake (stdake)


On 8/2/16, 7:17 AM, "Ed Leafe"  wrote:

>On Aug 2, 2016, at 8:50 AM, Steven Dake (stdake)  wrote:
>
>> For example tripleo is single-vendor, but is doing all the right things
>>to
>> dig out of single vendor by doing actual community building.  They
>>aren't
>> just trying, but are trying *very* hard with their activities.  They
>>have
>> the right intent but how to measure intent objectively?  That would be
>>my
>> major concern.
>
>This is exactly the sort of reason why an automatic expulsion is not
>being proposed, but rather a review by humans.
>
>-- Ed Leafe
>
Ed,

Concern answered.  Thanks!

-steve

>
>
>
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-08-02 Thread Flavio Percoco

On 29/07/16 13:57 -0400, Doug Hellmann wrote:

Excerpts from Joshua Harlow's message of 2016-07-29 10:35:18 -0700:

I prefer 'one bucket repo for OpenStack community Errbot plug-ins' since
I don't like a bunch of repos (seems like a premature optimization ~at
this time~), but I could see it going either way on this one.

Jeremy Stanley wrote:
> On 2016-07-29 09:41:40 -0700 (-0700), Joshua Harlow wrote:
> [...]
>> What shall we name it???
> [...]
>
> Also, one bucket repo for OpenStack community Errbot plug-ins, or
> one repo per plug-in with a consistent naming scheme?



I agree. How about "openstack/irc-bot-plugins"? If we need to build an
artifact we can name that openstack-irc-bot-plugins and if we don't then
we can just install directly from the git repo (the docs for errbot talk
about installing from github, so I'm not sure what the "best practice"
is for that).


Count me in!

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pydotplus (taskflow) vs pydot-ng (fuel)

2016-08-02 Thread Igor Kalnitsky
Hi Thomas,

If I'm not mistaken, pydot-ng [1] was made by an ex-fueler in order
to overcome some limitations of pydot (and to avoid changing much). If
pydotplus is an actively maintained project and does the same thing, I
vote for using it in Fuel.

Thanks,
Igor


[1]: https://pypi.io/project/pydot-ng/

On Tue, Aug 2, 2016 at 4:44 PM, Thomas Goirand  wrote:
> Hi,
>
> Fuel uses pydot-ng, and (at least) taskflow uses pydotplus. I believe
> neither uses pydot because it's dead upstream.
>
> Could we have a bit of consistency here, and have one or the other
> component switch, so we could get rid of one of two packages that do
> the same thing in downstream distros?
>
> Cheers,
>
> Thomas Goirand (zigo)
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [juju charms] How to configure glance charm for specific cinder backend?

2016-08-02 Thread Andrey Pavlov
Hi All,

I need to add support for glance storing images in cinder instead of
local files.
(This works only from the Mitaka release onwards, due to the glance-store
package.)

First step I've made here -
https://review.openstack.org/#/c/348336/
This patchset adds ability to relate glance-charm to cinder-charm
(it's similar to ceph/swift relations)
It also configures glance's rootwrap - the original glance package
doesn't have such code.
(
  I think that this is a bug in the glance-common package - cinder and
nova can do it themselves.
  If someone points me to the bug tracker, I will file the bug there.
)

But the main question is about additional configuration steps -
some cinder backends need to store additional files in the
/etc/glance/rootwrap.d/ folder.
I have two options for implementing this:
1) relate my charm to glance:juju-info (it will run on the same
machine as glance)
and do all the work in this hook in my charm;
2) add one more relation to glance, like
'storage-backend:cinder-backend' in cinder,
and write the code in the same way - with the ability to pass config
options.

I prefer option 2. It's more logical and more general. It will allow
any cinder backend to be configured.

But I'd prefer to hear the community's opinion before I start implementing
this.

-- 
Kind regards,
Andrey Pavlov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Flavio Percoco

On 02/08/16 09:17 -0500, Ed Leafe wrote:

On Aug 2, 2016, at 8:50 AM, Steven Dake (stdake)  wrote:


For example tripleo is single-vendor, but is doing all the right things to
dig out of single vendor by doing actual community building.  They aren't
just trying, but are trying *very* hard with their activities.  They have
the right intent but how to measure intent objectively?  That would be my
major concern.


This is exactly the sort of reason why an automatic expulsion is not being 
proposed, but rather a review by humans.


Also the reason why neither the single-vendor nor the diverse-affiliation
tag is applied automatically.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Flavio Percoco

On 01/08/16 10:28 -0400, Davanum Srinivas wrote:

Sean,

So we will programmatically test the metrics (if we are not doing that
already) to apply/remove the "team:single-vendor" tag:

https://governance.openstack.org/reference/tags/team_single-vendor.html

And trigger exit when the tag is present for more than 3 cycles in a
row (say as of release date?)


We update these tags frequently enough and yes, I guess it would be possible to
programmatically check for how long a project has had the single-vendor tag.

Flavio


Thanks,
-- Dims

On Mon, Aug 1, 2016 at 10:19 AM, Sean Dague  wrote:

On 08/01/2016 09:58 AM, Davanum Srinivas wrote:

Thierry, Ben, Doug,

How can we distinguish between "Project is doing the right thing, but
others are not joining" vs "Project is actively trying to keep people
out"?


I think at some level, it's not really that different. If we treat them
as different, everyone will always believe they did all the right
things, but got no results. 3 cycles should be plenty of time to drop
single entity contributions below 90%. That means prioritizing bugs /
patches from outside groups (to drop below 90% on code commits),
mentoring every outside member that provides feedback (to drop below 90%
on reviews), shifting development resources towards mentoring / docs /
on ramp exercises for others in the community (to drop below 90% on core
team).

Digging out of a single vendor status is hard, and requires making that
your top priority. If teams aren't interested in putting that ahead of
development work, that's fine, but that doesn't make it a sustainable
OpenStack project.

-Sean

--
Sean Dague
http://dague.net




--
Davanum Srinivas :: https://twitter.com/dims



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Flavio Percoco

On 01/08/16 10:19 -0400, Sean Dague wrote:

On 08/01/2016 09:58 AM, Davanum Srinivas wrote:

Thierry, Ben, Doug,

How can we distinguish between "Project is doing the right thing, but
others are not joining" vs "Project is actively trying to keep people
out"?


I think at some level, it's not really that different. If we treat them
as different, everyone will always believe they did all the right
things, but got no results. 3 cycles should be plenty of time to drop
single entity contributions below 90%. That means prioritizing bugs /
patches from outside groups (to drop below 90% on code commits),
mentoring every outside member that provides feedback (to drop below 90%
on reviews), shifting development resources towards mentoring / docs /
on ramp exercises for others in the community (to drop below 90% on core
team).

Digging out of a single vendor status is hard, and requires making that
your top priority. If teams aren't interested in putting that ahead of
development work, that's fine, but that doesn't make it a sustainable
OpenStack project.



++ to the above! I don't think they are that different either and we might not
need to differentiate them after all.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Koniszewski, Pawel
In the Mitaka development cycle 'live_migration_flag' and
'block_migration_flag' were marked as deprecated for removal. I'm working on
a patch [1] to remove both of them and want to ask what we should do with
the live_migration_tunnelled logic.

The default configuration of both flags contains the VIR_MIGRATE_TUNNELLED
option. It is there to avoid the need to configure the network to allow
direct communication between hypervisors. However, the tradeoff is that it
slows down all migrations by up to 80% due to the increased number of memory
copies and the single-threaded encryption mechanism in libvirt. By 80% here
I mean that transfer between the source and destination nodes is around
2Gb/s on a 10Gb network. I believe that this is a configuration issue and
people deploying OpenStack are not aware that live migrations with this flag
will not work. I'm not sure that this is something we wanted to achieve.
AFAIK most operators are turning it OFF in order to make live migration
usable.

We are moving to a new flag that keeps the possibility to turn tunneling on
- live_migration_tunnelled [2], which is a tri-state boolean - None, False,
True:

* True - live migrations will be tunneled through libvirt.
* False - no tunneling, native hypervisor transport.
* None - nova will choose a default based on, e.g., the availability of
native encryption support in the hypervisor. (Default value)

Right now we don't have any logic implemented for the None value, which is
the default. So the question here is: should I implement logic so that
live_migration_tunnelled=None still uses VIR_MIGRATE_TUNNELLED when native
encryption is not available? Given the impact of this flag, I'm not sure
that we really want to keep it there. Another option is to change the
default value of live_migration_tunnelled to True. In both cases we will
again end up with slower live migration and people complaining that it does
not work at all in OpenStack.
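
For concreteness, the None handling in question would be roughly (a sketch
with assumed names, not actual nova code):

    def _use_tunnelled_migration(conf, native_encryption_available):
        tunnelled = conf.libvirt.live_migration_tunnelled
        if tunnelled is not None:
            # The operator made an explicit choice; honour it.
            return tunnelled
        # None: tunnel only when the hypervisor offers no native
        # encryption for the migration stream.
        return not native_encryption_available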

Thoughts?

[1] https://review.openstack.org/#/c/334860/
[2] 
https://github.com/openstack/nova/blob/be59c19c969acf6b25b0711f0ebfb26aaed0a171/nova/conf/libvirt.py#L107

Kind Regards,
Pawel Koniszewski

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Angular panel enable/disable not overridable in local_settings

2016-08-02 Thread Rob Cresswell
Hi all,

So we seem to be adopting a pattern of using UPDATE_HORIZON_CONFIG in the
enabled files to add a legacy/angular toggle to the settings. I don't like
this, because in settings.py the enabled files are processed *after*
local_settings.py is imported, meaning the angular panel will always be
enabled, and disabling it would require a change to a local/enabled file.

My suggestion would be:

- Remove current UPDATE_HORIZON_CONFIG change in the swift panel and images 
panel patch
- Add equivalents ('angular') to the settings.py HORIZON_CONFIG dict, and then 
the 'legacy' version to the test settings.

I think that should run UTs as expected, and allow the legacy/angular panel to 
be toggled via local_settings.
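
In other words, something like this (the key name is purely illustrative,
not a settled convention):

    # settings.py - default to the angular panel:
    HORIZON_CONFIG = {
        'images_panel': 'angular',
    }

    # test settings (or local_settings.py) - flip back to legacy:
    HORIZON_CONFIG['images_panel'] = 'legacy'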

Was there a reason we chose to use UPDATE_HORIZON_CONFIG, rather than just 
updating the dict in settings.py? I couldn't recall a reason, and the original 
patch ( https://review.openstack.org/#/c/293168/ ) doesn't seem to indicate why.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-02 Thread Spyros Trigazis
I just filed a ticket to acquire the username openstackmagnum.

I included Hongbin's contact information explaining that he's the project's
PTL.

Thanks Steve,
Spyros


On 2 August 2016 at 13:29, Steven Dake (stdake)  wrote:

> Ton,
>
> I may or may not have set it up early in Magnum's development.  I just
> don't remember.  My recommendation is to file a support ticket with docker
> and see if they will tell you who it belongs to (as in does it belong to
> one of the founders of Magnum) or if it belongs to some other third party.
> Their support is very fast.  They may not be able to give you the answer if
> it's not an openstacker.
>
> Regards
> -steve
>
>
> From: Ton Ngo 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, August 1, 2016 at 1:06 PM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub
>
> Hi everyone,
> At the last IRC meeting, the team discussed the need for hosting some
> container images on Docker Hub
> to facilitate development. There is currently a Magnum account on Docker
> Hub, but this is not owned by anyone
> on the team, so we would like to find out who the owner is and whether this
> account was set up for OpenStack Magnum.
> Thanks in advance!
> Ton Ngo,
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] race in keystone unit tests

2016-08-02 Thread Lance Bragstad
Hi Sean,

Thanks for the information. This obviously looks Fernet-related and I would
be happy to spend some cycles on it. We recently landed a bunch of
refactors in keystone to improve Fernet test coverage. This could be
related to those refactors. Just double checking - you haven't opened a
bug in Launchpad for this yet, have you?

Thanks for the heads up!

On Tue, Aug 2, 2016 at 5:32 AM, Sean Dague  wrote:

> One of my concerns about stacking up project unit tests in the
> requirements jobs is that the unit tests aren't as free of races as you
> would imagine. Because they previously only impacted the one project
> team, those teams are often just quick to recheck instead of getting to
> the bottom of it. Cross-testing with them in a voting way changes their
> impact.
>
> The keystone unit tests have an existing race condition in them, which
> recently failed an unrelated requirements bump -
>
> http://logs.openstack.org/50/348250/6/check/gate-cross-keystone-python27-db-ubuntu-xenial/962327d/console.html#_2016-08-02_03_52_14_537923
>
> I'm not fully sure where to go from here. But wanted to make sure that
> data is out there. Any keystone folks who can dive into and sort it out
> would be highly appreciated.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Ed Leafe
On Aug 2, 2016, at 8:50 AM, Steven Dake (stdake)  wrote:

> For example tripleo is single-vendor, but is doing all the right things to
> dig out of single vendor by doing actual community building.  They aren't
> just trying, but are trying *very* hard with their activities.  They have
> the right intent but how to measure intent objectively?  That would be my
> major concern.

This is exactly the sort of reason why an automatic expulsion is not being 
proposed, but rather a review by humans. 

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Does this api modification need Microversion?

2016-08-02 Thread Ed Leafe
On Aug 2, 2016, at 1:11 AM, han.ro...@zte.com.cn wrote:

> Allow "revert_resize" to recover an instance in the error state after
> resize/migrate.
>
> When resizing/migrating an instance, if an error occurs on the source
> compute node, the instance state can currently be rolled back to active.
> But if an error occurs in the "finish_resize" function on the destination
> compute node, the instance state is not rolled back to active.
>
> This patch rolls the instance state back from error to active when a
> resize or migrate action fails on the destination compute node.

I haven’t looked at the patch yet, but in general correcting an error on the 
server doesn’t require a microversion, unless the response to the user would 
change. From your description it doesn’t sound like that is the case.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-02 Thread Matt Riedemann

On 8/2/2016 2:41 AM, Alex Xu wrote:

It's a little strange that we have two API endpoints: one is
'/servers/{uuid}/os-interfaces', the other is
'/servers/{uuid}/os-virtual-interfaces'.

I prefer to keep os-attach-interface, since I think we should deprecate
nova-network also. Actually, we deprecated all the nova-network-related
APIs in 2.36 as well. And since os-attach-interface didn't support
nova-network, it is the right choice.

So we can deprecate os-virtual-interfaces in Newton. And in Ocata, we
correct the implementation to get the VIF info and tag.
os-attach-interface actually accepts the server_id, and there is a check
ensuring the port belongs to the server. So it shouldn't be very hard to
get the VIF info and tag.

And sorry that I missed this when coding the patches too... let me know if
you need any help here.






Alex,

os-interface will be deprecated; those are the APIs that show/list ports for
a given server.


os-virtual-interfaces is not the same, and was never a proxy for neutron:
before 2.32 we never stored anything in the virtual_interfaces table in the
nova database for neutron, but now we do, because that's where we store the
VIF tags.


We have to keep os-attach-interface (attach/detach interface actions on 
a server).


Are you suggesting we drop os-virtual-interfaces and change the behavior 
of os-interfaces to use the nova virtual_interfaces table rather than 
proxying to neutron?


Note that with os-virtual-interfaces even if we start showing VIFs for 
neutron ports, any ports created before Newton won't be in there, which 
might be a bit confusing.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [nova] [neutron] get_all_bw_counters in the Ironic virt driver

2016-08-02 Thread Matt Riedemann

On 8/2/2016 6:22 AM, John Garbutt wrote:

On 29 July 2016 at 19:58, Sean Dague  wrote:

On 07/29/2016 02:29 PM, Jay Pipes wrote:

On 07/28/2016 09:02 PM, Devananda van der Veen wrote:

On 07/28/2016 05:40 PM, Brad Morgan wrote:

I'd like to solicit some advice about potentially implementing
get_all_bw_counters() in the Ironic virt driver.

https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L438
Example Implementation:
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L320


I'm ignoring the obvious question about how this data will actually be
collected/fetched as that's probably its own topic (involving
neutron), but I
have a few questions about the Nova -> Ironic interaction:

Nova
* Is get_all_bw_counters() going to stick around for the foreseeable
future? If
not, what (if anything) is the replacement?


I don't think Nova should be in the business of monitoring *any*
transient metrics at all.

There are many tools out there -- Nagios, collectd, HEKA, Snap, gnocchi,
monasca just to name a few -- that can do this work.

What action is taken if some threshold is reached is entirely
deployment-dependent and not something that Nova should care about. Nova
should just expose an API for other services to use to control the guest
instances under its management, nothing more.


More importantly... *only* the xenapi driver implements this, and it's not
exposed over the API. In reality that part of the virt driver layer
should probably be removed.


AFAIK, it is only exposed via notifications:
https://github.com/openstack/nova/blob/562a1fe9996189ddd9cc5c47ab070a498cfce258/nova/notifications/base.py#L276

I think its emitted here:
https://github.com/openstack/nova/blob/562a1fe9996189ddd9cc5c47ab070a498cfce258/nova/compute/manager.py#L5886

Agreed with not adding to the legacy, and not encouraging new users of this.

Long term, it feels like removing this from Nova is the correct thing
to do, but I do worry about not having an obvious direct replacement
yet, and a transition plan. (This also feeds back into being able to
list deleted instances in the API, and DB soft_delete). Its not
trivial.


Like jay said, there are better tools for collecting this than Nova.


I am out of touch with what folks should use to get this data and
build a billing system - Ceilometer + something?

It feels like that story has to be solid before we delete this
support. Maybe thats already the case, and I just don't know what that
is yet?

Thanks,
John




We also have the orphaned-rows issue with the database, which lxsli was
trying to fix but it got messy:


https://review.openstack.org/#/q/topic:newton-db+status:abandoned

But that will break the DB archive command if you're actually using these.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Steven Dake (stdake)


On 8/1/16, 8:38 AM, "Doug Hellmann"  wrote:

>Excerpts from Adrian Otto's message of 2016-08-01 15:14:48 +:
>> I am struggling to understand why we would want to remove projects from
>>our big tent at all, as long as they are being actively developed under
>>the principles of "four opens". It seems to me that working to
>>disqualify such projects sends an alarming signal to our ecosystem. The
>>reason we made the big tent to begin with was to set a tone of
>>inclusion. This whole discussion seems like a step backward. What
>>problem are we trying to solve, exactly?
>> 
>> If we want to have tags to signal team diversity, that's fine. We do
>>that now. But setting arbitrary requirements for big tent inclusion
>>based on who participates definitely sounds like a mistake.
>
>Membership in the big tent comes with benefits that have a real
>cost borne by the rest of the community. Space at PTG and summit
>forum events is probably the one that's easiest to quantify and to
>point to as something limited that we want to use as productively
>as possible. If 90% of the work of a project is being done by a
>single company or organization (our current definition for
>single-vendor), and that doesn't change after 18 months, then I
>would take that as a signal that the community isn't interested
>enough in the project to bear the associated costs.
>
>I'm interested in hearing other reasons that we should keep these
>sorts of projects, though. I'm not yet ready to propose the change
>to the policy myself.

Doug,

As a community, we need to carefully consider this action.  The costs to
OpenStack are high for single vendor projects but they do add value if
they behave with community in mind.  I don't think removal from Big Tent
is all that big of a deal as long as the projects can still participate in
the openstack namespace and use gerrit/ci and still be part of OpenStack
as you have previously stated.  My biggest concern is some projects are
really trying hard to increase their diversity while others are not trying
so much.  Unfortunately measuring intent objectively is difficult.  I
severely dislike exceptions, but perhaps projects could apply for
exceptions to this policy change if they are actively digging out of
single vendor by their activities.  Forgive me for singling out a single
project, but deployment is where I've spent the last 3 years of my life
and have an intimate understanding of what is happening in these
communities.

For example tripleo is single-vendor, but is doing all the right things to
dig out of single vendor by doing actual community building.  They aren't
just trying, but are trying *very* hard with their activities.  They have
the right intent but how to measure intent objectively?  That would be my
major concern.

There are more single-vendor projects than non-single-vendor projects (the
last time I looked which was several months ago) covering many areas, so
tripleo is just one example of many that may be doing the right things to
build more diverse affiliations.

I don't have any insight into the community building going on in various
communities outside of deployment - perhaps some of those teams PTLs could
weigh in on this thread?

All that said, the proposal for 18 months is super generous; nearly any
project can dig out of single vendor in an 18-month window if they
prioritize it.
prioritize moving to more diversity for a whole slew of reasons.  In my 20
years of training, team affiliation diversity is _more_ important than
starting from an empty repository and is a best practice.

To fix the problem, perhaps another tag is needed - something between
single-vendor and diverse-affiliation (spitball:
single-vendor-with-diverse-affiliation).  Single-vendor would have an
18-month window associated with it, while the new tag would guarantee big
tent as long as the objective 90% percentages were maintained.  The only
problem there is that this could put OpenStack back on an
incubation/integration track, which from my experience with founding Heat
was a serious hurdle for OpenStack in general and Ceilometer and Heat
specifically.

Regards,
-steve
>
>Doug
>
>> 
>> --
>> Adrian
>> 
>> > On Aug 1, 2016, at 5:11 AM, Sean Dague  wrote:
>> > 
>> >> On 07/31/2016 02:29 PM, Doug Hellmann wrote:
>> >> Excerpts from Steven Dake (stdake)'s message of 2016-07-31 18:17:28
>>+:
>> >>> Kevin,
>> >>> 
>> >>> Just assessing your numbers, the team:diverse-affiliation tag
>>covers what
>> >>> is required to maintain that tag.  It covers more then core
>>reviewers -
>> >>> also covers commits and reviews.
>> >>> 
>> >>> See:
>> >>> 
>>https://github.com/openstack/governance/blob/master/reference/tags/team_d
>>iv
>> >>> erse-affiliation.rst
>> >>> 
>> >>> 
>> >>> I can tell you from founding 3 projects with the
>>team:diverse-affiliation
>> >>> tag (Heat, Magnum, Kolla) team:diverse-affiliation is a very high
>>bar 

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Hayes, Graham
On 02/08/2016 14:37, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
>> On 29/07/2016 21:59, Doug Hellmann wrote:
>>> One of the outcomes of the discussion at the leadership training
>>> session earlier this year was the idea that the TC should set some
>>> community-wide goals for accomplishing specific technical tasks to
>>> get the projects synced up and moving in the same direction.
>>>
>>> After several drafts via etherpad and input from other TC and SWG
>>> members, I've prepared the change for the governance repo [1] and
>>> am ready to open this discussion up to the broader community. Please
>>> read through the patch carefully, especially the "goals/index.rst"
>>> document which tries to lay out the expectations for what makes a
>>> good goal for this purpose and for how teams are meant to approach
>>> working on these goals.
>>>
>>> I've also prepared two patches proposing specific goals for Ocata
>>> [2][3].  I've tried to keep these suggested goals for the first
>>> iteration limited to "finish what we've started" type items, so
>>> they are small and straightforward enough to be able to be completed.
>>> That will let us experiment with the process of managing goals this
>>> time around, and set us up for discussions that may need to happen
>>> at the Ocata summit about implementation.
>>>
>>> For future cycles, we can iterate on making the goals "harder", and
>>> collecting suggestions for goals from the community during the forum
>>> discussions that will happen at summits starting in Boston.
>>>
>>> Doug
>>>
>>> [1] https://review.openstack.org/349068 describe a process for managing 
>>> community-wide goals
>>> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
>>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
>>> libraries"
>>
>> I am confused. When I proposed my patch for doing something very similar
>> (Equal Chances for all projects is basically a multiple release goal) I
>> got the following rebuttals:
>>
>>  > "and it would be
>>  > a mistake to try to require that because the issue is almost always
>>  > lack of resources and not lack of desire. Volunteers to contribute
>>  > to the work that's needed will do more to help than a
>>  > one-size-fits-all policy."
>>
>>  > This isn't a thing that gets fixed with policy. It gets fixed with
>>  > people.
>>
>>  > I am reading through the thread, and it puzzles me that I see a lot
>>  > of right words about goals but not enough hints on who is going to
>>  > implement that.
>>
>>  > I think the right solutions here are human ones. Talk with people.
>>  > Figure out how you can help lighten their load so that they have
>>  > breathing space. I think hiding behind policy becomes a way to make
>>  > us more separate rather than engaging folks more directly.
>>
>>  > Coming at this with top down declarations of how things should work
>>  > not only ignores reality of the ecosystem and the current state of
>>  > these projects, but is also going about things backwards.
>>
>>  > This entirely ignores that these are all open source projects,
>>  > which are often very sparsely contributed to. If you have an issue
>>  > with a project or the interface it provides, then contribute to it.
>>  > Don't make grandiose resolutions trying to force things into what you
>>  > see as an ideal state, instead contribute to help fix the problems
>>  > you've identified.
>>
>> And yet, we are currently suggesting a system that will actively (in an
>> undefined way) penalise projects who do not comply with a different set
>> of proposals, done in a top down manner.
>>
>> I may be missing the point, but the two proposals seem to have
>> similarities - what is the difference?
>>
>> When I see comments like:
>>
>>  > Project teams who join the big tent sign up to the rights and
>>  > responsibilities that go along with membership. Those responsibilities
>>  > include taking some direction from the TC, including regarding work
>>  > they may not give the same priority as the broader community.
>>
>> It sounds like top down is OK, but previous ML thread / TC feedback
>> has been different.
>
> One difference is that these goals are not things like "the
> documentation team must include every service project in the
> installation guide" but rather would be phrased like "every project
> must provide an installation guide". The work is distributed to the
> vertical teams, and not focused in the horizontal teams.

Well, the wording was actually "the documentation team must provide a
way for all projects to be included in the documentation guide". The
work was on the horizontal teams to provide a method, and the vertical
teams to do the actual writing, as an example (that is actually
underway, so it is a bad example.)

A better example would be: OSC / Horizon has to provide a way for plugins
to set quotas - it is up to the project teams to actually implement the
code that sets them.

[openstack-dev] [ironic] weekly subteam status report

2016-08-02 Thread Ruby Loo
Hi,

We are yodelling to present this week's subteam report for Ironic. As
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 18 July 2016)
- Ironic: 216 bugs (+15) + 204 wishlist items (+3). 21 new (+8), 160 in
progress (+15), 0 critical, 31 high (+2) and 19 incomplete (-2)
- Inspector: 11 bugs (+2) + 20 wishlist items (-1). 0 new, 11 in progress
(+3), 0 critical, 2 high and 2 incomplete
- Nova bugs with Ironic tag: 11 (+2). 0 new, 0 critical, 1 high (+1)

Network isolation (Neutron/Ironic work) (jroll, TheJulia, devananda)

* trello:
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- working on potential FFE for nova things

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
- dtantsur has to catch up with recent gate changes, several jobs seem to
be added
- likely xenial jobs? some names changed too but those should be
replacements
- there was a refactoring that seems to have added a lot of jobs to
JJB, see the full list in the pad below
- discussed the number of jenkins jobs with openstack-infra team last week,
with some proposals on how to reorganize our jobs
- https://etherpad.openstack.org/p/ironic-jjb-matrix

Node search API (jroll, lintan, rloo)
=
* trello: https://trello.com/c/j35vJrSz/24-node-search-api
- probably not a priority with new scheduling/multi-compute plans

Node claims API (jroll, lintan)
===
* trello: https://trello.com/c/3ai8OQcA/25-node-claims-api
- probably not a priority with new scheduling/multi-compute plans

Multiple compute hosts (jroll, devananda)
=
* trello: https://trello.com/c/OXYBHStp/7-multiple-compute-hosts
- We have a simple path forward: https://review.openstack.org/#/c/348443/
- uses a hash ring to distribute nodes between compute services
- Nova team has agreed to let that in this cycle

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- Specification updated
- Code is being actively developed, although it requires volume connection
information.
- Volume connection information changes need reviews as they are required
for boot-from-volume
-
https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1526231

Agent top-level API promotion (dtantsur)

* trello:
https://trello.com/c/37YuKIB8/28-promote-agent-vendor-passthru-to-core-api
- the heartbeat implementation was approved (fails in gate though), now
rebasing everything else

Driver composition (dtantsur)
=
* trello: https://trello.com/c/fTya14y6/14-driver-composition
- blocked by ongoing discussion about interfaces defaults
- to a lesser extent blocked by agent API promotion

OpenStackClient plugin for ironic (thrash, dtantsur, rloo)
==
* trello: https://trello.com/c/ckqtq3kG/16-openstackclient-plugin-for-ironic
- port and chassis commands:
https://review.openstack.org/#/q/topic:bug/1526479
- baremetal create command: https://review.openstack.org/328955

Notifications (mariojv)
===
* trello: https://trello.com/c/MD8HNcwJ/17-notifications
- Update August 1st, 2016
- Patches for notifications base class and power state notification up
for review
- Mario still has to respond to comments on base class patch,
should have another patch set up today
- yuriyz has proposed a spec for notifications on provision state
changes

Keystone policy support (JayF, devananda)
=
* trello: https://trello.com/c/P5q3Es2z/15-keystone-policy-support
- code complete, fixing minor issues, hope to land today.

Active node creation (TheJulia)
===
* trello: https://trello.com/c/BwA91osf/22-active-node-creation
- Tempest test and test substrate changes need reviews
https://review.openstack.org/#/q/topic:bug/1526315+status:open

Serial console (yossy, hshiina, yuikotakadamori)

* trello: https://trello.com/c/nm3I8djr/20-serial-console
- console_utils: merged
- IPMITool driver interface: merged
- follow-on patches: need review
- https://review.openstack.org/#/c/335378/
- https://review.openstack.org/#/c/349400/
- install guide: needs review (being reviewed by primary contacts)
https://review.openstack.org/#/c/293872/

Inspector (dtantsur)
===
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- A proper grenade job is now running in our gate \o/
- switching to voting if/when it proves stable enough
- ironic-inspector-client to be released soon

.

Until 

[openstack-dev] pydotplus (taskflow) vs pydot-ng (fuel)

2016-08-02 Thread Thomas Goirand
Hi,

Fuel uses pydot-ng, and (at least) taskflow uses pydotplus. I believe
neither uses pydot because it's dead upstream.

Could we have a bit of consistency here, and have one or the other
component switch, so we could get rid of one of two packages that do
the same thing in downstream distros?
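
Both forks expose the same pydot-style API, so for most consumers the
switch should be close to a one-line import change, e.g. (a sketch,
assuming the common Dot/Node/Edge subset both forks keep compatible):

    import pydotplus as pydot  # or: import pydot_ng as pydot

    graph = pydot.Dot(graph_type='digraph')
    graph.add_node(pydot.Node('taskflow'))
    graph.add_node(pydot.Node('fuel'))
    graph.add_edge(pydot.Edge('taskflow', 'fuel'))
    print(graph.to_string())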

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][tempest] testing different configurations by exploiting the mutate attribute?

2016-08-02 Thread Markus Zoeller
IIUC, each gate testing job has a *fixed configuration* which will never
be changed when executing *all* tempest tests. If I need to test a
specific configuration, a new testing job is needed. As we have a
limited number of test nodes, this creates testing gaps, as we cannot
test all (reasonable/worthy) configuration permutations.

In [1] I asked for advice how I can test the Nova serial console feature
without creating a new test job and I'm wondering if we can exploit the
mutable attribute of config options [2] for testing purposes in general.
We already do something similar with the live-migration testing job [3].
This relies on restarting the services after changing the "nova.conf"
file however.
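
A minimal sketch of an option declared with that attribute (the option
name and group here are illustrative, not real nova options):

    from oslo_config import cfg

    CONF = cfg.CONF

    serial_console_opts = [
        # mutable=True means a SIGHUP-driven CONF.mutate_config_files()
        # call can pick up a changed value from nova.conf without a
        # full service restart.
        cfg.BoolOpt('enabled',
                    default=False,
                    mutable=True,
                    help='Illustrative serial console toggle.'),
    ]

    CONF.register_opts(serial_console_opts, group='serial_console')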

I'd like to discuss:
1) Do we see the need for more tempest tests with different
   configurations?
2) If ^ is true, what are the arguments against having a new testing
   job "gate-tempest-dsvm-full-mutate-configs" + writing tempest
   tests which send a SIGHUP to trigger the reload of a previously
   changed "nova.conf" file?

References:
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/100029.html
[2] http://docs.openstack.org/developer/oslo.config/mutable.html
[3]
https://github.com/openstack/nova/blob/master/nova/tests/live_migration/hooks/run_tests.sh


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-08-02 11:16:29 +0100:
> On Mon, 1 Aug 2016, James Bottomley wrote:
> 
> > Making no judgments about the particular exemplars here, I would just
> > like to point out that one reason why projects exist with very little
> > diversity is that they "just work".  Usually people get involved when
> > something doesn't work or they need something changed to work for them.
> > However, people do have a high tolerance for "works well enough"
> > meaning that a project can be functional, widely used and not
> > attracting diverse contributors.  A case in point for this type of
> > project in the non-openstack world would be openssl but there are many
> > others.
> 
> In a somewhat related point, the kinds of metrics we use in OpenStack to
> evaluate project health tend to have the unintended consequence of
> requiring the projects to always be growing and changing (i.e. churning)
> rather than trending towards stability and maturity.
> 
> I'd like to think that we can have some projects that can be called
> "done".
> 
> So we need to consider the side effects of the measurements we're
> taking and not let the letter of the new laws kill the spirit.
> 
> Yours in cliches,

Sure. I think I'd want to set things up to trigger a review, and not an
automatic "expulsion". The TC at the time would be able to recognize a
stable project and take into account whether the level of activity is
appropriate for the maturity and nature of the project.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
> On 29/07/2016 21:59, Doug Hellmann wrote:
> > One of the outcomes of the discussion at the leadership training
> > session earlier this year was the idea that the TC should set some
> > community-wide goals for accomplishing specific technical tasks to
> > get the projects synced up and moving in the same direction.
> >
> > After several drafts via etherpad and input from other TC and SWG
> > members, I've prepared the change for the governance repo [1] and
> > am ready to open this discussion up to the broader community. Please
> > read through the patch carefully, especially the "goals/index.rst"
> > document which tries to lay out the expectations for what makes a
> > good goal for this purpose and for how teams are meant to approach
> > working on these goals.
> >
> > I've also prepared two patches proposing specific goals for Ocata
> > [2][3].  I've tried to keep these suggested goals for the first
> > iteration limited to "finish what we've started" type items, so
> > they are small and straightforward enough to be able to be completed.
> > That will let us experiment with the process of managing goals this
> > time around, and set us up for discussions that may need to happen
> > at the Ocata summit about implementation.
> >
> > For future cycles, we can iterate on making the goals "harder", and
> > collecting suggestions for goals from the community during the forum
> > discussions that will happen at summits starting in Boston.
> >
> > Doug
> >
> > [1] https://review.openstack.org/349068 describe a process for managing 
> > community-wide goals
> > [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> > [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> > libraries"
> 
> I am confused. When I proposed my patch for doing something very similar
> (Equal Chances for all projects is basically a multiple release goal) I
> got the following rebuttals:
> 
>  > "and it would be
>  > a mistake to try to require that because the issue is almost always
>  > lack of resources and not lack of desire. Volunteers to contribute
>  > to the work that's needed will do more to help than a
>  > one-size-fits-all policy."
> 
>  > This isn't a thing that gets fixed with policy. It gets fixed with
>  > people.
> 
>  > I am reading through the thread, and it puzzles me that I see a lot
>  > of right words about goals but not enough hints on who is going to
>  > implement that.
> 
>  > I think the right solutions here are human ones. Talk with people.
>  > Figure out how you can help lighten their load so that they have
>  > breathing space. I think hiding behind policy becomes a way to make
>  > us more separate rather than engaging folks more directly.
> 
>  > Coming at this with top down declarations of how things should work
>  > not only ignores reality of the ecosystem and the current state of
>  > these projects, but is also going about things backwards.
> 
>  > This entirely ignores that these are all open source projects,
>  > which are often very sparsely contributed to. If you have an issue
>  > with a project or the interface it provides, then contribute to it.
>  > Don't make grandiose resolutions trying to force things into what you
>  > see as an ideal state, instead contribute to help fix the problems
>  > you've identified.
> 
> And yet, we are currently suggesting a system that will actively (in an
> undefined way) penalise projects who do not comply with a different set
> of proposals, done in a top down manner.
> 
> I may be missing the point, but the two proposals seem to have 
> similarities - what is the difference?
> 
> When I see comments like:
> 
>  > Project teams who join the big tent sign up to the rights and
>  > responsibilities that go along with membership. Those responsibilities
>  > include taking some direction from the TC, including regarding work
>  > they may not give the same priority as the broader community.
> 
> It sounds like top-down is OK, but previous ML thread / TC feedback
> has been different.

One difference is that these goals are not phrased like "the
documentation team must include every service project in the
installation guide" but rather like "every project must provide an
installation guide". The work is distributed across the vertical
project teams rather than concentrated in the horizontal teams.

Doug

> 
> - Graham
> 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Shamail's message of 2016-08-01 22:37:28 -0500:
> Thanks Doug,
> 
> > On Aug 1, 2016, at 10:44 AM, Doug Hellmann  wrote:
> > 
> > Excerpts from Shamail Tahir's message of 2016-08-01 09:49:35 -0500:
> >>> On Mon, Aug 1, 2016 at 7:58 AM, Doug Hellmann  
> >>> wrote:
> >>> 
> >>> Excerpts from Sean Dague's message of 2016-08-01 08:33:06 -0400:
> > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> > One of the outcomes of the discussion at the leadership training
> > session earlier this year was the idea that the TC should set some
> > community-wide goals for accomplishing specific technical tasks to
> > get the projects synced up and moving in the same direction.
> > 
> > After several drafts via etherpad and input from other TC and SWG
> > members, I've prepared the change for the governance repo [1] and
> > am ready to open this discussion up to the broader community. Please
> > read through the patch carefully, especially the "goals/index.rst"
> > document which tries to lay out the expectations for what makes a
> > good goal for this purpose and for how teams are meant to approach
> > working on these goals.
> > 
> > I've also prepared two patches proposing specific goals for Ocata
> > [2][3].  I've tried to keep these suggested goals for the first
> > iteration limited to "finish what we've started" type items, so
> > they are small and straightforward enough to complete.
> > That will let us experiment with the process of managing goals this
> > time around, and set us up for discussions that may need to happen
> > at the Ocata summit about implementation.
> > 
> > For future cycles, we can iterate on making the goals "harder", and
> > collecting suggestions for goals from the community during the forum
> > discussions that will happen at summits starting in Boston.
> > 
> > Doug
> > 
> > [1] https://review.openstack.org/349068 describe a process for
> >>> managing community-wide goals
> > [2] https://review.openstack.org/349069 add ocata goal "support
> >>> python 3.5"
> > [3] https://review.openstack.org/349070 add ocata goal "switch to
> >>> oslo libraries"
>  
>  I like the direction this is headed. And I think for the test items, it
>  works pretty well.
>  
>  I'm trying to think about how we'd use a model like this to support
>  something a little more abstract, such as making upgrades easier, where
>  we've got a few things that we know get in the way (policy in files,
>  rootwrap rules, paste ini changes), as well as validation and
>  configuration changes, and what it looks like for persistently important
>  items that are going to take more than a cycle to get through.
> >>> 
> >>> If we think the goal can be completed in a single cycle, then those
> >>> specific items can just be used to define "done" ("all policy
> >>> definitions have defaults in code and the service works without a policy
> >>> configuration file" or whatever). If the goal cannot be completed in a
> >>> single cycle, then it would need to be broken up into phases.
> >>> 
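
To make the policy example above concrete, here is a minimal sketch of
"policy defaults in code" using oslo.policy; the service name, rule
names, and check strings are invented for illustration and do not come
from any real project:

    # Minimal sketch: register policy defaults in code with oslo.policy.
    from oslo_config import cfg
    from oslo_policy import policy

    _rules = [
        # A reusable base rule shared by several APIs.
        policy.RuleDefault(
            'admin_or_owner',
            'is_admin:True or project_id:%(project_id)s',
            description='Admins, or members of the owning project.'),
        # A per-API rule that refers to the base rule.
        policy.RuleDefault(
            'example:get_widget',
            'rule:admin_or_owner',
            description='Read a widget.'),
    ]

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(_rules)

    # With defaults registered in code, enforcement works even when no
    # policy file exists on disk; a policy.json/policy.yaml only needs
    # to contain the operator's overrides, e.g.:
    # enforcer.enforce('example:get_widget', {'project_id': 'p1'},
    #                  {'project_id': 'p1', 'roles': ['member']})
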
>  
>  Definitely seems worth giving it a shot on the current set of items and
>  seeing how it fleshes out.
>  
>  My only concern at this point is that it seems like we're building nested
>  data structures that people are going to want to parse into some kind of
>  visualization in RST, which is a suboptimal parsing format. If we know
>  that people want to parse this in advance, yamling it up might be in
>  order. Because this mostly looks like it would reduce to one of those
>  green/yellow/red checkerboards by project and task.
> >>> 
> >>> That's a good idea. How about if I commit to translating what we end
> >>> up with to YAML during Ocata, but we evolve the first version using
> >>> RST, since that's simpler to review for now?
> >> 
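
As a rough illustration of the YAML Sean is asking for, a structure
like the following would reduce directly to a green/yellow/red board;
every field name here is invented for the sketch, not a proposed
schema:

    # Hypothetical goal-tracking structure (illustrative only).
    goal: support-python-3.5
    release: ocata
    projects:
      nova:
        status: green    # completed
      neutron:
        status: yellow   # in progress
      example-service:
        status: red      # not started
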
> >> We have created a tracker file[1][2] for user stories (minor changes
> >> pending based on feedback) in the Product WG repo.  We are currently
> >> working with the infra team to get a visualization tool deployed that shows
> >> the status for each artifact and provides links so that people can get more
> >> details as necessary.  Could something similar be (re)used here?
> > 
> > Possibly. I don't want to tie the governance part of the process
> > too tightly to any project management tools, since those tend to
> > change, but if the project-specific tracking artifacts exist in
> > that form then linking to them would be appropriate.
> The purpose of the tracking is to link existing project-level artifacts, 
> including cross-project specs and service-level specs/blueprints.  Once the 
> tool is deployed, we can see if it fits this need.
> > 
> >> 
> >> I also have a general question about whether goals could be documented as
> >> 

[openstack-dev] [puppet] Temporarily unstable Fuel deployment test jobs

2016-08-02 Thread Vladimir Kozhukalov
Dear colleagues,

Please be informed that we are going to merge some patches to the Fuel
repositories on 08/02/2016 to switch from Mitaka to Newton packages. These
patches will likely destabilize the Fuel deployment test jobs for several
days. We are going to fix all major issues by 08/08/2016 and get these
jobs stable again.

Thanks.


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

