[openstack-dev] [meghdwar] use case for meghdwar API irc call agenda

2016-08-02 Thread prakash RAMCHANDRAN
Hi all,
Please join the meeting for the API and code decisions.
Meeting: Wednesday 3rd August (14:00-15:00 UTC) [7-8 AM PDT]

IRC: #openstack-meghdwar
We will follow up on the same points as last meeting, with updates. Since we
had IRC issues last week, we plan to hold this meeting rather than skip it, and
discuss the following agenda based on reviews and updates later in the week.
Summary of the last IRC meeting (July 27th) and updates for the Aug 3rd meeting:
1. The last meeting's agenda and actions taken or pending
We looked at the Cloudlet failure on Ubuntu 16.04 and filed a request with the
Cloudlet upstream developers for Ubuntu 16.04 library support for Fabric (FUSE)
support in OEC
(http://forum.openedgecomputing.org/t/fab-module-support-for-ubuntu-16-04/92).
The summary from the previous week on Senlin was that it can add nodes to a
cluster and scale, but not distribute. However, if we can set up a YAML profile
and policies, we can distribute cloudlet nodes with a special meghdwar driver
and may review this at the Ocata summit, once we plan it through etherpad
entries and engage with the Senlin team (TBD).
2. Meghdwar Gateway API discussion based on the use case
The focus is on which APIs are needed for the minimum use case of two cloudlets
on two edges, each running one app, and how to move one of the apps from the
source edge gateway to the destination edge gateway on compute nodes through
those gateways.
We reviewed the other catalog modules (application-catalog-ui, murano, and
murano-agent to be tested) on Rackspace3.
3. What other modules are needed in OpenStack 'meghdwar'?
   a. Cloudlet (existing)
   b. Cloudlet Client (existing) - to discuss the Binder option for two
      cloudlets instead of clusters
   c. Cloudlet Gateway Management (Cluster Management)
   d. Python Cloudlet Cluster Management - to review murano-agent and API
      differences
   e. Cloudlet Agent - to review application-catalog-ui with images,
      orchestration, and Murano with Applications and Components
   f. Cloudlet Horizon plug-in for supporting d & e as a GUI instead of a CLI
4. How do we go about prioritization?
Consider two cloudlet Binders instead of Clusters.
5. Any other missing items.
Should the directory structure start from templates in OpenStack or from code
upstream?
6. Plan for Barcelona Summit (TBD)
If any of our developers have comments to discuss on the topics here or others,
feel free to add them at the end, and we will update the wiki as we continue
our efforts to freeze the APIs and architecture to support edge services.

Thanks,
pramchan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Tom Fifield

On 03/08/16 03:44, Joshua Harlow wrote:

Just a general pet peeve of mine, but can we not have #rdo be the
tripleo user support channel, and instead push people more toward #tripleo,
#tripleo-users, or #openstack? I'd rather not have #rdo be the
way users get tripleo support, because at that point we might as well
call tripleo a Red Hat product and just move the repos to somewhere
inside Red Hat.


To comment only generally on the support issue, in Red Hat's defense they 
(hat tip to Steve Gordon, Rich Bowen and others I've likely forgotten) are 
doing a good job contributing to Ask.OpenStack.org, including in areas 
outside of RDO.




-Josh

Dan Sneddon wrote:

Speaking for myself and the other TripleO developers at Red Hat, we do
try our best to answer user questions in #rdo. You will also find some
production users hanging out there. The best times to ask questions are
during East Coast business hours, or during business hours of GMT+1
(we have a large development office in Brno, CZ with engineers that
work on TripleO). There is also an RDO-specific mailing list available
here: https://www.redhat.com/mailman/listinfo/rdo-list


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-operators] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Saha
Great! This is much needed. We will be glad to help in any way possible.

Regards,
Amit

On Wed, Aug 3, 2016 at 12:02 AM, Ihar Hrachyshka 
wrote:

> Amit Kumar Saha (amisaha)  wrote:
>
> Hi,
>>
>> We would like to introduce the community to a new Python based project
>> called DON – Diagnosing OpenStack Networking. More details about the
>> project can be found at https://github.com/openstack/python-don.
>>
>> DON, written primarily in Python, and available as a dashboard in
>> OpenStack Horizon, Liberty release, is a network analysis and diagnostic
>> system and provides a completely automated service for verifying and
>> diagnosing the networking functionality provided by OVS. The genesis of
>> this idea was presented at the Vancouver summit, May 2015. Hopefully the
>> community will find this project interesting and will give us valuable
>> feedback.
>>
>
> Amit,
>
> neutron team currently works on defining a new diagnostics API:
> https://review.openstack.org/#/c/308973/
>
> Please work with the community on API definition, and later, on backend
> specific implementation of desired checks.
>
> Ihar
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Smugmug coupon code: pwzn27r9CVvSg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Saha
Great! This is much needed. We will be glad to help in any way possible.

Regards,
Amit

On Wed, Aug 3, 2016 at 12:02 AM, Ihar Hrachyshka 
wrote:

> Amit Kumar Saha (amisaha)  wrote:
>
> Hi,
>>
>> We would like to introduce the community to a new Python based project
>> called DON – Diagnosing OpenStack Networking. More details about the
>> project can be found at https://github.com/openstack/python-don.
>>
>> DON, written primarily in Python, and available as a dashboard in
>> OpenStack Horizon, Liberty release, is a network analysis and diagnostic
>> system and provides a completely automated service for verifying and
>> diagnosing the networking functionality provided by OVS. The genesis of
>> this idea was presented at the Vancouver summit, May 2015. Hopefully the
>> community will find this project interesting and will give us valuable
>> feedback.
>>
>
> Amit,
>
> neutron team currently works on defining a new diagnostics API:
> https://review.openstack.org/#/c/308973/
>
> Please work with the community on API definition, and later, on backend
> specific implementation of desired checks.
>
> Ihar
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Smugmug coupon code: pwzn27r9CVvSg
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread zhi
I still have a question about the fg device with a private IP address.

In DVR mode, there is an external IP address on the fg device, because we need
to figure out the default route.

If the fg device has a private IP address, how do we figure out the
default route in the fip namespace?

The default route is not reachable via the private IP address, is it?


Hopes for your reply. ;-)


2016-08-03 6:38 GMT+08:00 Carl Baldwin :

>
>
> On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:
>
>> hi john and brian
>>    thanks for your information. If we get patch [1] and patch [2] merged,
>> then fg can allocate a private IP address. After that, we need to consider
>> the floating IP dataplane. In the current DVR implementation, fg is used for
>> reachability testing of floating IPs. Now, with the subnet types BP, fg has
>> a different subnet than the floating IP address. From the fg subnet
>> gateway's point of view, to reach a floating IP it needs a route entry whose
>> destination is the floating IP address and whose nexthop is fg's IP address,
>> and this route entry needs to be populated when a floating IP is created and
>> removed when the floating IP is disassociated. Any comments?
>>
>
> The fg device will still do proxy arp for the floating ip to other devices
> on the external network. This will be part of our testing. The upstream
> router should still have an on-link route on the network to the floating ip
> subnet. IOW, you shouldn't replace the floating ip subnet with the private
> fg subnet on the upstream router. You should add the new subnet to the
> already existing ones and the router should have an additional IP address
> on the new subnet to be used as the gateway address for north-bound traffic.
>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to create a image using http img file

2016-08-02 Thread Fei Long Wang
As the error says, you need to set the disk_format and container_format.
I haven't dug into the code, but I think you should try to set the
container_format and disk_format when you create the image, like this:

image = self.glance.images.create(name="myNewImage",
                                  container_format='bare',
                                  disk_format='raw')
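
For reference, a minimal end-to-end sketch combining that with the
add_location call from your snippet (same client object and placeholder URL
as in your original code):

# Sketch only: create the image with both formats set, then attach the
# external location; the URL is the placeholder from the original post.
image = self.glance.images.create(name="myNewImage",
                                  container_format='bare',
                                  disk_format='raw')
self.glance.images.add_location(image.id, 'http:///ubuntu1604.qcow2', {})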

On 03/08/16 13:27, kris...@itri.org.tw wrote:
>
>
>
> refer to: glance client Python API v2
>
> http://docs.openstack.org/developer/python-glanceclient/ref/v2/images.html
>
>  
>
> add_location(image_id, url, metadata)
>
> Add a new location entry to an image’s list of locations.
>
> It is an error to add a URL that is already present in the list of
> locations.
>
> Parameters:
>
> · image_id – ID of image to which the location is to be added.
> · url – URL of the location to add.
> · metadata – Metadata associated with the location.
>
> Returns:
>
> The updated image
>
> ---
>
> #--source code--
>
> from glanceclient.v2.client import Client
>
> ……
>
> url = 'http:///ubuntu1604.qcow2'
>
> image = self.glance.images.create(name="myNewImage")
>
> self.glance.images.add_location(image.id, url, {})
>
> #--end
>
> ---
>
>  
>
> I am sure that images.create works.
>
> I got image.id ‘be416e4a-f266-4ad5-a62f-979242d23633’.
>
> I don’t know which data should be assigned to metadata.
>
> Then I got:
>
>  
>
> self.glance.images.add_location(image.id, url, {})
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py",
> line 311, in add_location
>
> self._send_image_update_request(image_id, add_patch)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py",
> line 296, in _send_image_update_request
>
> self.http_client.patch(url, headers=hdrs, data=json.dumps(patch_body))
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 284, in patch
>
> return self._request('PATCH', url, **kwargs)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 267, in _request
>
> resp, body_iter = self._handle_response(resp)
>
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py",
> line 83, in _handle_response
>
> raise exc.from_response(resp, resp.content)
>
> HTTPBadRequest: 400 Bad Request: Properties disk_format,
> container_format must be set prior to saving data. (HTTP 400)
>
>  
>
> Best Regards,
>
> Kristen
>
>  
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內
> 容,並請銷毀此信件。 This email may contain confidential information.
> Please do not use or disclose it in any way and delete it if you are
> not the intended recipient.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to create a image using http img file

2016-08-02 Thread kristen


refer to: glance client Python API v2
http://docs.openstack.org/developer/python-glanceclient/ref/v2/images.html

add_location(image_id, url, metadata)
Add a new location entry to an image’s list of locations.
It is an error to add a URL that is already present in the list of locations.
Parameters:

· image_id – ID of image to which the location is to be added.
· url – URL of the location to add.
· metadata – Metadata associated with the location.

Returns:

The updated image

---
#--source code--
from glanceclient.v2.client import Client
……
url = 'http:///ubuntu1604.qcow2'
image = self.glance.images.create(name="myNewImage")
self.glance.images.add_location(image.id, url, {})
#--end
---

I am sure that images.create works.
I got image.id ‘be416e4a-f266-4ad5-a62f-979242d23633’.
I don’t know which data should be assigned to metadata.
Then I got:

self.glance.images.add_location(image.id, url, {})
  File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 311, 
in add_location
self._send_image_update_request(image_id, add_patch)
  File "/usr/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 296, 
in _send_image_update_request
self.http_client.patch(url, headers=hdrs, data=json.dumps(patch_body))
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
284, in patch
return self._request('PATCH', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
267, in _request
resp, body_iter = self._handle_response(resp)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 83, 
in _handle_response
raise exc.from_response(resp, resp.content)
HTTPBadRequest: 400 Bad Request: Properties disk_format, container_format must 
be set prior to saving data. (HTTP 400)

Best Regards,
Kristen



--
本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain 
confidential information. Please do not use or disclose it in any way and 
delete it if you are not the intended recipient.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Carl Baldwin
On Aug 2, 2016 6:52 PM, "Kevin Benton"  wrote:
>
> >It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it.
>
> The reason I thought it was relevant to bring up is because it's going to
be difficult to actually fix it. If any of the following lines fail, none
of the IPAM rollback code will be called:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1195-L1215

Agreed. Just saying why I didn't bring it up in this context to start with.

> If we decide to just fix the exception handler inside of ipam itself for
rollbacks (which would be a quick fix), I would be okay with that but we
need to be clear that any driver depending on that alone for state
synchronization is in a very dangerous position of becoming inconsistent
(i.e. I want something to point people to if we get bug reports saying that
the delete call wasn't made when the port failed to create).

I think we could fix it in steps. I do think that both issues are worth
fixing and will pursue them both. I'll file bugs.

Carl

> On Tue, Aug 2, 2016 at 3:27 PM, Carl Baldwin  wrote:
>>
>> On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:
>>>
>>> >Given that it shares the session, it wouldn't have to do anything.
But, again, it wouldn't behave like an external driver.
>>>
>>> Why not? The only additional thing an external driver would be doing at
this step is calling an external system. Any local accounting in the DB
that driver would do would automatically be rolled back just like the
in-tree system.
>>
>> See below.
>>>
>>> Keep in mind that anything else can fail later in the transaction
outside of IPAM (e.g. ML2 driver precommit validation) and none of this
IPAM rollback logic will be called. Maybe the right answer is to get rid of
the IPAM rollback logic completely because it gives the wrong impression
that it is going to be called on all failures to commit when it's really
only called in failures inside of IPAM's module. Essentially every instance
of _safe_rollback in [1] is in an exception handler that isn't triggered if
there are exceptions anywhere in the core plugin after the initial base DB
calls.
>>
>> I noticed that there are failures which will not call rollback. I
started thinking about it when I wrote this note [1] (I did realize that
rollback might not even happen with the flush under the likes of galera and
that there are other failures that can happen outside of this and would
fail to rollback). I didn't bring it up here because I thought it was a
separate issue.
>>
>> It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it. If we eliminate the idea, it isn't
clear to me yet how drivers will handle leaked allocations (or
deallocations). We could talk about some alternatives. I've got a few
knocking around the back of my head but nothing that seems like a complete
picture yet.
>>
>> If one only cares about the in-tree driver which doesn't need an
explicit rollback call then one probably wouldn't care about having one at
all. This is the kind of thing I'd like to avoid by having the in-tree
driver work more like any other external driver. When the in-tree driver
works differently than others because it has a closer relationship to the
rest of the system, we quickly forget the needs of other drivers.
>>
>> Carl
>>
>> [1]
https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793
>>
>>> 1.
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>>>
>>>
>>> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:



 On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>
> >We still want the exception to rollback the entire API operation and
stopping it with a nested operation I think would mess that up.
>
> Well I think you would want to start a nested transaction, capture
the duplicate, call the ipam delete methods, then throw a retryrequest. The
exception will still trigger a rollback of the entire operation.


 This is kind of where I was headed when I decided to solicit some
feedback. It is a possibility that should still be considered.

>
> >Second, I've been throwing around the idea of not sharing the
session with the IPAM driver.
>
> If the IPAM driver does not have access to the session, it can't see
any of the uncommitted data. Would that be a problem? In particular,
doesn't the IPAM driver's DB table have foreign key constraints with the
data waiting to be committed in the other session? I'm hesitant to take
this approach because it means other (if the in-tree doesn't already) IPAM
drivers cannot have any relational integrity with the objects in 

Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Kevin Benton
>It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it.

The reason I thought it was relevant to bring up is because it's going to
be difficult to actually fix it. If any of the following lines fail, none
of the IPAM rollback code will be called:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1195-L1215

If we decide to just fix the exception handler inside of ipam itself for
rollbacks (which would be a quick fix), I would be okay with that but we
need to be clear that any driver depending on that alone for state
synchronization is in a very dangerous position of becoming inconsistent
(i.e. I want something to point people to if we get bug reports saying that
the delete call wasn't made when the port failed to create).
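
To make that limitation concrete, here is a rough, purely illustrative sketch
(not neutron's actual code; the backend, driver, and helper names are
hypothetical) of a rollback handler local to the IPAM module: it only fires
for failures raised inside that method, so an error later in the core plugin
(e.g. ML2 precommit) never reaches it and an external backend can still leak
allocations:

# Illustrative only -- not neutron's real implementation; helper names
# (_build_ip_requests, _safe_deallocate) are hypothetical.
from oslo_utils import excutils

class PluggableIpamBackend(object):
    def allocate_ips_for_port(self, context, port):
        allocated = []
        try:
            for request in self._build_ip_requests(context, port):
                allocated.append(self.ipam_driver.allocate(request))
            return allocated
        except Exception:
            with excutils.save_and_reraise_exception():
                # Only failures raised inside this method reach this handler;
                # an ML2 precommit error later in the same transaction never
                # will, so an external IPAM backend can still leak state.
                for ip in allocated:
                    self._safe_deallocate(context, ip)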

On Tue, Aug 2, 2016 at 3:27 PM, Carl Baldwin  wrote:

> On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:
>
>> >Given that it shares the session, it wouldn't have to do anything. But,
>> again, it wouldn't behave like an external driver.
>>
> Why not? The only additional thing an external driver would be doing at
>> this step is calling an external system. Any local accounting in the DB
>> that driver would do would automatically be rolled back just like the
>> in-tree system.
>>
> See below.
>
>> Keep in mind that anything else can fail later in the transaction outside
>> of IPAM (e.g. ML2 driver precommit validation) and none of this IPAM
>> rollback logic will be called. Maybe the right answer is to get rid of the
>> IPAM rollback logic completely because it gives the wrong impression that
>> it is going to be called on all failures to commit when it's really only
>> called in failures inside of IPAM's module. Essentially every instance of
>> _safe_rollback in [1] is in an exception handler that isn't triggered if
>> there are exceptions anywhere in the core plugin after the initial base DB
>> calls.
>>
> I noticed that there are failures which will not call rollback. I started
> thinking about it when I wrote this note [1] (I did realize that rollback
> might not even happen with the flush under the likes of galera and that
> there are other failures that can happen outside of this and would fail to
> rollback). I didn't bring it up here because I thought it was a separate
> issue.
>
> It might be the wrong impression, but it was already given and there are
> drivers which have been written under it. That's why I tend toward fixing
> rollback instead of eliminating it. If we eliminate the idea, it isn't
> clear to me yet how drivers will handle leaked allocations (or
> deallocations). We could talk about some alternatives. I've got a few
> knocking around the back of my head but nothing that seems like a complete
> picture yet.
>
> If one only cares about the in-tree driver which doesn't need an explicit
> rollback call then one probably wouldn't care about having one at all. This
> is the kind of thing I'd like to avoid by having the in-tree driver work
> more like any other external driver. When the in-tree driver works
> differently than others because it has a closer relationship to the rest of
> the system, we quickly forget the needs of other drivers.
>
> Carl
>
> [1]
> https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793
>
> 1.
>> https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>>
>> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:
>>
>>>
>>>
>>> On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>>>
 >We still want the exception to rollback the entire API operation and
 stopping it with a nested operation I think would mess that up.

 Well I think you would want to start a nested transaction, capture the
 duplicate, call the ipam delete methods, then throw a retryrequest. The
 exception will still trigger a rollback of the entire operation.

>>>
>>> This is kind of where I was headed when I decided to solicit some
>>> feedback. It is a possibility that should still be considered.
>>>
>>>
 >Second, I've been throwing around the idea of not sharing the session
 with the IPAM driver.

 If the IPAM driver does not have access to the session, it can't see
 any of the uncommitted data. Would that be a problem? In particular,
 doesn't the IPAM driver's DB table have foreign key constraints with the
 data waiting to be committed in the other session? I'm hesitant to take
 this approach because it means other (if the in-tree doesn't already) IPAM
 drivers cannot have any relational integrity with the objects in question.

>>>
>>> The in-tree driver doesn't have any FK constraints back to the neutron
>>> db schema for IPAM [1]. I don't think that would make sense since it is
>>> supposed to work like an external 

Re: [Openstack] Disable compute node from accepting new VMs?

2016-08-02 Thread David Medberry
nova service-disable $SHORTNAME nova-compute --reason NO_MORE_SCHEDULING_HERE

will prevent new VMs from being scheduled on the host but doesn't do anything
with existing ones.
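
For example (the host name here is just illustrative), disabling a node and
then confirming the scheduler sees it as disabled:

nova service-disable compute-01 nova-compute --reason NO_MORE_SCHEDULING_HERE
nova service-list --binary nova-compute --host compute-01

The service should then show a Status of "disabled" with the reason recorded;
instances already running on the host keep running.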

On Tue, Aug 2, 2016 at 3:01 PM, Ken D'Ambrosio  wrote:

> Hi, all.  Trying to figure out how to disable a compute node from getting
> new VMs scheduled for it on my Liberty cloud.  I did see the "nova
> host-update --maintenance" command, but (as noted elsewhere) it seems not
> to work for KVM-based VMs.  Is there a way to accomplish what I'm looking
> to do?  Note that I'm not looking to take the host down, just take it out
> of the pool of compute hosts ready to accept new VMs.
>
> Thanks!
>
> -Ken
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [horizon] Angular panel enable/disable not overridable in local_settings

2016-08-02 Thread Richard Jones
On 3 August 2016 at 00:32, Rob Cresswell 
wrote:

> Hi all,
>
> So we seem to be adopting a pattern of using UPDATE_HORIZON_CONFIG in the
> enabled files to add a legacy/angular toggle to the settings. I don't like
> this, because in settings.py the enabled files are processed *after*
> local_settings.py imports, meaning the angular panel will always be
> enabled, and would require a local/enabled file change to disable it.
>
> My suggestion would be:
>
> - Remove current UPDATE_HORIZON_CONFIG change in the swift panel and
> images panel patch
> - Add equivalents ('angular') to the settings.py HORIZON_CONFIG dict, and
> then the 'legacy' version to the test settings.
>
> I think that should run UTs as expected, and allow the legacy/angular
> panel to be toggled via local_settings.
>
> Was there a reason we chose to use UPDATE_HORIZON_CONFIG, rather than just
> updating the dict in settings.py? I couldn't recall a reason, and the
> original patch ( https://review.openstack.org/#/c/293168/ ) doesn't seem
> to indicate why.
>

It was an attempt to keep the change more self-contained, and since
UPDATE_HORIZON_CONFIG existed, it seemed reasonable to use it. It meant
that all the configuration regarding the visibility of the panel was in one
place, and since it's expected that deployers edit enabled files, I guess
your concern stated above didn't come into it.

I'm ambivalent about the change you propose, would be OK going either way
:-)
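
For anyone following along, a rough sketch of what Rob's proposal would look
like in practice (the 'images_panel' key and the 'angular'/'legacy' values are
assumed here for illustration; the exact keys come from the individual panel
patches):

# openstack_dashboard/settings.py -- default shipped with Horizon (sketch)
HORIZON_CONFIG = {
    # ... existing keys ...
    'images_panel': 'angular',
}

# local_settings.py -- deployer override, which would now take effect
# because the enabled files no longer overwrite it (sketch)
HORIZON_CONFIG['images_panel'] = 'legacy'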


 Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread Carl Baldwin
On Tue, Aug 2, 2016 at 6:15 AM, huangdenghui  wrote:

> hi john and brian
>    thanks for your information. If we get patch [1] and patch [2] merged,
> then fg can allocate a private IP address. After that, we need to consider
> the floating IP dataplane. In the current DVR implementation, fg is used for
> reachability testing of floating IPs. Now, with the subnet types BP, fg has a
> different subnet than the floating IP address. From the fg subnet gateway's
> point of view, to reach a floating IP it needs a route entry whose
> destination is the floating IP address and whose nexthop is fg's IP address,
> and this route entry needs to be populated when a floating IP is created and
> removed when the floating IP is disassociated. Any comments?
>

The fg device will still do proxy arp for the floating ip to other devices
on the external network. This will be part of our testing. The upstream
router should still have an on-link route on the network to the floating ip
subnet. IOW, you shouldn't replace the floating ip subnet with the private
fg subnet on the upstream router. You should add the new subnet to the
already existing ones and the router should have an additional IP address
on the new subnet to be used as the gateway address for north-bound traffic.
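
As a purely illustrative example (documentation addresses, not from any real
deployment), the upstream router's external interface would end up carrying
both subnets, with the floating IP range still reachable on-link:

# Existing gateway address on the floating IP subnet (kept as-is)
ip addr add 203.0.113.1/24 dev eth1
# Additional gateway address on the new private fg subnet
ip addr add 198.51.100.1/24 dev eth1
# The connected route 203.0.113.0/24 dev eth1 (created automatically with the
# first address) is the on-link route to the floating IPs; the fg device still
# answers for them with proxy ARP even though its own address is now on the
# private subnet.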

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-02 Thread Carl Baldwin
On Tue, Aug 2, 2016 at 2:50 AM, Kevin Benton  wrote:

> >Given that it shares the session, it wouldn't have to do anything. But,
> again, it wouldn't behave like an external driver.
>
Why not? The only additional thing an external driver would be doing at
> this step is calling an external system. Any local accounting in the DB
> that driver would do would automatically be rolled back just like the
> in-tree system.
>
See below.

> Keep in mind that anything else can fail later in the transaction outside
> of IPAM (e.g. ML2 driver precommit validation) and none of this IPAM
> rollback logic will be called. Maybe the right answer is to get rid of the
> IPAM rollback logic completely because it gives the wrong impression that
> it is going to be called on all failures to commit when it's really only
> called in failures inside of IPAM's module. Essentially every instance of
> _safe_rollback in [1] is in an exception handler that isn't triggered if
> there are exceptions anywhere in the core plugin after the initial base DB
> calls.
>
I noticed that there are failures which will not call rollback. I started
thinking about it when I wrote this note [1] (I did realize that rollback
might not even happen with the flush under the likes of galera and that
there are other failures that can happen outside of this and would fail to
rollback). I didn't bring it up here because I thought it was a separate
issue.

It might be the wrong impression, but it was already given and there are
drivers which have been written under it. That's why I tend toward fixing
rollback instead of eliminating it. If we eliminate the idea, it isn't
clear to me yet how drivers will handle leaked allocations (or
deallocations). We could talk about some alternatives. I've got a few
knocking around the back of my head but nothing that seems like a complete
picture yet.

If one only cares about the in-tree driver which doesn't need an explicit
rollback call then one probably wouldn't care about having one at all. This
is the kind of thing I'd like to avoid by having the in-tree driver work
more like any other external driver. When the in-tree driver works
differently than others because it has a closer relationship to the rest of
the system, we quickly forget the needs of other drivers.

Carl

[1]
https://review.openstack.org/#/c/348956/1/neutron/tests/unit/extensions/test_segment.py@793

1.
> https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py
>
> On Aug 1, 2016 18:11, "Carl Baldwin"  wrote:
>
>>
>>
>> On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:
>>
>>> >We still want the exception to rollback the entire API operation and
>>> stopping it with a nested operation I think would mess that up.
>>>
>>> Well I think you would want to start a nested transaction, capture the
>>> duplicate, call the ipam delete methods, then throw a retryrequest. The
>>> exception will still trigger a rollback of the entire operation.
>>>
>>
>> This is kind of where I was headed when I decided to solicit some
>> feedback. It is a possibility that should still be considered.
>>
>>
>>> >Second, I've been throwing around the idea of not sharing the session
>>> with the IPAM driver.
>>>
>>> If the IPAM driver does not have access to the session, it can't see any
>>> of the uncommitted data. Would that be a problem? In particular, doesn't
>>> the IPAM driver's DB table have foreign key constraints with the data
>>> waiting to be committed in the other session? I'm hesitant to take this
>>> approach because it means other (if the in-tree doesn't already) IPAM
>>> drivers cannot have any relational integrity with the objects in question.
>>>
>>
>> The in-tree driver doesn't have any FK constraints back to the neutron db
>> schema for IPAM [1]. I don't think that would make sense since it is
>> supposed to work like an external driver.
>>
>>
>>> A related question is, why does the in-tree IPAM driver have to do
>>> anything at all on a rollback? It currently does share a session which is
>>> automatically going to rollback all of it's DB operations for it. If it's
>>> because the driver cannot distinguish a delete call from a rollback and a
>>> normal delete, I suggest we change the delete call to pass a flag
>>> indicating that it's for a rollback. That would allow any DB-based drivers
>>> to just do nothing at this step.
>>>
>>
>> Given that it shares the session, it wouldn't have to do anything. But,
>> again, it wouldn't behave like an external driver. I'd like to not have
>> special drivers that behave differently than drivers that are really
>> external; we end up finding things that the in-tree driver does in our
>> testing that doesn't work right for other drivers.
>>
>> Drivers might need to access uncommitted data from the neutron DB. I
>> think even external drivers do this. However, there is a hard line between
>> the Neutron tables (even IPAM related ones) and the pluggable 

Re: [Openstack] Disable compute node from accepting new VMs?

2016-08-02 Thread Rahul Sharma
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Safely_Removing_Compute_Resources.html

nova service-disable HOST nova-compute


Rahul Sharma
MS in Computer Science, 2016
College of Computer and Information Science, Northeastern University
Mobile:  801-706-7860
Email: rahulsharma...@gmail.com
Linkedin: www.linkedin.com/in/rahulsharmaait

On Tue, Aug 2, 2016 at 5:01 PM, Ken D'Ambrosio  wrote:

> Hi, all.  Trying to figure out how to disable a compute node from getting
> new VMs scheduled for it on my Liberty cloud.  I did see the "nova
> host-update --maintenance" command, but (as noted elsewhere) it seems not
> to work for KVM-based VMs.  Is there a way to accomplish what I'm looking
> to do?  Note that I'm not looking to take the host down, just take it out
> of the pool of compute hosts ready to accept new VMs.
>
> Thanks!
>
> -Ken
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [neutron] - vlan-aware-vms

2016-08-02 Thread Armando M.
On 29 July 2016 at 12:59, Martinx - ジェームズ  wrote:

> Quick question:
>
> Can I start testing Newton VLAN Aware VMs now (Beta 2)?
>
> Thanks,
> Thiago
>
>
If you're paying close attention, the LinuxBridge version is almost
functional, and the OVS one is coming along. I'd advise waiting a tad
longer. I am trying to keep [1] up to date, so you might want to check that
out before pulling down the code.

[1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms


> On 22 July 2016 at 04:45, Kevin Benton  wrote:
>
>> Since they are essentially regular ports in the neutron data model, the
>> regular rules for attaching to networks would apply. So you can should be
>> able to create a sub-port on another network if that network is shared with
>> you (either globally shared or via RBAC).
>>
>> On Wed, Jul 13, 2016 at 8:55 AM, Farhad Sunavala  wrote:
>>
>>>
>>> Below is the latest spec for vlan-aware-vms
>>>
>>>
>>> https://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
>>> 
>>>
>>>
>>>
>>> I have a quick question on the above. (multi-tenancy).
>>>
>>> Assume the case of nested containers in a VM.
>>>
>>> Yes, the containers can be in different networks of the same tenant and
>>> the above blue-print will handle the case very well.
>>> How does it work when the containers are in different networks in
>>> different tenants ?
>>>
>>> The trick is to create neutron ports (for the subports) and then link
>>> them to the trunk port using
>>>
>>> neutron trunk-subport-add TRUNK \
>>>PORT[,SEGMENTATION-TYPE,SEGMENTATION-ID] \
>>>[PORT,...]
>>>
>>>
>>> In the above command all the neutron ports (trunk  ports and subports)
>>> must be in the same tenant.
>>> As far as I know, a tenant will not see neutron ports from another
>>> tenant. Or will this command allow
>>> neutron ports from different tenants to be attached?
>>>
>>> Solution1:
>>>
>>>
>>> C1(ten1)   C2(ten2)
>>>     |          |
>>> --------------------------------
>>>      OVS bridge inside VM
>>> --------------------------------
>>>               |
>>>               | Trunk port
>>>               |
>>> --------------------------------
>>>  br-trunk (vlan-aware-vms spec)
>>> --------------------------------
>>>
>>> E.g.  VM "X" consists of containers C1 in Tenant 1 with portID = C1
>>> (network dn1)
>>> container C2 in Tenant 2 with portID = C2 (network dn2)
>>> The trunk port of VM "X" is in tenant 100 with portID = T1 (network
>>> dt)
>>>
>>> Will the above command allow a neutron trunk to have neutron sub-ports
>>> in different tenants ?
>>>
>>> neutron trunk-subport-add T1 \
>>>A  vlan 1 \
>>>B vlan 2
>>>
>>>
>>> Solution2:
>>> Have a separate trunk port for each tenant connected to the vM
>>>
>>> C1(Ten1)        C2(Ten2)
>>>     |               |
>>>     |               |
>>> ----------------------------------
>>>       OVS bridge inside VM
>>> ----------------------------------
>>>     |               |
>>>     | Trunk(Ten1)   | Trunk(Ten2)
>>>     |               |
>>> ----------------------------------
>>>   br-trunk (vlan-aware-vms spec)
>>> ----------------------------------
>>>
>>> If the approach is solution2, then the issue is that Nova will not
>>> allow a neutron port to be attached to a VM (if the neutron port
>>> belongs to another tenant).
>>>
>>>
>>> Any pointers will be highly appreciated.
>>>
>>> thanks,
>>> Farhad.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-02 Thread Steven Hardy
On Tue, Aug 02, 2016 at 09:36:45PM +0200, Christian Schwede wrote:
> Hello everyone,
> 
> I'd like to improve the Swift deployments done by TripleO. There are a
> few problems today when deployed with the current defaults:

Thanks for digging into this, I'm aware this has been something of a
known-issue for some time, so it's great to see it getting addressed :)

Some comments inline;

> 1. Adding new nodes (or replacing existing nodes) is not possible,
> because the rings are built locally on each host and a new node doesn't
> know about the "history" of the rings. Therefore rings might become
> different on the nodes, and that results in an unusable state eventually.
> 
> 2. The rings are only using a single device, and it seems that this is
> just a directory and not a mountpoint with a real device. Therefore data
> is stored on the root device - even if you have 100TB disk space in the
> background. If not fixed manually your root device will run out of space
> eventually.
> 
> 3. Even if a real disk is mounted in /srv/node, replacing a faulty disk
> is much more troublesome. Normally you would simply unmount a disk, and
> then replace the disk sometime later. But because mount_check is set to
> False in the storage servers data will be written to the root device in
> the meantime; and when you finally mount the disk again, you can't
> simply cleanup.
> 
> 4. In general, it's not possible to change cluster layout (using
> different zones/regions/partition power/device weight, slowly adding new
> devices to avoid 25% of the data will be moved immediately when adding
> new nodes to a small cluster, ...). You could manually manage your
> rings, but they will be overwritten finally when updating your overcloud.
> 
> 5. Missing erasure coding support (or storage policies in general)
> 
> This sounds bad, however most of the current issues can be fixed using
> customized templates and some tooling to create the rings in advance on
> the undercloud node.
> 
> The information about all the devices can be collected from the
> introspection data, and by using node placement the nodenames in the
> rings are known in advance if the nodes are not yet powered on. This
> ensures a consistent ring state, and an operator can modify the rings if
> needed and to customize the cluster layout.
> 
> Using some customized templates we can already do the following:
> - disable ringbuilding on the nodes
> - create filesystems on the extra blockdevices
> - copy ringfiles from the undercloud, using pre-built rings
> - enable mount_check by default
> - (define storage policies if needed)
> 
> I started working on a POC using tripleo-quickstart, some custom
> templates and a small Python tool to build rings based on the
> introspection data:
> 
> https://github.com/cschwede/tripleo-swift-ring-tool
> 
> I'd like to get some feedback on the tool and templates.
> 
> - Does this make sense to you?

Yes, I think the basic workflow described should work, and it's good to see
that you're passing the ring data via swift as this is consistent with how
we already pass some data to nodes via our DeployArtifacts interface:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.yaml

Note however that there are no credentials to access the undercloud swift
on the nodes, so you'll need to pass a tempurl reference in (which is what
we do for deploy artifacts, obviously you will have credentials to create
the container & tempurl on the undercloud).

One slight concern I have is mandating the use of predictable placement -
it'd be nice to think about ways we might avoid that but the undercloud
centric approach seems OK for a first pass (in either case I think the
delivery via swift will be the same).

> - How (and where) could we integrate this upstream?

So I think the DeployArtefacts interface may work for this, and we have a
helper script that can upload data to swift:

https://github.com/openstack/tripleo-common/blob/master/scripts/upload-swift-artifacts

This basically pushes a tarball to swift, creates a tempurl, then creates a
file ($HOME/.tripleo/environments/deployment-artifacts.yaml) which is
automatically read by tripleoclient on deployment.

DeployArtifactURLs is already a list, but we'll need to test and confirm we
can pass both e.g swift ring data and updated puppet modules at the same
time.
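
For illustration, the environment file the helper generates ends up looking
roughly like this (the Swift tempurl below is a placeholder, not a real
signature):

# $HOME/.tripleo/environments/deployment-artifacts.yaml (illustrative)
parameter_defaults:
  DeployArtifactURLs:
    - "http://192.0.2.1:8080/v1/AUTH_<tenant>/artifacts/swift-rings.tar.gz?temp_url_sig=<sig>&temp_url_expires=<ts>"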

The part that actually builds the rings on the undercloud will probably
need to be created as a custom mistral action:

https://github.com/openstack/tripleo-common/tree/master/tripleo_common/actions

These are then driven as part of the deployment workflow (although the
final workflow where this will wire in hasn't yet landed):

https://review.openstack.org/#/c/298732/

> - Templates might be included in tripleo-heat-templates?

Yes, although by the look of it there may be few template changes required.

If you want to remove the current ringbuilder puppet step completely, you
can simply remove 

Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Timofei Durakov
If an operator hasn't explicitly defined the live_migration_tunnelled param in
nova.conf, after the upgrade is done its default value will be set to False.
If an operator has set this param explicitly, everything will be unchanged. To
notify about this change I'm proposing to use release notes, as is usually
done. So there will be no upgrade impact related to this change.
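
In other words, an operator who wants to keep tunnelled migrations after the
upgrade only needs to pin the option explicitly, e.g.:

# nova.conf on the compute hosts
[libvirt]
live_migration_tunnelled = True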

On Tue, Aug 2, 2016 at 10:51 PM, Chris Friesen 
wrote:

> On 08/02/2016 09:14 AM, Timofei Durakov wrote:
>
>> Hi,
>>
>> Taking into account everything above I'd prefer to see
>> live_migration_tunnelled(that corresponds to VIR_MIGRATE_TUNNELLED)
>> defaulted to
>> False. We just need to make a release note for this change, and on the
>> host
>> startup do LOG.warning to notify the operator that there are no tunnels
>> for
>> live-migration. For me, it will be enough. Then just put [1] on top of it.
>>
>
> How would upgrades work?  Presumably you'd have to get all the existing
> compute nodes able to handle un-tunnelled live migrations, then you'd
> live-migrate from the old compute nodes to the new ones using tunnelled
> migrations (where live migration is possible), but after that everything
> would be un-tunnelled?
>
> Chris
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Priorities for the rest of the cycle

2016-08-02 Thread Jim Rollenhagen
Hey all,

There's some deadlines coming up:

* non-client library freeze in 3 weeks
* client library freeze in 4 weeks
* final releases in 8 weeks

http://releases.openstack.org/newton/schedule.html

As usual, we don't do a hard feature freeze at the normal feature freeze
date (4 weeks from now), however we do want to stop merging big or risky
things around that time. We also need to keep in mind that features
which need client support should obviously be done with enough time to
get the client side done before client freeze.

So with that in mind, I've recently shuffled things around a bit on our
trello board:
https://trello.com/b/ROTxmGIc/ironic-newton-priorities

There's now two lists for code patches: must have, and nice to have.
Both lists are in the order that I think things should be prioritized.
Please do stand up if you disagree on any of it. :)

Please do keep the priorities in mind when writing code or doing
reviews; we have a lot of things that are getting close, and I'd like to
finish many of these so that we don't need to carry them over to Ocata.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-02 Thread Matt Riedemann

On 8/2/2016 9:09 AM, Matt Riedemann wrote:

On 8/2/2016 2:41 AM, Alex Xu wrote:

It's a little strange that we have two API endpoints, one is
'/servers/{uuid}/os-interfaces' and another one is
'/servers/{uuid}/os-virtual-interfaces'.

I prefer to keep os-attach-interface, because I think we should deprecate
nova-network as well. Actually we deprecated all the nova-network
related APIs in 2.36 too. And since os-attach-interface didn't support
nova-network, it is the right choice.

So we can deprecate os-virtual-interfaces in Newton, and in Ocata we
correct the implementation to get the VIF info and tag.
os-attach-interface actually accepts the server_id, and there is a check
to ensure the port belongs to the server. So it shouldn't be very hard to get
the VIF info and tag.

And sorry that I missed that when coding the patches too... let me know if you
need any help here.






Alex,

os-interface will be deprecated, that's the APIs to show/list ports for
a given server.

os-virtual-interfaces is not the same, and was never a proxy for neutron
since before 2.32 we never stored anything in the virtual_interfaces
table in the nova database for neutron, but now we do because that's
where we store the VIF tags.

We have to keep os-attach-interface (attach/detach interface actions on
a server).

Are you suggesting we drop os-virtual-interfaces and change the behavior
of os-interfaces to use the nova virtual_interfaces table rather than
proxying to neutron?

Note that with os-virtual-interfaces even if we start showing VIFs for
neutron ports, any ports created before Newton won't be in there, which
might be a bit confusing.



Here is the draft spec: https://review.openstack.org/#/c/350277/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Disable compute node from accepting new VMs?

2016-08-02 Thread Ken D'Ambrosio
Hi, all.  Trying to figure out how to disable a compute node from 
getting new VMs scheduled for it on my Liberty cloud.  I did see the 
"nova host-update --maintenance" command, but (as noted elsewhere) it 
seems not to work for KVM-based VMs.  Is there a way to accomplish what 
I'm looking to do?  Note that I'm not looking to take the host down, 
just take it out of the pool of compute hosts ready to accept new VMs.


Thanks!

-Ken

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-08-02 Thread Julien Danjou
On Tue, Aug 02 2016, gordon chung wrote:

> so from very rough testing, we can choose to lower it to 3600 points, which
> offers better split opportunities with negligible improvement/degradation, or
> even more to 900 points with potentially small write degradation (massive
> batching).

3600 points sounds nice. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Stepping Down.

2016-08-02 Thread Morgan Fainberg
Based upon my personal time demands among a number of other reasons I will
be stepping down from the Technical Committee. This is planned to take
effect with the next TC election so that my seat will be up to be filled at
that time.

For those who elected me in, thank you.

Regards,
--Morgan Fainberg
IRC: notmorgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Matt Riedemann

On 8/2/2016 12:25 PM, Jim Rollenhagen wrote:

On Mon, Aug 01, 2016 at 09:15:46PM -0500, Matt Riedemann wrote:




* Placement API for resource providers

Jay's personal goal for Newton is for the resource tracker to be writing
inventory and allocation data via the placement API. We want to get the data
writing into the placement API in Newton so we can start using it in Ocata.

There are some spec amendments up for resource providers, at least one has
merged, and the initial placement API change merged today:

https://review.openstack.org/#/c/329149/

We talked about supporting dynamic resource classes for Ironic use cases
which is a stretch goal for Nova in Newton. Jay has a spec for that here:

https://review.openstack.org/#/c/312696/

There is a lot more detail in the etherpad and honestly Jay Pipes or Jim
Rollenhagen would be better to summarize what came out of this at the
midcycle and what's being worked on for dynamic resource classes right now.


I actually wrote a bit about this last week:
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099922.html

I'm not sure it covers everything, but it's the important pieces I got
from it.

// jim


We talked about a separate placement API database but decided this should be
optional to avoid forcing yet another nova database on deployers in a couple
of releases. This would be available for deployers to use to avoid some
future upgrade pain when the placement service is split out from Nova, but
if not configured it will default to the API database for the placement API.
There are a bunch more details and discussion on that in this thread that
Chris Dent started after the midcycle:

http://lists.openstack.org/pipermail/openstack-dev/2016-July/100302.html



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Perfect, thanks! I totally missed that.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Chris Friesen

On 08/02/2016 09:14 AM, Timofei Durakov wrote:

Hi,

Taking into account everything above I'd prefer to see
live_migration_tunnelled(that corresponds to VIR_MIGRATE_TUNNELLED) defaulted to
False. We just need to make a release note for this change, and on the host
startup do LOG.warning to notify the operator that there are no tunnels for
live-migration. For me, it will be enough. Then just put [1] on top of it.


How would upgrades work?  Presumably you'd have to get all the existing compute 
nodes able to handle un-tunnelled live migrations, then you'd live-migrate from 
the old compute nodes to the new ones using tunnelled migrations (where live 
migration is possible), but after that everything would be un-tunnelled?


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][neutron][ipam][networking-infoblox] networking-infoblox 2.0.2

2016-08-02 Thread John Belamaric
I am happy to announce that we have released version 2.0.2 of the Infoblox IPAM 
driver for OpenStack. This driver uses the pluggable IPAM framework delivered 
in Neutron's Liberty release, enabling the use of Infoblox for allocating 
subnets and IP addresses, and automatically creating DNS zones and records in 
Infoblox.
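
As with any pluggable IPAM driver, it is enabled through neutron's ipam_driver
option; a minimal sketch (the 'infoblox' alias is assumed here for
illustration -- check the driver documentation for the exact name and its
additional options):

# neutron.conf (illustrative)
[DEFAULT]
ipam_driver = infoblox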

The driver is compatible with both Liberty and Mitaka.

This version contains important bug fixes and some feature enhancements and is 
recommended for all users.

More information and the code may be found at the networking-infoblox Launchpad 
page [1], or PyPi [2]. Bugs may also be reported on the same page.

[1] https://launchpad.net/networking-infoblox
[2] https://pypi.python.org/pypi/networking-infoblox

We are continuing to develop this driver to offer additional functionality, so 
please do provide any feedback you may have.

Thank you,

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Joshua Harlow
Just a general pet-peeve of mine, but can we not have #rdo be the 
tripleo user support channel, but say push people more toward #tripleo 
or #tripleo-users or #openstack instead. I'd rather not have #rdo be the 
way users get tripleo support, because at that point we might as well 
call tripleo a redhat product and just move the repos to somewhere 
inside redhat.


-Josh

Dan Sneddon wrote:

Speaking for myself and the other TripleO developers at Red Hat, we do
try our best to answer user questions in #rdo. You will also find some
production users hanging out there. The best times to ask questions are
during East Coast business hours, or during business hours of GMT+1
(we have a large development office in Brno, CZ with engineers that
work on TripleO). There is also an RDO-specific mailing list available
here:https://www.redhat.com/mailman/listinfo/rdo-list


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Wednesday 0900 UTC

2016-08-02 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week in addition to the usual themes I thought it would be useful to share 
information about upcoming scientific OpenStack events, as I’ve heard of 
several this week and I am sure others have too.  We’ll also be joined by Anne 
Bertucio to talk about student discounts for the Certified OpenStack 
Administrator qualification.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_3rd_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [Ironic] v2 API - request for feedback on "problem description"

2016-08-02 Thread Devananda van der Veen
Hi all,

Today's ironic-v2-api meeting was pretty empty, so I am posting a summary of our
subteam's activity here.

I have taken the midcycle notes about our API's current pain points / usability
gaps, and written them up into the format we would use for a spec's "Problem
Description", and posted them into an etherpad:

  https://etherpad.openstack.org/p/ironic-v2-api

As context for folks that may not have been in the midcycle discussion, my goal
in this work is to, by Barcelona, have a concrete outline of the problems with
our API -- and a proposal of how we might solve them -- written up as a design
specification.

I would like to invite feedback on this very early draft ahead of our weekly
meeting next week, and then discuss it for a portion of the meeting on Monday.


Thanks for your time,
Devananda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-02 Thread Christian Schwede
Hello everyone,

I'd like to improve the Swift deployments done by TripleO. There are a
few problems today when deployed with the current defaults:

1. Adding new nodes (or replacing existing nodes) is not possible,
because the rings are built locally on each host and a new node doesn't
know about the "history" of the rings. Therefore rings might become
different on the nodes, and that results in an unusable state eventually.

2. The rings are only using a single device, and it seems that this is
just a directory and not a mountpoint with a real device. Therefore data
is stored on the root device - even if you have 100TB disk space in the
background. If not fixed manually your root device will run out of space
eventually.

3. Even if a real disk is mounted in /srv/node, replacing a faulty disk
is much more troublesome. Normally you would simply unmount a disk, and
then replace the disk sometime later. But because mount_check is set to
False in the storage servers, data will be written to the root device in
the meantime; and when you finally mount the disk again, you can't
simply clean up.

4. In general, it's not possible to change the cluster layout (using
different zones/regions/partition power/device weight, slowly adding new
devices to avoid 25% of the data being moved immediately when adding
new nodes to a small cluster, ...). You could manually manage your
rings, but they will eventually be overwritten when updating your overcloud.

5. Missing erasure coding support (or storage policies in general)

This sounds bad; however, most of the current issues can be fixed using
customized templates and some tooling to create the rings in advance on
the undercloud node.

The information about all the devices can be collected from the
introspection data, and by using node placement the nodenames in the
rings are known in advance if the nodes are not yet powered on. This
ensures a consistent ring state, and an operator can modify the rings if
needed and to customize the cluster layout.
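
As a rough illustration of what such tooling could do (a sketch only, not the
tripleo-swift-ring-tool itself; the hostnames, weights and ring parameters below
are made-up examples), Swift's own RingBuilder API can be driven from the
collected introspection data:

    from swift.common.ring import RingBuilder

    # part_power=10, replicas=3, min_part_hours=1 -- values an operator would tune
    builder = RingBuilder(10, 3, 1)

    # One entry per disk found via introspection; node names are known in
    # advance when predictable node placement is used.
    devices = [
        {'region': 1, 'zone': 1, 'ip': 'overcloud-objectstorage-0', 'port': 6000,
         'device': 'sdb', 'weight': 100},
        {'region': 1, 'zone': 2, 'ip': 'overcloud-objectstorage-1', 'port': 6000,
         'device': 'sdb', 'weight': 100},
    ]
    for dev in devices:
        builder.add_dev(dev)

    builder.rebalance()
    builder.save('object.builder')              # keep the builder for later changes
    builder.get_ring().save('object.ring.gz')   # ring file to copy to the overcloud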

Using some customized templates we can already do the following:
- disable ringbuilding on the nodes
- create filesystems on the extra blockdevices
- copy ringfiles from the undercloud, using pre-built rings
- enable mount_check by default
- (define storage policies if needed)

I started working on a POC using tripleo-quickstart, some custom
templates and a small Python tool to build rings based on the
introspection data:

https://github.com/cschwede/tripleo-swift-ring-tool

I'd like to get some feedback on the tool and templates.

- Does this make sense to you?
- How (and where) could we integrate this upstream?
- Templates might be included in tripleo-heat-templates?

IMO the most important change would be to avoid overwriting rings on the
overcloud. There is a good chance to mess up your cluster if the
template to disable ring building isn't used and you already have
working rings in place. Same for the mount_check option.

I'm curious about your thoughts!

Thanks,

Christian


-- 
Christian Schwede
_

Red Hat GmbH
Technopark II, Haus C, Werner-von-Siemens-Ring 11-15, 85630 Grasbrunn,
Handelsregister: Amtsgericht Muenchen HRB 153243
Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham,
Charles Cachera

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-02 Thread Devananda van der Veen
On 08/01/2016 05:10 AM, Jim Rollenhagen wrote:
> Hey all,
> 
> Our nova patch for networking[0] got stuck for a bit, because Nova needs
> to know which network interface is in use for the node, in order to
> properly set up the port.
> 
> The code landed for network_interface follows the following order for
> what is actually used for the node:
> 1) node.network_interface, if that is None:
> 2) CONF.default_network_interface, if that is None:
> 3) flat, if using neutron DHCP
> 4) noop, if not using neutron DHCP
> 
> The API will return None for node.network_interface in the API (GET
> /v1/nodes/uuid). This won't work for Nova, because Nova can't know what
> CONF.default_network_interface is.
> 
> I propose that if a network_interface is not sent in the node-create
> call, we write whatever the current default is, so that it is always set
> and not using an implicit value that could change.
> 
> For nodes that exist before the upgrade, we do a database migration to
> set network_interface to CONF.default_network_interface (or if that's
> None, set to flat/noop depending on the DHCP provider).

Sounds quite reasonable to me.

--deva
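
For clarity, the fallback order above boils down to something like the
following (an illustrative restatement only, not the actual ironic code; the
helper name and arguments are made up):

    def effective_network_interface(node, conf_default, dhcp_provider):
        """Resolve which network interface is actually used for a node."""
        if node.network_interface is not None:   # 1) explicit per-node value
            return node.network_interface
        if conf_default is not None:             # 2) CONF.default_network_interface
            return conf_default
        # 3) / 4) fall back based on the DHCP provider in use
        return 'flat' if dhcp_provider == 'neutron' else 'noop'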


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Pending removal of Scality volume driver

2016-08-02 Thread Sean McGinnis
Tomorrow is the one week grace period. I just ran the last comment
script and it still shows it's been 112 days since the Scality CI has
reported on a patch.

Please let me know the status of the CI.

On Thu, Jul 28, 2016 at 07:28:26AM -0500, Sean McGinnis wrote:
> On Thu, Jul 28, 2016 at 11:28:42AM +0200, Jordan Pittier wrote:
> > Hi Sean,
> > 
> > Thanks for the heads up.
> > 
> > On Wed, Jul 27, 2016 at 11:13 PM, Sean McGinnis 
> > wrote:
> > 
> > > The Cinder policy for driver CI requires that all volume drivers
> > > have a CI reporting on any new patchset. CI's may have some down
> > > time, but if they do not report within a two week period they are
> > > considered out of compliance with our policy.
> > >
> > > This is a notification that the Scality OpenStack CI is out of compliance.
> > > It has not reported since April 12th, 2016.
> > >
> > Our CI is still running for every patchset, just that it doesn't report
> > back to Gerrit. I'll see what I can do about it.
> 
> Great! I'll watch for it to start reporting again. Thanks for being
> responsive and looking into it.
> 
> > 
> > >
> > > The patch for driver removal has been posted here:
> > >
> > > https://review.openstack.org/348032/
> > 
> > That link is about the Tegile driver, not ours.
> 
> Oops, copy/paste error. Here is the correct one:
> 
> https://review.openstack.org/#/c/348042/
> 
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver

2016-08-02 Thread Sean McGinnis
Thanks Richard. The removal patch has been abandoned.

On Tue, Aug 02, 2016 at 03:20:41PM +, Hedlind, Richard wrote:
> Status update. Our CI is back up and has been passing tests successfully for 
> ~18h now. I will keep a close eye on it to make sure it stays up. Sorry about 
> the down time.
> 
> Richard
> 
> -Original Message-
> From: Hedlind, Richard [mailto:richard.hedl...@x-io.com] 
> Sent: Thursday, July 28, 2016 9:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [Cinder] Pending removal of X-IO volume driver
> 
> Hi Sean,
> Thanks for the heads up. I have been busy on other projects and not been 
> involved in maintaining the CI. I will look into it and get it back up and 
> running.
> I will keep you posted on the progress.
> 
> Thanks,
> Richard
> 
> -Original Message-
> From: Sean McGinnis [mailto:sean.mcgin...@gmx.com] 
> Sent: Wednesday, July 27, 2016 2:26 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Cinder] Pending removal of X-IO volume driver
> 
> The Cinder policy for driver CI requires that all volume drivers have a CI 
> reporting on any new patchset. CI's may have some down time, but if they do 
> not report within a two week period they are considered out of compliance 
> with our policy.
> 
> This is a notification that the X-IO OpenStack CI is out of compliance.
> It has not reported since March 18th, 2016.
> 
> The patch for driver removal has been posted here:
> 
> https://review.openstack.org/348022
> 
> If this CI is not brought into compliance, the patch to remove the driver 
> will be approved one week from now.
> 
> Thanks,
> Sean McGinnis (smcginnis)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-02 Thread Brian Haley

On 08/02/2016 08:15 AM, huangdenghui wrote:

hi john and brian
   thanks for your information. if we get patch[1] and patch[2] merged, then fg can
allocate a private ip address. after that, we need to consider the floating ip dataplane:
in the current dvr implementation, fg is used for reachability testing of floating ips.
now, with the subnet types bp, fg is on a different subnet than the floating ip address, so
from the fg subnet gateway's point of view, reaching a floating ip needs a route entry whose
destination is the floating ip address and whose nexthop is fg's ip address. this route
entry needs to be populated when the floating ip is created, and deleted
when the floating ip is disassociated. any comments?


Yes, there could be a small change necessary to the l3-agent in order to route 
packets with DVR enabled, but I don't see it being a blocker.


-Brian
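
To make the extra routing step concrete, here is a minimal sketch (assuming the
fg device lives in the usual fip- namespace and that plain `ip route` is an
acceptable stand-in for the l3-agent's own helpers; this is not actual neutron
code):

    import subprocess

    def add_fip_route(fip_ns, floating_ip, fg_ip):
        # Destination is the floating IP, nexthop is the fg device's (now
        # private) address; added when the floating IP is associated.
        subprocess.check_call([
            'ip', 'netns', 'exec', fip_ns,
            'ip', 'route', 'replace', '%s/32' % floating_ip, 'via', fg_ip])

    def del_fip_route(fip_ns, floating_ip):
        # Removed again when the floating IP is disassociated.
        subprocess.check_call([
            'ip', 'netns', 'exec', fip_ns,
            'ip', 'route', 'del', '%s/32' % floating_ip])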



On 2016-08-01 23:38 , John Davidge  Wrote:

Yes, as Brian says this will be covered by the follow-up patch to [2]
which I'm currently working on. Thanks for the question.

John


On 8/1/16, 3:17 PM, "Brian Haley"  wrote:

>On 07/31/2016 06:27 AM, huangdenghui wrote:
>> Hi
>>Now we have spec named subnet service types, which provides a
>>capability of
>> allowing different port of a network to allocate ip address from
>>different
>> subnet. In current implementation of DVR, fip also is distributed on
>>every
>> compute node, floating ip and fg's ip are both allocated from external
>>network's
>> subnets. In large public cloud deployment, current implementation will
>>consume
>> lots of public ip address. Do we need a RFE to apply subnet service
>>types spec
>> to resolve this problem. Any thoughts?
>
>Hi,
>
>This is going to be covered in the existing RFE for subnet service types
>[1].
>We currently have two reviews in progress for CRUD [2] and CLI [3], the
>IPAM
>changes are next.
>
>-Brian
>
>[1] https://review.openstack.org/#/c/300207/
>[2] https://review.openstack.org/#/c/337851/
>[3] https://review.openstack.org/#/c/342976/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rackspace Limited is a company registered in England & Wales (company
registered number 03897010) whose registered office is at 5 Millington Road,
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may
contain confidential or privileged information intended for the recipient.
Any dissemination, distribution or copying of the enclosed material is
prohibited. If you receive this transmission in error, please notify us
immediately by e-mail at ab...@rackspace.com and delete the original
message. Your cooperation is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-08-01 10:23:57 -0400:
> On 08/01/2016 08:33 AM, Sean Dague wrote:
> > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> >> One of the outcomes of the discussion at the leadership training
> >> session earlier this year was the idea that the TC should set some
> >> community-wide goals for accomplishing specific technical tasks to
> >> get the projects synced up and moving in the same direction.
> >>
> >> After several drafts via etherpad and input from other TC and SWG
> >> members, I've prepared the change for the governance repo [1] and
> >> am ready to open this discussion up to the broader community. Please
> >> read through the patch carefully, especially the "goals/index.rst"
> >> document which tries to lay out the expectations for what makes a
> >> good goal for this purpose and for how teams are meant to approach
> >> working on these goals.
> >>
> >> I've also prepared two patches proposing specific goals for Ocata
> >> [2][3].  I've tried to keep these suggested goals for the first
> >> iteration limited to "finish what we've started" type items, so
> >> they are small and straightforward enough to be able to be completed.
> >> That will let us experiment with the process of managing goals this
> >> time around, and set us up for discussions that may need to happen
> >> at the Ocata summit about implementation.
> >>
> >> For future cycles, we can iterate on making the goals "harder", and
> >> collecting suggestions for goals from the community during the forum
> >> discussions that will happen at summits starting in Boston.
> >>
> >> Doug
> >>
> >> [1] https://review.openstack.org/349068 describe a process for managing 
> >> community-wide goals
> >> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> >> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >> libraries"
> >
> > I like the direction this is headed. And I think for the test items, it
> > works pretty well.
> 
> I commented on the reviews, but I disagree with both the direction and 
> the proposed implementation of this.
> 
> In short, I think there's too much stick and not enough carrot. We 
> should create natural incentives for projects to achieve desired 
> alignment in certain areas, but placing mandates on project teams in a 
> diverse community like OpenStack is not useful.
> 
> The consequences of a project team *not* meeting these proposed mandates 
> has yet to be decided (and I made that point on the governance patch 
> review). But let's say that the consequences are that a project is 
> removed from the OpenStack big tent if they fail to complete these 
> "shared objectives".
> 
> What will we do when Swift decides that they have no intention of using 
> oslo.messaging or oslo.config because they can't stand fundamentals 
> about those libraries? Are we going to kick Swift, a founding project of 
> OpenStack, out of the OpenStack big tent?

Membership in the tent is the carrot, and ejection is the stick. The
big tent was an acknowledgement that giving out carrots makes everyone
stronger (all these well fed projects have led to a bigger supply of
carrots in general).

I think this proposal is an attempt to manage the ensuing chaos. We've
all seen carrot farmers abandon their farms, as well as duplicated effort
leading to a confusing experience for consumers of OpenStack's products.

I think there's room to build consensus around diversity in implementation
and even culture. We don't need to be a monolith. Our Swift development
community is bringing strong, powerful insight to the overall effort,
and strengthens the OpenStack brand considerably.  Certainly we can
support projects doing things their own way in some instances if they
so choose. What we don't want, however, is projects that operate in
relative isolation, without any cohesion, even loose cohesion, with the
rest.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominate Vladimir Khlyunev for fuel-qa core

2016-08-02 Thread Alexey Stepanov
+1

On Tue, Aug 2, 2016 at 2:56 PM, Artem Panchenko 
wrote:

> +1
>
> On Tue, Aug 2, 2016 at 1:52 PM, Dmitry Tyzhnenko 
> wrote:
>
>> +1
>>
>> On Tue, Aug 2, 2016 at 12:51 PM, Artur Svechnikov <
>> asvechni...@mirantis.com> wrote:
>>
>>> +1
>>>
>>> Best regards,
>>> Svechnikov Artur
>>>
>>> On Tue, Aug 2, 2016 at 12:40 PM, Andrey Sledzinskiy <
>>> asledzins...@mirantis.com> wrote:
>>>
 Hi,
 I'd like to nominate Vladimir Khlyunev for fuel-qa [0] core.

 Vladimir has become a valuable member of fuel-qa project in quite short
 period of time. His solid expertise and constant contribution gives me no
 choice but to nominate him for fuel-qa core.

 If anyone has any objections, speak now or forever hold your peace

 [0]
 http://stackalytics.com/?company=mirantis=all=fuel-qa_id=vkhlyunev
 

 --
 Thanks,
 Andrey Sledzinskiy
 QA Engineer,
 Mirantis, Kharkiv


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> WBR,
>> Dmitry T.
>> Fuel QA Engineer
>> http://www.mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Alexey Stepanov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Curtis
On Tue, Aug 2, 2016 at 11:54 AM, Dan Sneddon  wrote:
> On 08/02/2016 09:57 AM, Curtis wrote:
>> Hi,
>>
>> I'm just curious who, if anyone, is using TripleO in production?
>>
>> I'm having a hard time finding anywhere to ask end-user type
>> questions. #tripleo irc seems to be just a dev channel. Not sure if
>> there is anywhere for end users to ask questions. A quick look at
>> stackalytics shows it's mostly RedHat contributions, though again, it
>> was a quick look.
>>
>> If there were other users it would be cool to perhaps try to have a
>> session on it at the upcoming ops midcycle.
>>
>> Thanks,
>> Curtis.
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> Nearly every commercial customer of Red Hat OpenStack Platform (OSP)
> since version 7 (version 9 is coming out imminently) is using TripleO
> in production, since the installer is TripleO. That's hundreds of
> production installations, some of them very large scale. The exact same
> source code is used for RDO and OSP. HP used to use TripleO, but I
> don't think they have contributed to TripleO since they updated Helion
> with a proprietary installer.

Right, yeah, am very aware of RedHat's use of TripleO.

>
> Speaking for myself and the other TripleO developers at Red Hat, we do
> try our best to answer user questions in #rdo. You will also find some
> production users hanging out there. The best times to ask questions are
> during East Coast business hours, or during business hours of GMT +1
> (we have a large development office in Brno, CZ with engineers that
> work on TripleO). There is also an RDO-specific mailing list available
> here: https://www.redhat.com/mailman/listinfo/rdo-list

Cool, thanks. I don't think I've tried asking the RDO community
questions about TripleO. I will check it out.

Thanks,
Curtis.

>
> --
> Dan Sneddon |  Principal OpenStack Engineer
> dsned...@redhat.com |  redhat.com/openstack
> 650.254.4025|  dsneddon:irc   @dxs:twitter
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Blog: serverascode.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Ihar Hrachyshka

Amit Kumar Saha (amisaha)  wrote:


Hi,

We would like to introduce the community to a new Python based project  
called DON – Diagnosing OpenStack Networking. More details about the  
project can be found at https://github.com/openstack/python-don.


DON, written primarily in Python, and available as a dashboard in  
OpenStack Horizon, Liberty release, is a network analysis and diagnostic  
system and provides a completely automated service for verifying and  
diagnosing the networking functionality provided by OVS. The genesis of  
this idea was presented at the Vancouver summit, May 2015. Hopefully the  
community will find this project interesting and will give us valuable  
feedback.


Amit,

The neutron team is currently working on defining a new diagnostics API:  
https://review.openstack.org/#/c/308973/


Please work with the community on API definition, and later, on backend  
specific implementation of desired checks.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Gearman-plugin for Jenkins: support for dockerized executors

2016-08-02 Thread Zaro
I think the zuul parameter issue you are seeing is due to issue
https://issues.jenkins-ci.org/browse/JENKINS-34885

See if the workaround in that issue works for you on newer versions of Jenkins.

According to zuul v3 spec[1], the next versions of zuul will continue
to support gearman so as long as the gearman-plugin continues to work
with Jenkins then everything is happy :).With that said, I'm not
sure we have much motivation to update the Jenkins gearman plugin
since we are no longer use it ourselves.  We hope people who intend to
use it can contribute and keep it updated as necessary :)

[1] https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html

-Khai

On Tue, Aug 2, 2016 at 1:42 AM, Wolniewicz, Maciej (Nokia -
PL/Wroclaw)  wrote:
> Hi,
>
>
>
> Khai, Clark thank you for your answers.
>
>
>
> It looked like we had a problem with dynamic slaves because we tried to use
> the gearman plugin with the newest Jenkins LTS release (2.7.1).
>
> We also had problems with sending Zuul parameter (ZUUL_PROJECT, ZUUL_COMMIT,
> etc.) to jenkins jobs. Those parameters could be seen in job description in
> build history:
>
>
>   Triggered by change: 10541,41 (https://gerrite1.ext.net.nokia.com:443/10541)
>
>   Branch: master
>
>   Pipeline: check
>
>
> however they were not passed as environment variables to the job.
>
>
>
> When we used Jenkins release 1.625.3 - the one suggested on the plugin's wiki
> (https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin) – everything
> started to work. Now the Gearman plugin sees dynamic slaves and is passing Zuul
> parameters to jobs.
>
>
>
> Could you tell me if there is a plan to support newest Jenkins LTS releases
> in near future (for example 2.7.1) ?
>
>
>
> On Zuul documentation page
> (http://docs.openstack.org/infra/system-config/zuul.html ) we can see that
> openstack is moving away from Jenkins/Zuul to Ansible/Zuul for launching
> jobs.
>
> Will Gearman plugin be still developed in such case? Are you planning to
> support this plugin in long term period?
>
>
>
> Do you have any knowledge that future releases of Zuul (3.x.x) will also use
> the gearman daemon to handle job execution so that we could use it with the
> gearman plugin?
>
>
>
> Br,
>
> Maciek
>
>
>
> -Original Message-
>
> From: Zaro [mailto:zaro0...@gmail.com]
>
> Sent: Monday, July 25, 2016 6:45 PM
>
> To: Foerster, Thomas (Nokia - DE/Munich) 
>
> Cc: openstack-infra@lists.openstack.org; Wilkocki, Michal (Nokia -
> PL/Wroclaw) ; Wolniewicz, Maciej (Nokia -
> PL/Wroclaw) 
>
> Subject: Re: [OpenStack-Infra] Gearman-plugin for Jenkins: support for
> dockerized executors
>
>
>
> Jenkins still doesn't provide the ability to listen for executor
>
> changes.  It only allows listening to slave node changes with the
>
> ComputerListener[1] extension point.  It doesn't seem like there's any
>
> plans in Jenkins core to provide this in future releases.  If that's
>
> not available then gearman cannot provide the functionality that you
>
> request.
>
>
>
> [1]
> https://wiki.jenkins-ci.org/display/JENKINS/Extension+points#Extensionpoints-hudson.slaves.ComputerListener
>
>
>
> -Khai
>
>
>
> On Mon, Jul 25, 2016 at 5:49 AM, Foerster, Thomas (Nokia - DE/Munich)
>
>  wrote:
>
>> Hi,
>
>>
>
>> We are using the Gearman-plugin (version: 0.2.0) at our Nokia’s Continuous
>
>> Integration environment together with Jenkins (version: 2.7.1). Except the
>
>> Gerrit server, the entire CI environment is dockerized: Zuul servers,
>
>> Jenkins Master instances and build executors able to scale according to
>
>> the demand. Gearman is being used to handle multiple Jenkins Masters
>> and
>
>> build executors across the project.
>
>>
>
>> We would like to start docker machines as build executors on demand and
>
>> according to the real CI load. However there seems to be a limitation in the
>
>> Gearman-plugin (0.2.0), that all available build executors have to be known
>
>> and running during plugin start-up time. Docker machines started and
>
>> integrated to Jenkins after plugin start, won’t be recognized by the
>> plugin.
>
>>
>
>> We found that is a known issue and documented at:
>
>> https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin
>
>>
>
>> === CLIP ===
>
>> Known Issues
>
>> Adding or removing executors on nodes will require restarting the gearman
>
>> plugin.  This is because Jenkins does NOT provide a way to listen
>
>> for changes to executors therefore the gearman plugin does not know that
>> it
>
>> needs to re-register functions due to executor updates.
>
>> === CLIP ===
>
>>
>
>> The Gearman-plugin seems to be still maintained.
>
>> Do you know whether that issue has been taken up for next upcoming plugin
>
>> release?
>
>>
>
>> Thanks in advance for your support.
>
>>
>
>> Best regards.
>
>> ---
>
>> Thomas 

Re: [Openstack-operators] [openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Ihar Hrachyshka

Amit Kumar Saha (amisaha)  wrote:


Hi,

We would like to introduce the community to a new Python based project  
called DON – Diagnosing OpenStack Networking. More details about the  
project can be found at https://github.com/openstack/python-don.


DON, written primarily in Python, and available as a dashboard in  
OpenStack Horizon, Liberty release, is a network analysis and diagnostic  
system and provides a completely automated service for verifying and  
diagnosing the networking functionality provided by OVS. The genesis of  
this idea was presented at the Vancouver summit, May 2015. Hopefully the  
community will find this project interesting and will give us valuable  
feedback.


Amit,

The neutron team is currently working on defining a new diagnostics API:  
https://review.openstack.org/#/c/308973/


Please work with the community on API definition, and later, on backend  
specific implementation of desired checks.


Ihar

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [new][oslo] oslotest 2.8.0 release (newton)

2016-08-02 Thread no-reply
We are excited to announce the release of:

oslotest 2.8.0: Oslo test framework

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.7.0..2.8.0


425d465 Import mock so that it works on Python 3.x
9c03983 Fix parameters of assertEqual are misplaced
9779729 Updated from global requirements
7bff0fc Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg   |  1 +
test-requirements.txt   |  2 +-
tox.ini |  2 +-
6 files changed, 14 insertions(+), 13 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index cecb61e..8282002 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.12.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] taskflow 2.4.0 release (newton)

2016-08-02 Thread no-reply
We are exuberant to announce the release of:

taskflow 2.4.0: Taskflow structured state management library.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 2.3.0..2.4.0


8eed13c Updated from global requirements
a643e92 Updated from global requirements
c290741 Remove white space between print and ()
7fa93b9 Updated from global requirements
18024a6 Add Python 3.5 classifier and venv
38c5812 Replace assertEqual(None, *) with assertIsNone in tests
35a9305 Ensure the fetching jobs does not fetch anything when in bad state


Diffstat (except docs and test files)
-

requirements.txt  |  6 +-
setup.cfg |  1 +
taskflow/examples/retry_flow.py   |  6 +-
taskflow/jobs/backends/impl_zookeeper.py  | 74 ---
test-requirements.txt |  2 +-
tox.ini   |  1 +
10 files changed, 83 insertions(+), 23 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 73819aa..6e54d79 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17 +17 @@ enum34;python_version=='2.7' or python_version=='2.6' or 
python_version=='3.3' #
-futurist>=0.11.0 # Apache-2.0
+futurist!=0.15.0,>=0.11.0 # Apache-2.0
@@ -29 +29 @@ contextlib2>=0.4.0 # PSF License
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -44 +44 @@ automaton>=0.5.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 6606911..172b449 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -31 +31 @@ psycopg2>=2.5 # LGPL/ZPL
-sqlalchemy-utils # BSD License
+SQLAlchemy-Utils # BSD License



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslosphinx 4.7.0 release (newton)

2016-08-02 Thread no-reply
We are content to announce the release of:

oslosphinx 4.7.0: OpenStack Sphinx Extensions and Theme

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslosphinx

With package available at:

https://pypi.python.org/pypi/oslosphinx

Please report issues through launchpad:

http://bugs.launchpad.net/oslosphinx

For more details, please see below.

Changes in oslosphinx 4.6.0..4.7.0
--

3bcdfc6 Allow "Other Versions" section to be configurable
3fc15a5 fix other versions sidebar links


Diffstat (except docs and test files)
-

oslosphinx/__init__.py | 18 ++
oslosphinx/theme/openstack/layout.html |  7 ---
oslosphinx/theme/openstack/theme.conf  |  1 +
4 files changed, 30 insertions(+), 7 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] tooz 1.42.0 release (newton)

2016-08-02 Thread no-reply
We are mirthful to announce the release of:

tooz 1.42.0: Coordination library for distributed systems.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.41.0..1.42.0
--

e5c530a etcd: don't run heartbeat() concurrently
f296519 etcd: properly block when using 'wait'
0d2bd80 Share _get_random_uuid() among all tests
b322024 Updated from global requirements
c09b20b Clean leave group hooks when unwatching.
fcc7ea1 Fix the test test_unwatch_elected_as_leader.
324482f Updated from global requirements


Diffstat (except docs and test files)
-

requirements.txt|  6 +--
tooz/coordination.py|  3 ++
tooz/drivers/etcd.py| 14 +-
tox.ini |  2 +-
11 files changed, 137 insertions(+), 67 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a9cecef..0513da4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -13,2 +13,2 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6' # 
BSD
-futurist>=0.11.0 # Apache-2.0
-oslo.utils>=3.14.0 # Apache-2.0
+futurist!=0.15.0,>=0.11.0 # Apache-2.0
+oslo.utils>=3.15.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] stevedore 1.17.0 release (newton)

2016-08-02 Thread no-reply
We are mirthful to announce the release of:

stevedore 1.17.0: Manage dynamic plugins for Python applications

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/stevedore

With package available at:

https://pypi.python.org/pypi/stevedore

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

For more details, please see below.

Changes in stevedore 1.16.0..1.17.0
---

0c6b78c Remove discover from test-requirements
4ec5022 make error reporting for extension loading quieter
76c14b1 Add Python 3.5 classifier and venv
c8a3964 Replace assertEquals() with assertEqual()


Diffstat (except docs and test files)
-

setup.cfg  |  1 +
stevedore/extension.py | 11 +--
test-requirements.txt  |  1 -
tox.ini|  2 +-
5 files changed, 12 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 977194a..6b0ae8d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +9,0 @@ testrepository>=0.0.18 # Apache-2.0/BSD
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.service 1.14.0 release (newton)

2016-08-02 Thread no-reply
We are happy to announce the release of:

oslo.service 1.14.0: oslo.service library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.13.0..1.14.0
--

0c3a29d Updated from global requirements
aac1d89 Fix parameters of assertEqual are misplaced


Diffstat (except docs and test files)
-

requirements.txt |  2 +-
6 files changed, 50 insertions(+), 50 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 83d22d4..8df3c2b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ monotonic>=0.6 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.vmware 2.12.0 release (newton)

2016-08-02 Thread no-reply
We are happy to announce the release of:

oslo.vmware 2.12.0: Oslo VMware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

Changes in oslo.vmware 2.11.0..2.12.0
-

37283c8 Updated from global requirements
0258fe0 Add http_method to download_stream_optimized_data
2f9af24 Refactor the image transfer
7c893ca Remove discover from test-requirements
170d6b7 Updated from global requirements


Diffstat (except docs and test files)
-

oslo_vmware/image_transfer.py| 424 ---
requirements.txt |   4 +-
test-requirements.txt|   1 -
4 files changed, 67 insertions(+), 669 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4f4f3a6..0637801 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ pbr>=1.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
@@ -12 +12 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b8c2e46..e9eac53 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +7,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.6.0 release (newton)

2016-08-02 Thread no-reply
We are excited to announce the release of:

oslo.messaging 5.6.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

5.6.0
^


New Features


   * Idle connections in the pool will be expired and closed. Default
     ttl is 1200s. The following configuration params were added:


  * *conn_pool_ttl* (default 1200)

  * *conn_pool_min_size* (default 2)
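
   Conceptually, the new behaviour amounts to something like the sketch
   below (a toy illustration of idle-connection expiry, not oslo.messaging's
   actual pool code):

       import time

       class IdleTTLPool(object):
           """Keep idle connections, close ones unused longer than ttl."""

           def __init__(self, ttl=1200, min_size=2):
               self.ttl = ttl
               self.min_size = min_size
               self._idle = []   # (connection, last_used) pairs, oldest first

           def put(self, conn):
               self._idle.append((conn, time.time()))

           def get(self):
               self._expire()
               if self._idle:
                   return self._idle.pop()[0]
               return None   # caller creates a fresh connection

           def _expire(self):
               now = time.time()
               while (len(self._idle) > self.min_size
                      and now - self._idle[0][1] > self.ttl):
                   conn, _ = self._idle.pop(0)
                   conn.close()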


Deprecation Notes
*

* The rabbitmq driver option "DEFAULT/max_retries" has been
  deprecated for removal (at a later point in the future) as it did
  not make logical sense for notifications and for RPC.

Changes in oslo.messaging 5.5.0..5.6.0
--

d946fb1 Fix pika functional tests
7576497 fix a typo in impl_rabbit.py
1288621 Updated from global requirements
317641c Fix syntax error on notification listener docs
a6f0aae Delete fanout queues on gracefully shutdown
564e423 Properly cleanup listener and driver on simulator exit
18c8bc9 [zmq] Let proxy serve on a static port numbers
162f6e9 Introduce TTL for idle connections
9ed95bb Fix parameters of assertEqual are misplaced
95d0402 Fix misstyping issue
d1cbca8 Updated from global requirements
73b3286 Updated from global requirements
ff9b4bb notify: add a CLI tool to manually send notifications
538c84b Add deprecated relnote for max_retries rabbit configuration option
ae1123e [zmq] Add py34 configuration for functional tests
07187f9 [zmq] Merge publishers
8e77865 Add Python 3.5 classifier and venv
689ba08 Replace assertEqual(None, *) with assertIsNone in tests
c6c70ab Updated from global requirements
66ded1f [zmq] Use json/msgpack instead of pickle
ac484f6 [zmq] Refactor publishers
96438a3 Add Python 3.4 functional tests for AMQP 1.0 driver
3514638 tests: allow to override the functionnal tests suite args
2b50ea5 [zmq] Additional configurations for f-tests
6967bd7 Remove discover from test-requirements
865bfec tests: rabbitmq failover tests
df9a009 Imported Translations from Zanata
6945323 Updated from global requirements
861a3ac Remove rabbitmq max_retries
61aae0f Config: no need to set default=None
dc1309a Improve the impl_rabbit logging


Diffstat (except docs and test files)
-

oslo_messaging/_cmd/zmq_proxy.py   |  34 +++-
oslo_messaging/_drivers/amqp1_driver/opts.py   |   2 -
oslo_messaging/_drivers/base.py|   7 +-
oslo_messaging/_drivers/impl_kafka.py  |  13 +-
oslo_messaging/_drivers/impl_rabbit.py |  82 ++---
oslo_messaging/_drivers/impl_zmq.py|   7 +-
.../pika_driver/pika_connection_factory.py |   8 +-
oslo_messaging/_drivers/pool.py|  65 ---
.../_drivers/zmq_driver/broker/__init__.py |   0
.../_drivers/zmq_driver/broker/zmq_proxy.py|  80 -
.../_drivers/zmq_driver/broker/zmq_queue_proxy.py  | 140 ---
.../publishers/dealer/zmq_dealer_call_publisher.py | 106 ---
.../publishers/dealer/zmq_dealer_publisher.py  |  89 -
.../publishers/dealer/zmq_dealer_publisher_base.py | 110 
.../dealer/zmq_dealer_publisher_direct.py  |  53 ++
.../dealer/zmq_dealer_publisher_proxy.py   | 199 +
.../client/publishers/dealer/zmq_reply_waiter.py   |  66 ---
.../client/publishers/zmq_pub_publisher.py |  71 
.../client/publishers/zmq_publisher_base.py| 158 +++-
.../client/publishers/zmq_push_publisher.py|  52 --
.../_drivers/zmq_driver/client/zmq_client.py   |  54 ++
.../_drivers/zmq_driver/client/zmq_client_base.py  |  25 ++-
.../_drivers/zmq_driver/client/zmq_receivers.py| 145 +++
.../_drivers/zmq_driver/client/zmq_response.py |  18 +-
.../zmq_driver/client/zmq_routing_table.py |  65 +++
.../_drivers/zmq_driver/client/zmq_senders.py  | 105 +++
.../zmq_driver/client/zmq_sockets_manager.py   |  96 ++
.../_drivers/zmq_driver/proxy/__init__.py  |   0
.../_drivers/zmq_driver/proxy/zmq_proxy.py |  98 ++
.../zmq_driver/proxy/zmq_publisher_proxy.py|  74 
.../_drivers/zmq_driver/proxy/zmq_queue_proxy.py   | 150 
.../server/consumers/zmq_dealer_consumer.py|  78 ++--
.../server/consumers/zmq_pull_consumer.py  |  69 ---
.../server/consumers/zmq_router_consumer.py|  66 +++
.../server/consumers/zmq_sub_consumer.py   |  26 +--
.../zmq_driver/server/zmq_incoming_message.py  |  51 +++---
oslo_messaging/_drivers/zmq_driver/zmq_names.py|  18 +-
oslo_messaging/_drivers/zmq_driver/zmq_socket.py   |  80 +++--

[openstack-dev] [new][oslo] oslo.versionedobjects 1.14.0 release (newton)

2016-08-02 Thread no-reply
We are satisfied to announce the release of:

oslo.versionedobjects 1.14.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.13.0..1.14.0
---

67ba3a0 Updated from global requirements
def295f Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
setup.cfg| 1 +
tox.ini  | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7813946..2e55edb 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.serialization 2.12.0 release (newton)

2016-08-02 Thread no-reply
We are jazzed to announce the release of:

oslo.serialization 2.12.0: Oslo Serialization library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.serialization

With package available at:

https://pypi.python.org/pypi/oslo.serialization

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

For more details, please see below.

Changes in oslo.serialization 2.11.0..2.12.0


afb5332 Updated from global requirements
5ae0432 Fix parameters of assertEqual are misplaced
aa0e480 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt  |  2 +-
setup.cfg |  1 +
tox.ini   |  2 +-
6 files changed, 71 insertions(+), 70 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 54901dd..33ada12 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13 +13 @@ msgpack-python>=0.4.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.reports 1.13.0 release (newton)

2016-08-02 Thread no-reply
We are pleased to announce the release of:

oslo.reports 1.13.0: oslo.reports library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.reports

With package available at:

https://pypi.python.org/pypi/oslo.reports

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.reports

For more details, please see below.

Changes in oslo.reports 1.12.0..1.13.0
--

329eb7c Updated from global requirements
0565210 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt | 2 +-
setup.cfg| 2 +-
tox.ini  | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 67f73ae..56641ec 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11 +11 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.privsep 1.11.0 release (newton)

2016-08-02 Thread no-reply
We are chuffed to announce the release of:

oslo.privsep 1.11.0: OpenStack library for privilege separation

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.10.0..1.11.0
--

108b201 Updated from global requirements
9510ac0 Drop python3.3 support in classifier


Diffstat (except docs and test files)
-

requirements.txt | 2 +-
setup.cfg| 1 -
2 files changed, 1 insertion(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1397b11..34304cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.db 4.9.0 release (newton)

2016-08-02 Thread no-reply
We are delighted to announce the release of:

oslo.db 4.9.0: Oslo Database library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

4.9.0
^

Upgrade Notes

* The allowed values for the "connection_debug" option are now
  restricted to the range between 0 and 100 (inclusive). Previously a
  number lower than 0 or higher than 100 could be given without error.
  But now, a "ConfigFileValueError" will be raised when the option
  value is outside this range.

Changes in oslo.db 4.8.0..4.9.0
---

6bdb99f Updated from global requirements
60b5b14 Memoize sys.exc_info() before attempting a savepoint rollback
1dc55b8 Updated from global requirements
a9ec13d Consolidate pifpaf commands into variables
a794790 Updated from global requirements
5da12af Updated from global requirements
abebffc Fixed unit tests running on Windows
20613c3 Remove discover from setup.cfg
7b76cdf Add dispose_pool() method to enginefacade context, factory
e0cc306 Set a min and max on the connection_debug option
d594f62 Set max pool size default to 5
72bab42 tox: add py35 envs for convenience


Diffstat (except docs and test files)
-

oslo_db/exception.py   |  7 ++-
oslo_db/options.py | 15 ++---
oslo_db/sqlalchemy/enginefacade.py | 14 +
oslo_db/sqlalchemy/engines.py  |  2 +-
oslo_db/sqlalchemy/exc_filters.py  | 46 ++-
.../connection_debug_min_max-bf6d53d49be7ca52.yaml |  7 +++
requirements.txt   |  6 +-
setup.cfg  |  4 +-
tox.ini| 37 
12 files changed, 209 insertions(+), 44 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index fbc015b..6befe2a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,2 +10,2 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
@@ -14 +14 @@ sqlalchemy-migrate>=0.9.6 # Apache-2.0
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.config 3.14.0 release (newton)

2016-08-02 Thread no-reply
We are amped to announce the release of:

oslo.config 3.14.0: Oslo Configuration API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

3.14.0
^^

New Features

* Added minimum and maximum value limits to FloatOpt.
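
  A small usage sketch of the new limits (the option name and bounds below
  are arbitrary examples):

      from oslo_config import cfg

      opts = [
          cfg.FloatOpt('io_ratio', min=0.0, max=1.0, default=0.5,
                       help='Example option constrained to the range [0.0, 1.0].'),
      ]

      conf = cfg.ConfigOpts()
      conf.register_opts(opts)
      conf([])                 # no CLI args or config files in this toy example
      print(conf.io_ratio)     # -> 0.5; out-of-range values are rejected at parse time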

Changes in oslo.config 3.13.0..3.14.0
-

b409253 disable lazy translation in sphinx extension
2fdc1cf Trivial: adjust import order to fit the import order guideline
c115719 Make error message more clear
15d3ab8 Add min and max values to Float type and Opt
61224ce Fix parameters of assertEqual are misplaced
55c2026 Updated from global requirements
8ed5f75 Add max_length to URIOpt
f48a897 Remove discover from test-requirements
6f2c57c update docs for sphinxconfiggen
9b05dc9 Add URIOpt to doced option types


Diffstat (except docs and test files)
-

oslo_config/cfg.py |  38 +-
oslo_config/sphinxext.py   |  10 +
oslo_config/types.py   | 110 +++--
.../notes/add-float-min-max-b1a2e16301c8435c.yaml  |   3 +
requirements.txt   |   2 +-
test-requirements.txt  |   1 -
12 files changed, 555 insertions(+), 297 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d1ac579..972e955 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ six>=1.9.0 # MIT
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index d444b33..a11d8f2 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +6,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.middleware 3.16.0 release (newton)

2016-08-02 Thread no-reply
We are grateful to announce the release of:

oslo.middleware 3.16.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.15.0..3.16.0
-

2697995 Updated from global requirements
3a18916 Updated from global requirements
0c00db8 Fix unit tests on Windows


Diffstat (except docs and test files)
-

oslo_middleware/healthcheck/disable_by_file.py | 4 +++-
requirements.txt   | 6 +++---
3 files changed, 11 insertions(+), 6 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index b80e5c6..fdbfbf4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
@@ -12 +12 @@ six>=1.9.0 # MIT
-stevedore>=1.10.0 # Apache-2.0
+stevedore>=1.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.policy 1.13.0 release (newton)

2016-08-02 Thread no-reply
We are thrilled to announce the release of:

oslo.policy 1.13.0: Oslo Policy library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.policy

With package available at:

https://pypi.python.org/pypi/oslo.policy

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

For more details, please see below.

Changes in oslo.policy 1.12.0..1.13.0
-

10a81ba Updated from global requirements
cce967a Add note about not all APIs support policy enforcement by user_id
5273d2c Adds debug logging for policy file validation
09c5588 Fixed unit tests running on Windows
7204311 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

oslo_policy/policy.py| 16 ++
requirements.txt |  2 +-
setup.cfg|  1 +
tox.ini  |  2 +-
5 files changed, 56 insertions(+), 34 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 09e1525..a954394 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.log 3.13.0 release (newton)

2016-08-02 Thread no-reply
We are stoked to announce the release of:

oslo.log 3.13.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.12.0..3.13.0
--

656cef3 Updated from global requirements
92b6ff6 Fix parameters of assertEqual are misplaced
12de127 Updated from global requirements
8cb90f4 Remove discover from test-requirements
3ae0e87 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt|  4 ++--
setup.cfg   |  1 +
test-requirements.txt   |  1 -
tox.ini |  2 +-
5 files changed, 29 insertions(+), 29 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9bac65f..a288b0b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.12.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 4c9bc0c..673f993 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +6,0 @@ hacking<0.11,>=0.10.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Brian Haley

On 08/01/2016 10:15 PM, Matt Riedemann wrote:

Starting from where I accidentally left off:





We also talked a bit about live migration with Neutron. There has been a fix up
for live migration + DVR since Mitaka:

https://review.openstack.org/#/c/275073

It's a bit of a hacky workaround but the longer term solution that we all want (
https://review.openstack.org/#/c/309416 ) is not going to be in Newton and will
need discussion at the Ocata summit in Barcelona (John Garbutt was going to work
with the Neutron team on preparing for the summit on that one). So we agreed to
go with Swami's DVR fix but it needs to be rebased (which still hasn't happened
since the midcycle).


I just pushed an update to the DVR live-migration patch re-based to master, so 
feel free to review again.  Swami or myself will answer any other comments as 
quickly as possible.


Thanks,

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.cache 1.12.0 release (newton)

2016-08-02 Thread no-reply
We are enthusiastic to announce the release of:

oslo.cache 1.12.0: Cache storage for Openstack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.11.0..1.12.0


3009e5f Updated from global requirements
e989c40 Add Python 3.5 classifier and venv
6e9091d Imported Translations from Zanata
30a7cf4 Updated from global requirements


Diffstat (except docs and test files)
-

oslo_cache/locale/en_GB/LC_MESSAGES/oslo_cache.po  | 53 ++
oslo_cache/locale/es/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/fr/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/it/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/ko_KR/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/pt_BR/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/ru/LC_MESSAGES/oslo_cache.po | 12 +++--
oslo_cache/locale/tr_TR/LC_MESSAGES/oslo_cache.po  | 14 +++---
oslo_cache/locale/zh_CN/LC_MESSAGES/oslo_cache.po  | 12 +++--
oslo_cache/locale/zh_TW/LC_MESSAGES/oslo_cache.po  | 14 +++---
.../locale/en_GB/LC_MESSAGES/releasenotes.po   | 30 
requirements.txt   |  4 +-
setup.cfg  |  2 +-
tox.ini|  2 +-
14 files changed, 152 insertions(+), 51 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index b1defe2..2f4ebb9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ six>=1.9.0 # MIT
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.12.0 # Apache-2.0
@@ -10 +10 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.14.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-02 Thread Jay Pipes

On 08/02/2016 08:19 AM, Alex Xu wrote:

Chris had a thought about using ResourceClass to describe capabilities
with an infinite inventory. In the beginning, when we were brainstorming
the idea of Tags, Tan Lin had the same thought, but we said no very
quickly, because ResourceClass is really about quantitative stuff. But
Chris makes a very good point about simplifying the ResourceProvider
model and the API.

After rethinking those ideas, I like simplifying the ResourceProvider
model and the API. But I think the direction is the opposite one.
ResourceClass with an infinite inventory is really hacky. The Placement
API is simple, but using those APIs isn't simple for the user: they need
to create a ResourceClass, then create an infinite inventory. And
ResourceClass isn't manageable like Tags; look at the Tags API, there
are many query parameters.

But look at ResourceClass and ResourceProviderTags: they are exactly the
same, two columns, one an integer id and the other a string.
ResourceClass is just for naming the quantitative stuff. So what we need
is something used for 'naming'. ResourceProviderTags is a higher
abstraction; a Tag is a generic way to name something, so we can simply
use Tags instead of ResourceClass. Then the user can create inventory
with tags, and can also create a ResourceProvider with tags.


No, this actually sounds like way more complexity than is needed, and it 
will make the schema less explicit.



But yes, there may still be an unresolved problem. One problem was
pointed out when I discussed with YingXin how to distinguish whether a
Tag is quantitative or qualitative. He thinks we need an attribute on
the Tag to distinguish it. But an attribute isn't something I like; I
prefer to leave that alone, since the users of the placement API are
admin users.

Any thoughts? Or am I too crazy here... maybe I just need to put this in
the alternatives section in the spec...


A resource class is not a capability, though. It's an indication of a 
type of quantitative consumable that is exposed on a resource provider.


A capability is a string that indicates a feature that a resource 
provider offers. A capability isn't "consumed".


BTW, this is why I continue to think that using the term "tags" in the 
placement API is wrong. The placement API should clearly indicate that a 
resource provider has a set of capabilities. Tags, in Nova at least, are 
end-user-defined simple categorization strings that have no 
standardization and no cataloguing or collation to them.


Capabilities are not end-user-defined -- they can be defined by an 
operator but they are not things that a normal end-user can simply 
create. And capabilities are specifically *not* for categorization 
purposes. They are an indication of a set of features that a resource 
provider exposes.
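
To make the distinction concrete, here is a purely illustrative sketch 
(the field and capability names are made up, not the placement API schema) 
of quantitative inventory keyed by resource class next to a set of 
qualitative capability strings:

    # Illustrative only -- the field and capability names are assumptions.
    resource_provider = {
        'uuid': 'c2f0...',
        'inventories': {          # quantitative: consumed via allocations
            'VCPU':      {'total': 32,     'allocation_ratio': 16.0},
            'MEMORY_MB': {'total': 131072, 'allocation_ratio': 1.5},
            'DISK_GB':   {'total': 2000,   'allocation_ratio': 1.0},
        },
        'capabilities': {         # qualitative: matched against, never consumed
            'HW_CPU_X86_AVX2',
            'STORAGE_DISK_SSD',
        },
    }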


This is why I think the placement API for capabilities should use the 
term "capabilities" and not "tags".


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.concurrency 3.13.0 release (newton)

2016-08-02 Thread no-reply
We are eager to announce the release of:

oslo.concurrency 3.13.0: Oslo Concurrency library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 3.12.0..3.13.0
--

2e8d548 Updated from global requirements
0c3a39e Fix parameters of assertEqual are misplaced
e9a0914 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

requirements.txt |  2 +-
setup.cfg|  1 +
tox.ini  |  6 +--
5 files changed, 44 insertions(+), 45 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 864afc1..81d8537 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] mox3 0.18.0 release (newton)

2016-08-02 Thread no-reply
We are glad to announce the release of:

mox3 0.18.0: Mock object framework for Python

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/mox3

With package available at:

https://pypi.python.org/pypi/mox3

Please report issues through launchpad:

http://bugs.launchpad.net/python-mox3

For more details, please see below.

Changes in mox3 0.17.0..0.18.0
--

22c02dc Remove discover from test-requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 1 -
1 file changed, 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index df05b72..b24a31f 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +11,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] debtcollector 1.7.0 release (newton)

2016-08-02 Thread no-reply
We are glowing to announce the release of:

debtcollector 1.7.0: A collection of Python deprecation patterns and
strategies that help you collect your technical debt in a non-
destructive manner.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/debtcollector

With package available at:

https://pypi.python.org/pypi/debtcollector

Please report issues through launchpad:

http://bugs.launchpad.net/debtcollector

For more details, please see below.

Changes in debtcollector 1.6.0..1.7.0
-

279bbca Remove discover from test-requirements
18f7de4 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg | 2 +-
test-requirements.txt | 1 -
tox.ini   | 6 +-
3 files changed, 6 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 16f75c6..0c903ff 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +7,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] futurist 0.17.0 release (newton)

2016-08-02 Thread no-reply
We are tickled pink to announce the release of:

futurist 0.17.0: Useful additions to futures, from the future.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/futurist

With package available at:

https://pypi.python.org/pypi/futurist

Please report issues through launchpad:

http://bugs.launchpad.net/futurist

For more details, please see below.

Changes in futurist 0.16.0..0.17.0
--

2a0d270 Remove discover from test-requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 1 -
1 file changed, 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 68ed7a0..b18f71d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +11,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-announce] [new][telemetry] gnocchi 2.2.0 release

2016-08-02 Thread no-reply
We are enthusiastic to announce the release of:

gnocchi 2.2.0: Metric as a Service

For more details, please see below.

Changes in gnocchi 2.1.0..2.2.0
---

fb3e98e sqlalchemy: increase the number of max_retries
9170d7b test: fix race condition in update testing
5e71480 add support for coordination
fba36a1 tests: extend the test timeout to 120s for migration sync testing
4022bdb Add home-page in setup.cfg
defda73 carbonara: embed a benchmark tool
bfefeb2 carbonara: do not use oslo_log
b580c09 indexer: put extend_existing in __tables_args__
0ddbe2f improve task distribution
5baba42 sqlalchemy: simplify kwarg of retry
dbc3b97 Add iso8601 to requirements
4896b9c metricd: cleanup logging message for progress
95aae75 sqlalchemy: remove deprecated kwargs retry_on_request
5551a27 sqlalchemy: fix PostgreSQL transaction aborted in 
unmap_and_delete_tables
d272d07 Fix list resource race
d46fa37 rest: set useful default values for CORS middleware
db0eb3f rest: enable CORS middleware without Paste
880b3b8 truncate AggregatedTimeSerie on init
e57eed5 return explicitly InvalidPagination sort key
a2a7fe5 fix object_exists reference
c36efb1 Indicate we added a bunch of new features
ac86707 doc: Update grafana plugin documentation
a65f0d9 metricd: use Cotyledon lib
db1aeab devstack: Move to grafana 3.x
95e47fc add missing key param to method definition
e7a9a57 rest: allow to use X-Domain-Id in policy rules
b2742c0 Add support for Python 3.5
b9a0707 Fix CORS middleware setup
e39b265 fix tooz requirement
62913af ceph: uses only one ioctx
fe711a1 simplify model loading
d2044e3 Revert "carbonara: compress all TimeSerie classes using LZ4"
932741b devstack: Fix requirement typo
78a37f0 Use pbr WSGI script to build gnocchi-api
c2ef2b0 ceph: change make method names for new measures
439d5a3 Expose resource type state to the API
778affc track resource_type creation/deletion state
a771322 carbonara: compress all TimeSerie classes using LZ4
24d2854 separate cleanup into own worker
f52e62a Tuneup gabbi metric.yaml file to modern standards
da74bc2 Tuneup gabbi resource_type.yaml file to modern standards
4576f1b Tuneup gabbi search_metric.yaml file to modern standards
9ac00bd Tuneup gabbi resource_aggregation.yaml file to modern standards
ebb1e33 Tuneup gabbi resource.yaml file to modern standards
f6a7f4e sqlalchemy: fix MySQL error handling in list_resources
c64132a _carbonara: use tooz heartbeat management
3156b9d _carbonara: set default aggregation_workers_number to 1
e275feb Enable CORS by default
6980d7a Rename gabbits with _ to have - instead
2d7151e Correct concurrency of gabbi tests for gabbi 1.22.0
1da7668 tests: fix Gabbi live test to not rely on legacy resource types
0bbf079 swift: force retry to 1
3af0fcb swift: raise an explicit error if bulk-delete is unavailable
4f2102d Added endpoint type on swift configuration.
8a4ddb3 use async delete when remove measures
93d83cd Fix tempest tests that use SSL
18a260f _carbonara: fix race condition in heartbeat stop condition
37e17ce enable pagination when querying metrics
dbfe050 doc: include an example with the `like' operator
8932aad metricd: only retry on attended errors and print error when coordinator 
fails
8da588d metricd: no max wait, fix comment
10975c5 test: move root tests to their own class
227d5c6 rest: report dynamic aggregation methods in capabilities in a different 
field
4934c2b _carbonara: stop heartbeat thread on stop()
f181d4b tests: create common resources at class init time
ee2eb67 tests: remove skip_archive_policies_creation
955591a tests: do not create legacy resources
09377bf sqlalchemy: add missing constraint delete_resource_type()
3467486 sqlalchemy: no fail if resources and type are deleted under our feet
7ce5051 sqlalchemy: retry on PostgreSQL catalog errors too
b1788f2 sqlalchemy: retry on deadlock in delete_resource_type()
aceb647 Enable releasenotes documentation
72c8b5f Tuneup gabbi transformedids.yaml file to modern standards
cc06c73 Tuneup gabbi search.yaml file to modern standards
5a35b71 Tuneup gabbi pagination.yaml file to modern standards
6dea9de Tuneup gabbi metric_granularity.yaml file to modern standards
6aae457 raise NoSuchMetric when deleting metric already marked deleted
24073a3 Tuneup gabbi history.yaml file to modern standards
fc7877c Tuneup gabbi batch_measures.yaml file to modern standards
d1cecc5 Tuneup gabbi base.yaml file to modern standards
7224d9e Tuneup gabbi async.yaml file to modern standards
18b8baa Tuneup gabbi archive.yaml file to modern standards
505a6a2 Tuneup gabbi archive_rule.yaml file to modern standards
e91d7e6 Tuneup gabbi aggregation.yaml file to modern standards
10ce637 fix some typos in doc, comment & code
d74ea92 add unit column for metric
3cfcd07 devstack: ensure grafana plugin for 2.6 is installed
1b7f01d Revert "tests: protect database upgrade for gabbi tests"
831ed2f Make tempest tests compatible with keystone v3
fb972ac sqlalchemy: retry on deadlock for create_resource_type()

[openstack-dev] [new][oslo] automaton 1.4.0 release (newton)

2016-08-02 Thread no-reply
We are joyful to announce the release of:

automaton 1.4.0: Friendly state machines for python.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/automaton

With package available at:

https://pypi.python.org/pypi/automaton

Please report issues through launchpad:

http://bugs.launchpad.net/automaton

For more details, please see below.

Changes in automaton 1.3.0..1.4.0
-

dab7331 Remove discover from test-requirements
e87dc55 Add Python 3.5 classifier and venv


Diffstat (except docs and test files)
-

setup.cfg | 1 +
test-requirements.txt | 1 -
tox.ini   | 2 +-
3 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 2c695bc..958c5dd 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9 +8,0 @@ coverage>=3.6 # Apache-2.0
-discover # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Dan Sneddon
On 08/02/2016 09:57 AM, Curtis wrote:
> Hi,
> 
> I'm just curious who, if anyone, is using TripleO in production?
> 
> I'm having a hard time finding anywhere to ask end-user type
> questions. #tripleo irc seems to be just a dev channel. Not sure if
> there is anywhere for end users to ask questions. A quick look at
> stackalytics shows it's mostly RedHat contributions, though again, it
> was a quick look.
> 
> If there were other users it would be cool to perhaps try to have a
> session on it at the upcoming ops midcycle.
> 
> Thanks,
> Curtis.
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

Nearly every commercial customer of Red Hat OpenStack Platform (OSP)
since version 7 (version 9 is coming out imminently) is using TripleO
in production, since the installer is TripleO. That's hundreds of
production installations, some of them very large scale. The exact same
source code is used for RDO and OSP. HP used to use TripleO, but I
don't think they have contributed to TripleO since they updated Helion
with a proprietary installer.

Speaking for myself and the other TripleO developers at Red Hat, we do
try our best to answer user questions in #rdo. You will also find some
production users hanging out there. The best times to ask questions are
during East Coast business hours, or during business hours of GMT +1
(we have a large development office in Brno, CZ with engineers that
work on TripleO). There is also an RDO-specific mailing list available
here: https://www.redhat.com/mailman/listinfo/rdo-list

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-08-02 Thread gordon chung


On 29/07/16 03:29 PM, gordon chung wrote:

i'm using Ceph. but i should mention i also only have 1 thread enabled
because python+threading is... yeah.

i'll give it a try again with threads enabled.


I tried this again with 16 threads. as expected, python (2.7.x) threads do jack 
all.

i also tried lowering the points per object to 900 (~8KB max). this performed 
~4% worse for read/writes. i should probably add a disclaimer that i'm batching 
75K points/metric at once which is probably not normal.

so from very rough testing, we can choose to lower it to 3600 points, which 
offers better split opportunities with negligible improvement/degradation, or 
go even lower to 900 points with a potentially small write degradation (massive 
batching).


--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-02 Thread Jim Rollenhagen
On Mon, Aug 01, 2016 at 09:15:46PM -0500, Matt Riedemann wrote:
> 
> 
> 
> * Placement API for resource providers
> 
> Jay's personal goal for Newton is for the resource tracker to be writing
> inventory and allocation data via the placement API. We want to get the data
> writing into the placement API in Newton so we can start using it in Ocata.
> 
> There are some spec amendments up for resource providers, at least one has
> merged, and the initial placement API change merged today:
> 
> https://review.openstack.org/#/c/329149/
> 
> We talked about supporting dynamic resource classes for Ironic use cases
> which is a stretch goal for Nova in Newton. Jay has a spec for that here:
> 
> https://review.openstack.org/#/c/312696/
> 
> There is a lot more detail in the etherpad and honestly Jay Pipes or Jim
> Rollenhagen would be better to summarize what came out of this at the
> midcycle and what's being worked on for dynamic resource classes right now.

I actually wrote a bit about this last week:
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099922.html

I'm not sure it covers everything, but it's the important pieces I got
from it.

// jim

> We talked about a separate placement API database but decided this should be
> optional to avoid forcing yet another nova database on deployers in a couple
> of releases. This would be available for deployers to use to avoid some
> future upgrade pain when the placement service is split out from Nova, but
> if not configured it will default to the API database for the placement API.
> There are a bunch more details and discussion on that in this thread that
> Chris Dent started after the midcycle:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-July/100302.html
> 
> 
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Saha
Hi,

We would like to introduce the community to a new Python based project
called DON – Diagnosing OpenStack Networking. More details about the
project can be found at https://github.com/openstack/python-don.

DON, written primarily in Python and available as a dashboard in OpenStack
Horizon (Liberty release), is a network analysis and diagnostic system that
provides a completely automated service for verifying and diagnosing the
networking functionality provided by OVS. The genesis of this idea was
presented at the Vancouver summit, May 2015. Hopefully the community will
find this project interesting and will give us valuable feedback.

Regards,
Amit
Cisco Bangalore
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-02 16:30:06 +:
> On 02/08/2016 16:37, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
> >> On 02/08/2016 14:37, Doug Hellmann wrote:
> >>> Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
>  On 29/07/2016 21:59, Doug Hellmann wrote:
> > One of the outcomes of the discussion at the leadership training
> > session earlier this year was the idea that the TC should set some
> > community-wide goals for accomplishing specific technical tasks to
> > get the projects synced up and moving in the same direction.
> >
> > After several drafts via etherpad and input from other TC and SWG
> > members, I've prepared the change for the governance repo [1] and
> > am ready to open this discussion up to the broader community. Please
> > read through the patch carefully, especially the "goals/index.rst"
> > document which tries to lay out the expectations for what makes a
> > good goal for this purpose and for how teams are meant to approach
> > working on these goals.
> >
> > I've also prepared two patches proposing specific goals for Ocata
> > [2][3].  I've tried to keep these suggested goals for the first
> > iteration limited to "finish what we've started" type items, so
> > they are small and straightforward enough to be able to be completed.
> > That will let us experiment with the process of managing goals this
> > time around, and set us up for discussions that may need to happen
> > at the Ocata summit about implementation.
> >
> > For future cycles, we can iterate on making the goals "harder", and
> > collecting suggestions for goals from the community during the forum
> > discussions that will happen at summits starting in Boston.
> >
> > Doug
> >
> > [1] https://review.openstack.org/349068 describe a process for managing 
> > community-wide goals
> > [2] https://review.openstack.org/349069 add ocata goal "support python 
> > 3.5"
> > [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> > libraries"
> 
>  I am confused. When I proposed my patch for doing something very similar
>  (Equal Chances for all projects is basically a multiple release goal) I
>  got the following rebuttals:
> 
>   > "and it would be
>   > a mistake to try to require that because the issue is almost always
>   > lack of resources and not lack of desire. Volunteers to contribute
>   > to the work that's needed will do more to help than a
>   > one-size-fits-all policy."
> 
>   > This isn't a thing that gets fixed with policy. It gets fixed with
>   > people.
> 
>   > I am reading through the thread, and it puzzles me that I see a lot
>   > of right words about goals but not enough hints on who is going to
>   > implement that.
> 
>   > I think the right solutions here are human ones. Talk with people.
>   > Figure out how you can help lighten their load so that they have
>   > breathing space. I think hiding behind policy becomes a way to make
>   > us more separate rather than engaging folks more directly.
> 
>   > Coming at this with top down declarations of how things should work
>   > not only ignores reality of the ecosystem and the current state of
>   > these projects, but is also going about things backwards.
> 
>   > This entirely ignores that these are all open source projects,
>   > which are often very sparsely contributed to. If you have an issue
>   > with a project or the interface it provides, then contribute to it.
>   > Don't make grandiose resolutions trying to force things into what you
>   > see as an ideal state, instead contribute to help fix the problems
>   > you've identified.
> 
>  And yet, we are currently suggesting a system that will actively (in an
>  undefined way) penalise projects who do not comply with a different set
>  of proposals, done in a top down manner.
> 
>  I may be missing the point, but the two proposals seem to have
>  similarities - what is the difference?
> 
>  When I see comments like:
> 
>   > Project teams who join the big tent sign up to the rights and
>   > responsibilities that go along with membership. Those responsibilities
>   > include taking some direction from the TC, including regarding work
>   > they may not give the same priority as the broader community.
> 
>  It sounds like top down is OK, but previous ML thread / TC feedback
>  has been different.
> >>>
> >>> One difference is that these goals are not things like "the
> >>> documentation team must include every service project in the
> >>> installation guide" but rather would be phrased like "every project
> >>> must provide an installation guide". The work 

[openstack-dev] [neutron][horizon][announce] Introducing DON

2016-08-02 Thread Amit Kumar Saha (amisaha)
Hi,

We would like to introduce the community to a new Python based project called 
DON - Diagnosing OpenStack Networking. More details about the project can be 
found at https://github.com/openstack/python-don.

DON, written primarily in Python and available as a dashboard in OpenStack 
Horizon (Liberty release), is a network analysis and diagnostic system that 
provides a completely automated service for verifying and diagnosing the 
networking functionality provided by OVS. The genesis of this idea was 
presented at the Vancouver summit, May 2015. Hopefully the community will find 
this project interesting and will give us valuable feedback.

Regards,
Amit
Cisco Bangalore

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [networking-sfc] Flow classifier conflict logic

2016-08-02 Thread Farhad Sunavala
Please send the tenant ids of all six neutron ports.
From admin: neutron port-show  | grep tenant_id
Thanks, Farhad.

On Monday, August 1, 2016 7:44 AM, Artem Plakunov  
wrote:
 

  Thanks. 
 
 You said though that classifier must be unique within a tenant. I tried 
creating chains in two different tenants by different users without any RBAC 
rules. So there are two tenants, each has 1 network, 2 vms (source, service) 
and an admin user. I used different openrc configs for each user yet still get 
the same conflict. 
 
 Info about the test is in the attachment
 On 31.07.2016 5:25, Farhad Sunavala wrote:
  
  Yes, this was intentionally done. The logical-source-port is important only 
at the point of classification. All successive classifications rely only on the 
5 tuple and MPLS label (chain ID). 
  Consider an extension of the scenario you mention below. 
  Sources (similar to your case): a, b
  Port-pairs (added ppe and ppf): ppc, ppd, ppe, ppf
  Port-pair-groups (added ppge and ppgf): ppgc, ppgd, ppge, ppgf
  Flow-classifiers:
    fc1: logical-source-port of a && tcp
    fc2: logical-source-port of b && tcp
  Port-chains:
    pc1: fc1 && (ppgc + ppge)
    pc2: fc2 && (ppgd + ppgc + ppgf)
  
  
  The flow-classifier has logical-src-port and protocol=tcp. The 
logical-src-port has no relevance in the middle of the chain. 
  In the middle of the chain, the only relevant flow-classifier is 
protocol=tcp. 
  If we allow it, we cannot distinguish TCP traffic coming out of ppgc (and 
subsequently ppc)  as to whether to mark it with the label for pc1 or the label 
for pc2. 
  In other words, within a tenant the flow-classifiers need to be unique wrt 
the 5 tuples. 
  thanks, Farhad. 
 Date: Fri, 29 Jul 2016 18:01:05 +0300
 From: Artem Plakunov 
 To: openstack@lists.openstack.org
 Subject: [Openstack] [networking-sfc] Flow classifier conflict logic
 Message-ID: <579b6fb1.3030...@lvk.cs.msu.su>
 Content-Type: text/plain; charset="utf-8"; Format="flowed"
 
 Hello.
 We have two deployments with networking-sfc:
 mirantis 8.0 (liberty) and mirantis 9.0 (mitaka).
 
 I noticed a difference in how flow classifiers conflict with each other 
 which I do not understand. I'm not sure if it is a bug or not.
 
 I did the following on mitaka:
 1. Create tenant 1 and network 1
 2. Launch vms A and B in network 1
 3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C 
 in network 1
 4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D 
 in network 1
 5. Setup sfc:
     create two port pairs for vm C and vm D with a bidirectional port
     create two port pair groups with these pairs (one pair in one group)
     create flow classifier 1: logical-source-port = vm A port, protocol 
 = tcp
     create flow classifier 2: logical-source-port = vm B port, protocol 
 = tcp
     create chain with group 1 and classifier 1
     create chain with group 2 and classifier 2 - this step gives the 
 following error:
 
 Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow 
 Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 Neutron server returns request_ids: 
 ['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']
 
 The only thing neutron logs have is this from server.log:
 2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource 
 [req-9d0eecec-2724-45e8-84b4-7ccf67168b03 
 0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -] 
 create failed (client error): Flow Classifier 
 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier 
 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 
 I tried the same in liberty and it works and sfc successfully routes 
 traffic from both vms to their respective port groups
 
 Liberty setup:
 neutron version 7.0.4
 neutronclient version 3.1.1
 networking-sfc version 1.0.0 (from pip package)
 
 Mitaka setup:
 neutron version 8.1.1
 neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
 networking-sfc version 1.0.1.dev74 (from master branch commit 
 6730b6810355761cf55f04a40cd645f065f15752)
 
 I'll attach the output of commands neutron port-list, port-pair-list, 
 port-pair-group-list, flow-classifier-list and port-chain-list.
 
 Is this an intended flow classifier behavior? If so, why? The port 
 chains and all their participants are different.
 
 
 
   
 
 

  ___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Pedro Sousa
Hi,

I'm using it with CentOS. I've installed mitaka from CentOS Sig Repos and
followed Redhat Documentation:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/director-installation-and-usage/director-installation-and-usage

Let me know if you have more questions.

Regards


On Tue, Aug 2, 2016 at 5:57 PM, Curtis  wrote:

> Hi,
>
> I'm just curious who, if anyone, is using TripleO in production?
>
> I'm having a hard time finding anywhere to ask end-user type
> questions. #tripleo irc seems to be just a dev channel. Not sure if
> there is anywhere for end users to ask questions. A quick look at
> stackalytics shows it's mostly RedHat contributions, though again, it
> was a quick look.
>
> If there were other users it would be cool to perhaps try to have a
> session on it at the upcoming ops midcycle.
>
> Thanks,
> Curtis.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [OpenStack-Infra] Work toward a translations checksite and call for help

2016-08-02 Thread Elizabeth K. Joseph
On Mon, Aug 1, 2016 at 12:12 PM, Jeremy Stanley  wrote:
> I'm hesitant to rely on unstack/clean/stack working consistently
> over time, though maybe others have seen them behave more reliably
> than I think they do. I had assumed we'd replace with fresh servers
> each time and bootstrap DevStack from scratch, though perhaps that's
> overkill?

This is what I was assuming as well, since we'd need a fresh version
of DevStack itself each time so the latest translations cleanly apply.
It would be hard to track all the changes by just doing
unstack/clean/fresh DevStack clone/stack, even if it was reliable over
time (my experience has also been that it's not).

I also learned the other day that the rejoin_stack.sh script was
largely unmaintained and removed in Mitaka, so any reboot forces you
to run unstack/clean/stack again, which is worth considering
as we discuss snapshots.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread John van Ommen
The 1.x versions of Helion OpenStack used TripleO.

On Aug 2, 2016 9:59 AM, "Curtis"  wrote:

> Hi,
>
> I'm just curious who, if anyone, is using TripleO in production?
>
> I'm having a hard time finding anywhere to ask end-user type
> questions. #tripleo irc seems to be just a dev channel. Not sure if
> there is anywhere for end users to ask questions. A quick look at
> stackalytics shows it's mostly RedHat contributions, though again, it
> was a quick look.
>
> If there were other users it would be cool to perhaps try to have a
> session on it at the upcoming ops midcycle.
>
> Thanks,
> Curtis.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [Manila] Migration APIs 2-phase vs. 1-phase

2016-08-02 Thread Ben Swartzlander
It occurred to me that if we write the 2-phase migration APIs correctly, 
then it will be fairly trivial to implement 1-phase migration outside 
Manila (in the client, or even higher up).


I would like to propose that we change the migration API to actually 
work that way, because I think it will have positive impact on the 
driver interface and it will make the internals for migration a lot 
simpler. Specifically, I'm proposing that the Manila REST API only 
supports starting/completing migrations, and querying the status of an 
ongoing migration -- there should be no automatic looping inside Manila 
to perform a start+complete in 1 shot.
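
To illustrate how thin that 1-phase wrapper could be, here is a rough 
client-side sketch (the method and state names below are placeholders, 
not the agreed Manila API):

    import time

    def migrate_share_one_shot(client, share_id, dest_host, poll_interval=5):
        # Kick off the first (data copy) phase.
        client.shares.migration_start(share_id, dest_host)

        # Poll until the driver reports phase 1 is done (or has failed).
        while True:
            progress = client.shares.migration_get_progress(share_id)
            state = progress.get('task_state')
            if state == 'migration_error':
                raise RuntimeError('migration of %s failed' % share_id)
            if state == 'migration_driver_phase1_done':
                break
            time.sleep(poll_interval)

        # Cut over to the destination.
        client.shares.migration_complete(share_id)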


Additionally I think it makes sense to make all the migration driver 
interfaces more asynchronous, but that change is less urgent. Getting 
the driver interface exactly right is less important than getting the 
REST API right in Newton. Nevertheless, I think we should aim for a 
driver interface that expects all the migration calls to return quickly 
and for status polling to occur automatically on long running 
operations. This will enable much better behavior when restarting 
services during a migration.


I'm going to put a topic on the meeting agenda for Thursday to discuss 
this in more detail, but if anyone has other feelings please chime in here.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [networking-sfc] Flow classifier conflict logic

2016-08-02 Thread Farhad Sunavala
Please send the tenant ids of all six neutron ports.
From admin: neutron port-show  | grep tenant_id
Thanks, Farhad.

On Monday, August 1, 2016 7:44 AM, Artem Plakunov  
wrote:
 

  Thanks. 
 
 You said though that classifier must be unique within a tenant. I tried 
creating chains in two different tenants by different users without any RBAC 
rules. So there are two tenants, each has 1 network, 2 vms (source, service) 
and an admin user. I used different openrc configs for each user yet still get 
the same conflict. 
 
 Info about the test is in the attachment
 On 31.07.2016 5:25, Farhad Sunavala wrote:
  
  Yes, this was intentionally done. The logical-source-port is important only 
at the point of classification. All successive classifications rely only on the 
5 tuple and MPLS label (chain ID). 
  Consider an extension of the scenario you mention below. 
  Sources (similar to your case): a, b
  Port-pairs (added ppe and ppf): ppc, ppd, ppe, ppf
  Port-pair-groups (added ppge and ppgf): ppgc, ppgd, ppge, ppgf
  Flow-classifiers:
    fc1: logical-source-port of a && tcp
    fc2: logical-source-port of b && tcp
  Port-chains:
    pc1: fc1 && (ppgc + ppge)
    pc2: fc2 && (ppgd + ppgc + ppgf)
  
  
  The flow-classifier has logical-src-port and protocol=tcp. The 
logical-src-port has no relevance in the middle of the chain. 
  In the middle of the chain, the only relevant flow-classifier is 
protocol=tcp. 
  If we allow it, we cannot distinguish TCP traffic coming out of ppgc (and 
subsequently ppc)  as to whether to mark it with the label for pc1 or the label 
for pc2. 
  In other words, within a tenant the flow-classifiers need to be unique wrt 
the 5 tuples. 
  thanks, Farhad. 
 Date: Fri, 29 Jul 2016 18:01:05 +0300
 From: Artem Plakunov 
 To: openst...@lists.openstack.org
 Subject: [Openstack] [networking-sfc] Flow classifier conflict logic
 Message-ID: <579b6fb1.3030...@lvk.cs.msu.su>
 Content-Type: text/plain; charset="utf-8"; Format="flowed"
 
 Hello.
 We have two deployments with networking-sfc:
 mirantis 8.0 (liberty) and mirantis 9.0 (mitaka).
 
 I noticed a difference in how flow classifiers conflict with each other 
 which I do not understand. I'm not sure if it is a bug or not.
 
 I did the following on mitaka:
 1. Create tenant 1 and network 1
 2. Launch vms A and B in network 1
 3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C 
 in network 1
 4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D 
 in network 1
 5. Setup sfc:
     create two port pairs for vm C and vm D with a bidirectional port
     create two port pair groups with these pairs (one pair in one group)
     create flow classifier 1: logical-source-port = vm A port, protocol 
 = tcp
     create flow classifier 2: logical-source-port = vm B port, protocol 
 = tcp
     create chain with group 1 and classifier 1
     create chain with group 2 and classifier 2 - this step gives the 
 following error:
 
 Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow 
 Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 Neutron server returns request_ids: 
 ['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']
 
 The only thing neutron logs have is this from server.log:
 2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource 
 [req-9d0eecec-2724-45e8-84b4-7ccf67168b03 
 0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -] 
 create failed (client error): Flow Classifier 
 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier 
 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
 d1070955-fae9-4483-be9e-0e30f2859282.
 
 I tried the same in liberty and it works and sfc successfully routes 
 traffic from both vms to their respective port groups
 
 Liberty setup:
 neutron version 7.0.4
 neutronclient version 3.1.1
 networking-sfc version 1.0.0 (from pip package)
 
 Mitaka setup:
 neutron version 8.1.1
 neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
 networking-sfc version 1.0.1.dev74 (from master branch commit 
 6730b6810355761cf55f04a40cd645f065f15752)
 
 I'll attach the output of commands neutron port-list, port-pair-list, 
 port-pair-group-list, flow-classifier-list and port-chain-list.
 
 Is this an intended flow classifier behavior? If so, why? The port 
 chains and all their participants are different.
 
 
 
   
 
 

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Gearman-plugin for Jenkins: support for dockerized executors

2016-08-02 Thread Wolniewicz, Maciej (Nokia - PL/Wroclaw)
Hi,



Khai, Clark thank you for your answers.



It looked like we had a problem with dynamic slaves because we tried to use the 
gearman plugin with the newest Jenkins LTS release (2.7.1).

We also had problems with sending Zuul parameters (ZUUL_PROJECT, ZUUL_COMMIT, 
etc.) to Jenkins jobs. Those parameters could be seen in the job description in 
the build history:



  Triggered by change:

  https://gerrite1.ext.net.nokia.com:443/10541 (10541,41)

  Branch: master

  Pipeline: check



however, they were not passed as environment variables to the job.



When we used Jenkins release 1.625.3 - the one suggested on the plugin's wiki 
(https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin) - everything 
started to work. Now the Gearman plugin sees dynamic slaves and is passing Zuul 
parameters to jobs.



Could you tell me if there is a plan to support the newest Jenkins LTS releases 
in the near future (for example 2.7.1)?



On the Zuul documentation page 
(http://docs.openstack.org/infra/system-config/zuul.html) we can see that 
OpenStack is moving away from Jenkins/Zuul to Ansible/Zuul for launching jobs.

Will the Gearman plugin still be developed in that case? Are you planning to 
support this plugin over the long term?



Do you know whether future releases of Zuul (3.x.x) will also use the gearman 
daemon to handle job execution, so that we could use it with the gearman 
plugin?



Br,

Maciek



-Original Message-

From: Zaro [mailto:zaro0...@gmail.com]

Sent: Monday, July 25, 2016 6:45 PM

To: Foerster, Thomas (Nokia - DE/Munich) 

Cc: openstack-infra@lists.openstack.org; Wilkocki, Michal (Nokia - PL/Wroclaw) 
; Wolniewicz, Maciej (Nokia - PL/Wroclaw) 


Subject: Re: [OpenStack-Infra] Gearman-plugin for Jenkins: support for 
dockerized executors



Jenkins still doesn't provide the ability to listen for executor

changes.  It only allows listening to slave node changes with the

ComputerListener[1] extension point.  It doesn't seem like there are any

plans in Jenkins core to provide this in future releases.  If that's

not available then gearman cannot provide the functionality that you

request.



[1] 
https://wiki.jenkins-ci.org/display/JENKINS/Extension+points#Extensionpoints-hudson.slaves.ComputerListener



-Khai



On Mon, Jul 25, 2016 at 5:49 AM, Foerster, Thomas (Nokia - DE/Munich)

 wrote:

> Hi,

>

> We are using the Gearman-plugin (version: 0.2.0) at our Nokia’s Continuous

> Integration environment together with Jenkins (version: 2.7.1). Except the

> Gerrit server, the entire CI environment is dockerized: Zuul servers,

> Jenkins Master instances and build executers being able to scale according

> the demand. The Gearman is being used to handle multiple Jenkins Master and

> build executers across the project.

>

> We would like to start docker machines as build executors on demand and

> according the real CI load. However there seems to be a limitation at the

> Gearman-plugin (0.2.0), that all available build executors have to be known

> and running during plugin start-up time. Docker machines started and

> integrated to Jenkins after plugin start, won’t be recognized by the plugin.

>

> We found that is a known issue and documented at:

> https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin

>

> === CLIP ===

> Known Issues

> Adding or removing executors on nodes will require restarting the gearman

> plugin.  This is because Jenkins does NOT provide a way to listen

> for changes to executors, therefore the gearman plugin does not know that it

> needs to re-register functions due to executor updates.

> === CLIP ===

>

> The Gearman-plugin seems to be still maintained.

> Do you know whether that issue has been taken up for next upcoming plugin

> release?

>

> Thanks in advance for your support.

>

> Best regards.

> ---

> Thomas Förster

> Manager R, A Network Management & SON BU

> NOKIA

>

> Werinherstr. 91

> D-81541 Munich

> Germany

> Building 5541, Room 3056

>

> Mob:  +49 173-25 57 169

> Soft: 8045691

>

> mailto:thomas.foers...@nokia.com

> 

>

> Nokia Solutions and Networks Deutschland GmbH

> Geschäftsleitung / Board of Directors: Wichard von Bredow, Birgit Königsheim

> Sitz der Gesellschaft: München / Registered office: Munich

> Registergericht: München / Commercial registry: Munich, HRB 198136

>

>

>

>

> ___

> OpenStack-Infra mailing list

> OpenStack-Infra@lists.openstack.org

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

>
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[Openstack-operators] Who's using TripleO in production?

2016-08-02 Thread Curtis
Hi,

I'm just curious who, if anyone, is using TripleO in production?

I'm having a hard time finding anywhere to ask end-user type
questions. #tripleo irc seems to be just a dev channel. Not sure if
there is anywhere for end users to ask questions. A quick look at
stackalytics shows it's mostly RedHat contributions, though again, it
was a quick look.

If there were other users it would be cool to perhaps try to have a
session on it at the upcoming ops midcycle.

Thanks,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [HA] RFC: High Availability feedback session for ops in Barcelona

2016-08-02 Thread Adam Spiers
Adam Spiers  wrote:
> Hi all,
> 
> I doubt anyone would dispute that High Availability is a really
> important topic within OpenStack, yet none of the OpenStack
> conferences or Design Summits so far have provided an "official" track
> or similar dedicated space for discussion on HA topics.

[snipped]

> Other possible topics:
> 
>   - Input from operators on their experiences of deployment,
> maintenance, and effectiveness of highly available OpenStack
> infrastructure

I'd like to follow up on this list specifically regarding this idea ...

[snipped]

> Whilst we do have the #openstack-ha IRC channel, weekly IRC meetings,
> and of course the mailing lists, I think it would be helpful to have
> an official space in the design summits for continuation of those
> technical discussions face-to-face.

How much appetite is there for an Ops fishbowl session in the
Barcelona design summit specifically as a forum for operators to
provide feedback on experiences with High Availability?  This would be
an opportunity for operators to bitc^H^H^H^Hpraise existing solutions
and provide feedback to developers on future changes in HA
architecture / functionality :-)

And if there is enough appetite, what's the best way to propose it
being added to the Ops track schedule?  Is it organised by the same
folks who organize other Ops meetups?

Thanks!
Adam

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack] Federated users login into Horizon but, an error appear on apache logs: "Unable to retrieve project list".

2016-08-02 Thread Martinx - ジェームズ
Guys,

 I'm trying to configure OpenStack Federation and, right after logging into
Horizon with a federated user, the following error appears in the Apache /
Keystone logs:

---
Unable to retrieve project list.
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/openstack_auth/user.py", line 313,
in authorized_tenants
is_federated=self.is_federated)
  File "/usr/lib/python2.7/dist-packages/openstack_auth/utils.py", line
307, in get_project_list
projects = client.federation.projects.list()
  File
"/usr/lib/python2.7/dist-packages/keystoneclient/v3/contrib/federation/base.py",
line 34, in list
tenant_list = self._list(url, self.object_type)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 124,
in _list
resp, body = self.client.get(url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line
173, in get
return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line
331, in request
resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line
98, in request
return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 94,
in inner
return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line
467, in request
raise exceptions.from_response(resp, method, url)
BadRequest: Bad Request (HTTP 400)
---

 What does this mean?
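
 In case it helps with debugging: the call that blows up above is
`client.federation.projects.list()`, i.e. (as far as I can tell) a
GET /v3/OS-FEDERATION/projects, so replaying it with curl usually shows the
actual 400 body from keystone, which the traceback swallows. A minimal
sketch (the token variable and controller URL are placeholders, not taken
from this setup):

---
curl -s -i -H "X-Auth-Token: $OS_TOKEN" \
  http://controller:5000/v3/OS-FEDERATION/projects
---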

Thanks!
Thiago
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] (keystone/horizon) ActiveDirectory/ldap for users/groups

2016-08-02 Thread Sean.Boran

1. For example, to list users:
ldapsearch -x -D cn='service-account,dc=example,dc=net' 
'(&(objectClass=person)(cn=*))'  -W

2. admin_token is not commented; it has a hash value, so doing

curl -v -s -H "X-Auth-Token: " http://192.168.0.2:5000/v3/users

< HTTP/1.1 401 Unauthorized

in the keystone logs
2016-08-02 16:26:56.559 5368 INFO keystone.common.wsgi 
[req-27e218af-921d-46dd-9432-e871a35d5908 - - - - -] GET 
http://192.168.0.2:5000/v3/users
2016-08-02 16:26:56.560 5368 WARNING keystone.common.controller 
[req-27e218af-921d-46dd-9432-e871a35d5908 - - - - -] RBAC: Bypassing 
authorization
2016-08-02 16:26:56.561 5368 WARNING keystone.common.utils 
[req-27e218af-921d-46dd-9432-e871a35d5908 - - - - -] Couldn't find the auth 
context.
2016-08-02 16:26:56.562 5368 WARNING keystone.common.wsgi 
[req-27e218af-921d-46dd-9432-e871a35d5908 - - - - -] Authorization failed. The 
request you have made requires authentication. from 192.168.0.2

I don’t see any ldap in syslog.

Sean


From: Kseniya Tychkova 
Date: Tuesday 2 August 2016 at 16:46
To: "openstack@lists.openstack.org" , "Boran 
Sean, INI-INO-BX-IT" 
Subject: [Openstack] (keystone/horizon) ActiveDirectory/ldap for users/groups

Sean,
I would like to help you, but I need more information
1. could you please explain what you mean by:
"On the command line with ldapsearch, users and groups can be listed (so the 
attributes configured should be ok?)"
2. please try to use curl to debug:
 - uncomment "admin_token = ADMIN" in your /etc/keystone/keystone.conf and 
restart keystone
 - curl -s -H "X-Auth-Token: ADMIN" http://localhost:5000/v3/users
 - curl -s -H "X-Auth-Token: ADMIN" http://localhost:5000/v3/groups
3. If something is wrong, check the keystone log; keystone logs ldap requests, so you 
can see and verify them



Kind regards, Kseniya
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] (keystone/horizon) ActiveDirectory/ldap for users/groups

2016-08-02 Thread Sean.Boran
Hi,

So I logged in as admin/default, then switched to the ldap 
domain (horizon/identity/domains/) and added a role.
Next I tried to add a user to that role (/horizon/identity/users), but got “Unable to 
retrieve user list”.

In /var/log/user.log I see

LDAP bind: who=cn=bind-user,dc=example,dc=net
<14>Aug  2 16:12:45 node-16 admin: 2016-08-02 16:12:45.473 5366 INFO 
keystone.common.ldap.core [req-a18130f2-58e4-43e3-8cb2-aed4c112334b 
8ce0f5b503914e08a8e4f24a1ebf83f8 7166483dcbc64ef79390795b9c425be5 - default 
default] LDAP search: base=dc=example,dc=net scope=2 
filterstr=(&(objectClass=person)(cn=*)) attrs=['cn', 'userPassword', 
'userAccountControl', 'sAMAccountName', 'mail', 'description'] attrsonly=0

2016-08-02 16:12:45.473 5366 INFO keystone.common.ldap.core 
[req-a18130f2-58e4-43e3-8cb2-aed4c112334b 8ce0f5b503914e08a8e4f24a1ebf83f8 
7166483dcbc64ef79390795b9c425be5 - default default] LDAP search: 
base=dc=example,dc=net scope=2 filterstr=(&(objectClass=person)(cn=*)) 
attrs=['cn', 'userPassword', 'userAccountControl', 'sAMAccountName', 'mail', 
'description'] attrsonly=0

If the ldap query “(&(objectClass=person)(cn=*))” is run through the CLI 
ldapsearch, it does return a long list of thousands of users.

Ah, just noticed /var/log/keystone/admin.log

2016-08-02 16:17:40.477 5365 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 99, in _ldap_call
2016-08-02 16:17:40.477 5365 ERROR keystone.common.wsgi result = 
func(*args,**kwargs)
2016-08-02 16:17:40.477 5365 ERROR keystone.common.wsgi SIZELIMIT_EXCEEDED: 
{'desc': 'Size limit exceeded'}

I wonder if there is a way for the UI to only fetch the first 100 users, or not 
to fetch any list, but just one by one?
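
One keystone-side knob that looks relevant here, sketched against the [ldap]
section already shown in this thread (the filter value is a made-up example,
not something from this deployment): enabling paged results lets keystone
pull a large AD in chunks instead of tripping the server-side size limit,
and a user_filter can shrink what keystone enumerates in the first place.

[ldap]
# fetch results in pages (RFC 2696) rather than one oversized search
page_size = 100
# optionally restrict which users keystone sees at all
user_filter = (memberOf=cn=openstack-users,ou=groups,dc=example,dc=net)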

Thanks,

Sean



On 02/08/16 17:46, "Alexander Makarov"  wrote:

Sean,

the problem may be the following: in the Mitaka release keystone requires 
the user to have a role in the domain it is being authorized in. We ran into 
the problem when Horizon tried to authorize a user in the Default domain and got 
the same error.


On 02.08.2016 16:25, sean.bo...@swisscom.com wrote:
> Hi,
>
> I’m having a bit of fun trying to use AD for identifying and authorising Users 
> on Openstack.
> The idea is to use AD for read-only access to users/group definitions, but 
> all authorisation data to be stored in SQL.
>
> What works: Users can be authenticated (LDAP bind works, verification of the 
> user), but not yet authorised – one gets "You are not authorized for any 
> projects or domains" after authentication (integration of groups).
> On the command line with ldapsearch, users and groups can be listed (so the 
> attributes configured should be ok?)
>
> Problems when testing with horizon:
> - Login via ldap fails on authorization
> - If logged in as admin in the default (sql) domain, the LDAP domain can be 
> viewed at /horizon/identity/domains/ but users and groups cannot be managed 
> “Unable to retrieve group list”, “Unable to retrieve user list”
> This may also be since the AD contains about 20’000 users (too much data for 
> the user/group management screen)
>
> The /etc/keystone/domains/keystone.example.com is as follows.
>
> [ldap]
> user_enabled_attribute=userAccountControl
> query_scope=sub
> user_filter=
> group_allow_delete=False
> page_size=0
> use_tls=False
> password=NOT_HERE
> user_allow_update=False
> user_id_attribute=cn
> user_enabled_mask=2
> suffix= dc=example,dc=com
> user_enabled_default=512
> group_allow_update=False
> user_name_attribute=sAMAccountName
> chase_referrals=False
> group_allow_create=False
> user_allow_delete=False
>
> group_name_attribute=cn
> group_filter=
> group_member_attribute=member
> group_tree_dn=dc=example,dc=com
> group_objectclass = group
> group_desc_attribute=
> group_id_attribute=
>
> user_pass_attribute=userPassword
> user=cn=my-service-user
> user_allow_create=False
> user_tree_dn=dc=example,dc=com
> url=ldap://ldap.example.com
> user_objectclass=person
>
> [identity]
> driver=keystone.identity.backends.ldap.Identity
>
> Debugging for ldap was enabled to see the ldap binds/queries being sent out.
>
> Versions:
> keystone –version shows 2.3
> Mitaka (with initial install done by Fuel).
>
> Resources consulted so far:
> http://docs.openstack.org/developer/keystone/configuration.html#configuring-the-ldap-identity-provider
> http://docs.openstack.org/admin-guide/keystone_integrate_with_ldap.html
> Book: openstack production recipes.
> Also: https://wiki.openstack.org/wiki/Horizon/DomainWorkFlow but got confused 
> there.
>
> Questions:
> - Are there any good resources out there for AD integration? E.g. How 
> user/group/roles work within an ldap context?
> - Or tips on the above?
> - How can one assign users from LDAP to the _members_ or admin groups to get 
> started?
>
> Thanks in advance,
>
> Sean
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : 

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Hayes, Graham
On 02/08/2016 16:37, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
>> On 02/08/2016 14:37, Doug Hellmann wrote:
>>> Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
 On 29/07/2016 21:59, Doug Hellmann wrote:
> One of the outcomes of the discussion at the leadership training
> session earlier this year was the idea that the TC should set some
> community-wide goals for accomplishing specific technical tasks to
> get the projects synced up and moving in the same direction.
>
> After several drafts via etherpad and input from other TC and SWG
> members, I've prepared the change for the governance repo [1] and
> am ready to open this discussion up to the broader community. Please
> read through the patch carefully, especially the "goals/index.rst"
> document which tries to lay out the expectations for what makes a
> good goal for this purpose and for how teams are meant to approach
> working on these goals.
>
> I've also prepared two patches proposing specific goals for Ocata
> [2][3].  I've tried to keep these suggested goals for the first
> iteration limited to "finish what we've started" type items, so
> they are small and straightforward enough to be able to be completed.
> That will let us experiment with the process of managing goals this
> time around, and set us up for discussions that may need to happen
> at the Ocata summit about implementation.
>
> For future cycles, we can iterate on making the goals "harder", and
> collecting suggestions for goals from the community during the forum
> discussions that will happen at summits starting in Boston.
>
> Doug
>
> [1] https://review.openstack.org/349068 describe a process for managing 
> community-wide goals
> [2] https://review.openstack.org/349069 add ocata goal "support python 
> 3.5"
> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> libraries"

 I am confused. When I proposed my patch for doing something very similar
 (Equal Chances for all projects is basically a multiple release goal) I
 got the following rebuttals:

  > "and it would be
  > a mistake to try to require that because the issue is almost always
  > lack of resources and not lack of desire. Volunteers to contribute
  > to the work that's needed will do more to help than a
  > one-size-fits-all policy."

  > This isn't a thing that gets fixed with policy. It gets fixed with
  > people.

  > I am reading through the thread, and it puzzles me that I see a lot
  > of right words about goals but not enough hints on who is going to
  > implement that.

  > I think the right solutions here are human ones. Talk with people.
  > Figure out how you can help lighten their load so that they have
  > breathing space. I think hiding behind policy becomes a way to make
  > us more separate rather than engaging folks more directly.

  > Coming at this with top down declarations of how things should work
  > not only ignores reality of the ecosystem and the current state of
  > these projects, but is also going about things backwards.

  > This entirely ignores that these are all open source projects,
  > which are often very sparsely contributed to. If you have an issue
  > with a project or the interface it provides, then contribute to it.
  > Don't make grandiose resolutions trying to force things into what you
  > see as an ideal state, instead contribute to help fix the problems
  > you've identified.

 And yet, we are currently suggesting a system that will actively (in an
 undefined way) penalise projects who do not comply with a different set
 of proposals, done in a top down manner.

 I may be missing the point, but the two proposals seem to have
 similarities - what is the difference?

 When I see comments like:

  > Project teams who join the big tent sign up to the rights and
  > responsibilities that go along with membership. Those responsibilities
  > include taking some direction from the TC, including regarding work
  > they may not give the same priority as the broader community.

 It sounds like top down is OK, but previous ML thread / TC feedback
 has been different.
>>>
>>> One difference is that these goals are not things like "the
>>> documentation team must include every service project in the
>>> installation guide" but rather would be phrased like "every project
>>> must provide an installation guide". The work is distributed to the
>>> vertical teams, and not focused in the horizontal teams.
>>
>> Well, the wording was actually "the documentation team must provide a
>> way for all projects to be included in the documentation guide". The
>> work was on the 

[Openstack-operators] [openstack-operators] Ops Midcycle Evening Event!

2016-08-02 Thread Melvin Hillsman
Hey everyone,

If you have not heard already, you surely will soon: the Ops 
Midcycle is coming August 25th and 26th! After the first day we are looking to 
have an evening event, and this is a great opportunity for sponsorship. If you 
think or know your company would be interested in sponsoring the evening event, 
please reply back and we can get the details worked out.

Kind regards,
--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center
mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
Learner | Ideation | Belief | Responsibility | Command
http://osic.org

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [HA] RFC: High Availability track at future Design Summits

2016-08-02 Thread Adam Spiers
Hi Thierry,

Thierry Carrez  wrote:
> Adam Spiers wrote:
> > I doubt anyone would dispute that High Availability is a really
> > important topic within OpenStack, yet none of the OpenStack
> > conferences or Design Summits so far have provided an "official" track
> > or similar dedicated space for discussion on HA topics.
> > [...]
> 
> We do not provide a specific track at the "Design Summit" for HA (or for
> hot upgrades for the matter) but we have space for "cross-project
> workshops" in which HA topics would be discussed. I suspect what you
> mean here is that the one of two sessions that the current setup allows
> are far from enough to tackle that topic efficiently ?

Yes, I think that's probably true.  I get the impression cross-project
workshops are intended more for coordination of common topics between
many official big tent projects, whereas our topics typically involve
a small handful of projects, some of which are currently unofficial.

> IMHO there is dedicated space -- just not enough of it. It's one of the
> issues with the current Design Summit setup -- just not enough time and
> space to tackle everything in one week. With the new event format I
> expect we'll be able to free up more time to discuss such horizontal
> issues

Right.  I'm looking forward to the new format :-)

> but as far as Barcelona goes (where we have less space and less
> time than in Austin), I'd encourage you to still propose cross-project
> workshops (and engage on the Ops side of the Design Summit to get
> feedback from there as well).

OK thanks - I'll try to figure out the best way of following up on
these two points.  I see that

  https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads

is still empty, so I guess we're still on the early side of planning
for design summit tracks, which hopefully means there's still time
to propose a fishbowl session for Ops feedback on HA.

Thanks a lot for the advice!
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Federated users can use Horizon, but can not use `openstack` CLI, error "Could not find user".

2016-08-02 Thread Martinx - ジェームズ
Hey guys,

 I'm having a hard time here configuring OpenStack Federation...

 So far, I can log in to Horizon using my Windows AD credentials, but I
cannot use the command line interface, the `openstack` command, with federated
users.

 Here is the error:

---
ubuntu@controller-1:~$ source ~/tmartins-openrc.conf



Please enter your OpenStack Password:

ubuntu@controller-1:~$ openstack token issue



The request you have made requires authentication. (HTTP 401) (Request-ID:
req-15d779ba-211e-4bc9-b7b6-a3fc887c7f92)
---

 On Apache log, I see the following error ("Could not find user"):

---
http://paste.openstack.org/show/545677/
---

 Note that the "~/tmartins-openrc.conf" file was downloaded from Horizon
itself, after logging in with that very same user.

 It's worth mentioning that both the "admin" and "demo" local users still work
even after enabling Federation, on both Horizon and the `openstack` CLI.
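
 My guess (and it is only a guess) is that the openrc Horizon generates
authenticates with the plain v3 password plugin against Keystone's local
identity backend, where a federated user does not exist, hence the "Could
not find user". If so, the CLI would need to go through the federation flow
via one of the keystoneauth federation auth plugins, roughly like the sketch
below. Every value is a placeholder, and whether v3samlpassword applies
depends on the client versions and on how the IdP is set up (SAML2 ECP has
to be enabled):

---
export OS_AUTH_TYPE=v3samlpassword
export OS_AUTH_URL=https://controller:5000/v3
export OS_IDENTITY_PROVIDER=myidp
export OS_PROTOCOL=saml2
export OS_IDENTITY_PROVIDER_URL=https://idp.example.com/idp/profile/SAML2/SOAP/ECP
export OS_USERNAME=tmartins
export OS_PASSWORD=secret
export OS_PROJECT_NAME=demo
export OS_PROJECT_DOMAIN_NAME=Default
---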

 I really appreciate any help!

Thanks!
Thiago
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Jay Pipes

On 08/02/2016 11:29 AM, Thierry Carrez wrote:

Doug Hellmann wrote:

[...]

Likewise, what if the Manila project team decides they aren't interested
in supporting Python 3.5 or a particular greenlet library du jour that
has been mandated upon them? Is the only filesystem-as-a-service project
going to be booted from the tent?


I hardly think "move off of the EOL-ed version of our language" and
"use a library du jour" are in the same class.  All of the topics
discussed so far are either focused on eliminating technical debt
that project teams have not prioritized consistently or adding
features that, again for consistency, are deemed important by the
overall community (API microversioning falls in that category,
though that's an example and not in any way an approved goal right
now).


Right, the proposal is pretty clearly about setting a number of
reasonable, small goals for a release cycle that would be awesome to
collectively reach. Not really invasive top-down design mandates that we
would expect teams to want to resist.

IMHO if a team has a good reason for not wanting or not being able to
fulfill a common goal that's fine -- it just needs to get documented and
should not result in itself in getting kicked out from anything. If a
team regularly skips on common goals (and/or misses releases, and/or
doesn't fix security issues) that's a general sign that it's not really
behaving like an OpenStack project and then a case could be opened for
removal, but there is nothing new here.


Sure, I have no disagreement with any of the above. I just see TC 
mandates as a slippery slope. I'm practicing my OpenStack civic "duty" 
to guard against the encroachment of project technical independence.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] (keystone/horizon) ActiveDirectory/ldap for users/groups

2016-08-02 Thread Alexander Makarov

Sean,

the problem may be the following: in the Mitaka release keystone requires 
the user to have a role in the domain it is being authorized in. We ran into 
the problem when Horizon tried to authorize a user in the Default domain and got 
the same error.



On 02.08.2016 16:25, sean.bo...@swisscom.com wrote:

Hi,

I’m having a bit of fun trying to use AD for identifying and authorising Users on 
Openstack.
The idea is to use AD for read-only access to users/group definitions, but all 
authorisation data to be stored in SQL.

What works: Users can be authenticated (LDAP bind works, verification of the user), but 
not yet authorised – one gets "You are not authorized for any projects or 
domains" after authentication (integration of groups).
On the command line with ldapsearch, users and groups can be listed (so the 
attributes configured should be ok?)

Problems when testing with horizon:
- Login via ldap fails on authorization
- If logged in as admin in the default (sql) domain, the LDAP domain can be 
viewed at /horizon/identity/domains/ but users and groups cannot be managed 
“Unable to retrieve group list”, “Unable to retrieve user list”
This may also be since the AD contains about 20’000 users (too much data for 
the user/group management screen)

The /etc/keystone/domains/keystone.example.com is as follows.

[ldap]
user_enabled_attribute=userAccountControl
query_scope=sub
user_filter=
group_allow_delete=False
page_size=0
use_tls=False
password=NOT_HERE
user_allow_update=False
user_id_attribute=cn
user_enabled_mask=2
suffix= dc=example,dc=com
user_enabled_default=512
group_allow_update=False
user_name_attribute=sAMAccountName
chase_referrals=False
group_allow_create=False
user_allow_delete=False

group_name_attribute=cn
group_filter=
group_member_attribute=member
group_tree_dn=dc=example,dc=com
group_objectclass = group
group_desc_attribute=
group_id_attribute=

user_pass_attribute=userPassword
user=cn=my-service-user
user_allow_create=False
user_tree_dn=dc=example,dc=com
url=ldap://ldap.example.com
user_objectclass=person

[identity]
driver=keystone.identity.backends.ldap.Identity

Debugging for ldap was enabled to see the ldap binds/queries being sent out.

Versions:
keystone –version shows 2.3
Mitaka (with initial install done by Fuel).

Resources consulted so far:
http://docs.openstack.org/developer/keystone/configuration.html#configuring-the-ldap-identity-provider
http://docs.openstack.org/admin-guide/keystone_integrate_with_ldap.html
Book: openstack production recipes.
Also: https://wiki.openstack.org/wiki/Horizon/DomainWorkFlow but got confused 
there.

Questions:
- Are there any good resources out there for AD integration? E.g. How 
user/group/roles work within an ldap context?
- Or tips on the above?
- How can one assign users from LDAP to the _members_ or admin groups to get 
started?

Thanks in advance,

Sean

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Tim Bell

> On 02 Aug 2016, at 17:13, Hayes, Graham  wrote:
> 
> On 02/08/2016 15:42, Flavio Percoco wrote:
>> On 01/08/16 10:19 -0400, Sean Dague wrote:
>>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
 Thierry, Ben, Doug,
 
 How can we distinguish between. "Project is doing the right thing, but
 others are not joining" vs "Project is actively trying to keep people
 out"?
>>> 
>>> I think at some level, it's not really that different. If we treat them
>>> as different, everyone will always believe they did all the right
>>> things, but got no results. 3 cycles should be plenty of time to drop
>>> single entity contributions below 90%. That means prioritizing bugs /
>>> patches from outside groups (to drop below 90% on code commits),
>>> mentoring every outside member that provides feedback (to drop below 90%
>>> on reviews), shifting development resources towards mentoring / docs /
>>> on ramp exercises for others in the community (to drop below 90% on core
>>> team).
>>> 
>>> Digging out of a single vendor status is hard, and requires making that
>>> your top priority. If teams aren't interested in putting that ahead of
>>> development work, that's fine, but that doesn't make it a sustainable
>>> OpenStack project.
>> 
>> 
>> ++ to the above! I don't think they are that different either and we might 
>> not
>> need to differentiate them after all.
>> 
>> Flavio
>> 
> 
> I do have one question - how are teams getting out of
> "team:single-vendor" and towards "team:diverse-affiliation" ?
> 
> We have tried to get more people involved with Designate using the ways
> we know how - doing integrations with other projects, pushing designate
> at conferences, helping DNS Server vendors to add drivers, adding
> drivers for DNS Servers and service providers ourselves, adding
> features - the lot.
> 
> We have a lot of user interest (41% of users were interested in using
> us), and are quite widely deployed for a non tc-approved-release
> project (17% - 5% in production). We are actually the most deployed
> non tc-approved-release project.
> 
> We still have 81% of the reviews done by 2 companies, and 83% by 3
> companies.
> 
> I know our project is not "cool", and DNS is probably one of the most
> boring topics, but I honestly believe that it has a place in the
> majority of OpenStack clouds - both public and private. We are a small
> team of people dedicated to making Designate the best we can, but are
> still one company deciding to drop OpenStack / DNS development from
> joining the single-vendor party.
> 
> We are definitely interested in putting community development ahead of
> development work - but what that actual work is seems to difficult to
> nail down. I do feel sometimes that I am flailing in the dark trying to
> improve this.
> 
> If projects could share how that got out of single-vendor or into 
> diverse-affiliation this could really help teams progress in the
> community, and avoid being removed.
> 
> Making grand statements about "work harder on community" without any
> guidance about what we need to work on do not help the community.
> 
> - Graham
> 
> 

Interesting thread… it raises some questions for me

- Some projects in the big tent are inter-related. For example, if we identify 
a need for a project in our production cloud, we contribute a puppet module 
upstream into the openstack-puppet project. If the project is then evicted, 
does this mean that the puppet module would also be removed from the puppet 
openstack project ? Documentation repositories ? 

- Operators considering including a project in their cloud portfolio look at 
various criteria in places like the project navigator. If a project does not 
have diversity, there is a risk that it would not remain in the big tent after 
an 18 month review of diversity. An operator may therefore delay their testing 
and production deployment of that project which makes it more difficult to 
achieve the diversity given lack of adoption.

I think there is a difference between projects which are meeting a specific set 
of needs with the user community but are not needing major support and one 
which is not meeting the 4 opens. We’ve really appreciated projects which solve 
a need for us such as EC2 API and RDO which have been open but also had 
significant support from a vendor. They could have improved their diversity by 
submitting fewer commits to get the percentages better...

Tim

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Hayes, Graham
On 02/08/2016 16:48, Steven Dake (stdake) wrote:
> Responses inline:
>
> On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:
>
>> On 02/08/2016 15:42, Flavio Percoco wrote:
>>> On 01/08/16 10:19 -0400, Sean Dague wrote:
 On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
> Thierry, Ben, Doug,
>
> How can we distinguish between. "Project is doing the right thing, but
> others are not joining" vs "Project is actively trying to keep people
> out"?

 I think at some level, it's not really that different. If we treat them
 as different, everyone will always believe they did all the right
 things, but got no results. 3 cycles should be plenty of time to drop
 single entity contributions below 90%. That means prioritizing bugs /
 patches from outside groups (to drop below 90% on code commits),
 mentoring every outside member that provides feedback (to drop below
 90%
 on reviews), shifting development resources towards mentoring / docs /
 on ramp exercises for others in the community (to drop below 90% on
 core
 team).

 Digging out of a single vendor status is hard, and requires making that
 your top priority. If teams aren't interested in putting that ahead of
 development work, that's fine, but that doesn't make it a sustainable
 OpenStack project.
>>>
>>>
>>> ++ to the above! I don't think they are that different either and we
>>> might not
>>> need to differentiate them after all.
>>>
>>> Flavio
>>>
>>
>> I do have one question - how are teams getting out of
>> "team:single-vendor" and towards "team:diverse-affiliation" ?
>>
>> We have tried to get more people involved with Designate using the ways
>> we know how - doing integrations with other projects, pushing designate
>> at conferences, helping DNS Server vendors to add drivers, adding
>> drivers for DNS Servers and service providers ourselves, adding
>> features - the lot.
>>
>> We have a lot of user interest (41% of users were interested in using
>> us), and are quite widely deployed for a non tc-approved-release
>> project (17% - 5% in production). We are actually the most deployed
>> non tc-approved-release project.
>>
>> We still have 81% of the reviews done by 2 companies, and 83% by 3
>> companies.
>
> By the objective criteria of team:single-vendor Designate isn't a single
> vendor project.  By the objective criteria of team:diverse-affiliation
> you're not a diversely affiliated project either.  This is why I had
> suggested we need a third tag which accurately represents where Designate
> is in its community building journey.
>>
>> I know our project is not "cool", and DNS is probably one of the most
>> boring topics, but I honestly believe that it has a place in the
>> majority of OpenStack clouds - both public and private. We are a small
>> team of people dedicated to making Designate the best we can, but are
>> still one company deciding to drop OpenStack / DNS development from
>> joining the single-vendor party.
>
> Agree Designate is important to OpenStack.  But IMO it is not a single
> vendor project as defined by the criteria given the objective statistics
> you mentioned above.

My point is that we are close to being single vendor - it is a real
possibility over the next few months, if a big contributor were to
leave the project, which may happen.

The obvious solution to avoid this is to increase participation - which
is what we are struggling with.

>>
>> We are definitely interested in putting community development ahead of
>> development work - but what that actual work is seems to difficult to
>> nail down. I do feel sometimes that I am flailing in the dark trying to
>> improve this.
>
> Fantastic, it's a high-priority goal.  Sad to hear you're struggling, but
> struggling is part of the activity.
>>
>> If projects could share how that got out of single-vendor or into
>> diverse-affiliation this could really help teams progress in the
>> community, and avoid being removed.
>
> You bring up a fantastic point here - and that is that teams need to share
> techniques for becoming multi-vendor and some day diversely affiliated.  I
> am a super busy atm, or I would volunteer to lead a cross-project effort
> with PTLs to coordinate community building from our shared knowledge pool
> of expert Open Source contributors in the wider OpenStack community.
>
> That said, I am passing the baton for Kolla PTL at the conclusion of
> Newton (assuming the leadership pipeline I've built for Kolla wants to run
> for Kolla PTL), and would be pleased to lead a cross project effort in
> Ocata on moving from single-vendor to multi-vendor and beyond if there is
> enough PTL interest.  I take mentorship seriously and the various single
> vendor (and others) PTL's won't be disappointed in such an effort.
>
>>
>> Making grand statements about "work harder on community" without any
>> guidance about what we need to work on do not help the community.
>
> Agree - 

Re: [openstack-dev] [juju charms] How to configure glance charm for specific cnider backend?

2016-08-02 Thread Andrey Pavlov
James, thank you for your answer.

I'll file a bug against glance - but in current releases the glance charm has to
do it itself, right?

I'm not sure that I correctly understand your question.
I suppose that the deployment will have glance and cinder on different machines.
Also there will be one relation between cinder and glance to configure
glance to store images in cinder.
The other steps are optional -
If cinder uses a specific backend that needs additional configuration,
then it can be done via the storage-backend relation (from a subordinate
charm).
If that backend needs to configure glance's filters or glance's config,
then it should be done via a subordinate charm to glance (but
glance doesn't have such a relation now).
Given these suggestions, I think an additional relation needs to be
added to glance to allow connecting charms (installed in the
same container as glance).
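
To make the "glance's filters" part concrete, the kind of file a
cinder-backend subordinate would drop into /etc/glance/rootwrap.d/ looks
roughly like this (a sketch in oslo.rootwrap's filter-file format; the file
name and the commands are placeholders for whatever the backend actually
shells out to):

# /etc/glance/rootwrap.d/glance_cinder_store.filters (hypothetical)
[Filters]
# <name>: CommandFilter, <command>, <run-as user>
mount: CommandFilter, mount, root
umount: CommandFilter, umount, root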



On Tue, Aug 2, 2016 at 6:15 PM, James Page  wrote:
> Hi Andrey
>
> On Tue, 2 Aug 2016 at 15:59 Andrey Pavlov  wrote:
>>
>> I need to add glance support via storing images in cinder instead of
>> local files.
>> (This works only from Mitaka version due to glance-store package)
>
>
> OK
>
>>
>> First step I've made here -
>> https://review.openstack.org/#/c/348336/
>> This patchset adds ability to relate glance-charm to cinder-charm
>> (it's similar to ceph/swift relations)
>
>
> Looks like a good start, I'll comment directly on the review with any
> specific comments.
>
>>
>> And also it configures glance's rootwrap - original glance package
>> doesn't have such code
>> (
>>   I think that this is a bug in glance-common package - cinder and
>> nova can do it themselves.
>>   And if someone point me to bugtracker - I will file the bug there.
>> )
>
>
> Sounds like this should be in the glance package:
>
>   https://bugs.launchpad.net/ubuntu/+source/glance/+filebug
>
>  or use:
>
>   ubuntu-bug glance-common
>
> on an installed system.
>
>>
>> But main question is about additional configurations' steps -
>> Some cinder backends need to store additional files in
>> /etc/glance/rootwrap.d/ folder.
>> I have two options to implement this -
>> 1) relate my charm to glance:juju-info (it will be run on the same
>> machine as glance)
>> and do all work in this hook in my charm.
>> 2) add one more relation to glance - like
>> 'storage-backend:cinder-backend' in cinder.
>> And write code in a same way - with ability to pass config options.
>>
>>
>> I prefer option 2. It's more logical and more general. It will allow
>> to configure any cinder's backend.
>
>
> +1 the subordinate approach in cinder (and nova) works well; lets ensure the
> semantics on the relation data mean its easy to restart the glance services
> from the subordinate service if need be.
>
> Taking this a step further, it might also make sense to have the relation to
> cinder on the subordinate charm and pass up the data item to configure
> glance to use cinder from the sub - does that make sense in this context?
>
> Cheers
>
> James
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind regards,
Andrey Pavlov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-02 Thread Dan Smith
> Multitenant networking
> ==

I haven't reviewed this one much either, but it looks smallish and if
other people are good with it then I think it's probably something we
should do.

> Multi-compute usage via a hash ring
> ===

I'm obviously +2 on this one :)

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Update from Ops Meetups team meeting

2016-08-02 Thread Allison Price
Hi everyone, 

I wanted to provide an update after meeting with the Ops Meetups Team this 
morning to discuss the promotion of the NYC meetup that is later this month 
(August 25-26). Below is a recap of the meeting with next steps. If you are 
interested in attending and haven’t registered yet, please do so now: 
https://opsmidcyclenyc2016.eventbrite.com 
. If you would like to provide 
feedback on the below options, you can email me directly or provide feedback on 
this etherpad . 

 Here are the proposed promotional items: 
Email previous Summit attendees that have indicated ops in their registration
Email previous attendees of the ops midcycles 
Remind the Ops Mailing list - this serves as one of your reminders :) 
Email a subset of the Active User Contributors that would benefit from 
attending the ops midcycle
Reach out to user survey respondents who have indicated that they can be 
contacted 

Please let me know if you have any questions.  

Cheers,
Allison

Allison Price
OpenStack Marketing
alli...@openstack.org


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Steven Dake (stdake)
Responses inline:

On 8/2/16, 8:13 AM, "Hayes, Graham"  wrote:

>On 02/08/2016 15:42, Flavio Percoco wrote:
>> On 01/08/16 10:19 -0400, Sean Dague wrote:
>>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
 Thierry, Ben, Doug,

 How can we distinguish between. "Project is doing the right thing, but
 others are not joining" vs "Project is actively trying to keep people
 out"?
>>>
>>> I think at some level, it's not really that different. If we treat them
>>> as different, everyone will always believe they did all the right
>>> things, but got no results. 3 cycles should be plenty of time to drop
>>> single entity contributions below 90%. That means prioritizing bugs /
>>> patches from outside groups (to drop below 90% on code commits),
>>> mentoring every outside member that provides feedback (to drop below
>>>90%
>>> on reviews), shifting development resources towards mentoring / docs /
>>> on ramp exercises for others in the community (to drop below 90% on
>>>core
>>> team).
>>>
>>> Digging out of a single vendor status is hard, and requires making that
>>> your top priority. If teams aren't interested in putting that ahead of
>>> development work, that's fine, but that doesn't make it a sustainable
>>> OpenStack project.
>>
>>
>> ++ to the above! I don't think they are that different either and we
>>might not
>> need to differentiate them after all.
>>
>> Flavio
>>
>
>I do have one question - how are teams getting out of
>"team:single-vendor" and towards "team:diverse-affiliation" ?
>
>We have tried to get more people involved with Designate using the ways
>we know how - doing integrations with other projects, pushing designate
>at conferences, helping DNS Server vendors to add drivers, adding
>drivers for DNS Servers and service providers ourselves, adding
>features - the lot.
>
>We have a lot of user interest (41% of users were interested in using
>us), and are quite widely deployed for a non tc-approved-release
>project (17% - 5% in production). We are actually the most deployed
>non tc-approved-release project.
>
>We still have 81% of the reviews done by 2 companies, and 83% by 3
>companies.

By the objective criteria of team:single-vendor Designate isn't a single
vendor project.  By the objective criteria of team:diverse-affiliation
you're not a diversely affiliated project either.  This is why I had
suggested we need a third tag which accurately represents where Designate
is in its community building journey.
>
>I know our project is not "cool", and DNS is probably one of the most
>boring topics, but I honestly believe that it has a place in the
>majority of OpenStack clouds - both public and private. We are a small
>team of people dedicated to making Designate the best we can, but are
>still one company deciding to drop OpenStack / DNS development from
>joining the single-vendor party.

Agree Designate is important to OpenStack.  But IMO it is not a single
vendor project as defined by the criteria given the objective statistics
you mentioned above.

>
>We are definitely interested in putting community development ahead of
>development work - but what that actual work is seems to difficult to
>nail down. I do feel sometimes that I am flailing in the dark trying to
>improve this.

Fantastic, it's a high-priority goal.  Sad to hear you're struggling, but
struggling is part of the activity.
>
>If projects could share how that got out of single-vendor or into
>diverse-affiliation this could really help teams progress in the
>community, and avoid being removed.

You bring up a fantastic point here - and that is that teams need to share
techniques for becoming multi-vendor and some day diversely affiliated.  I
am a super busy atm, or I would volunteer to lead a cross-project effort
with PTLs to coordinate community building from our shared knowledge pool
of expert Open Source contributors in the wider OpenStack community.

That said, I am passing the baton for Kolla PTL at the conclusion of
Newton (assuming the leadership pipeline I've built for Kolla wants to run
for Kolla PTL), and would be pleased to lead a cross project effort in
Ocata on moving from single-vendor to multi-vendor and beyond if there is
enough PTL interest.  I take mentorship seriously and the various single
vendor (and others) PTL's won't be disappointed in such an effort.

>
>Making grand statements about "work harder on community" without any
>guidance about what we need to work on do not help the community.

Agree - let's fix that.  Unfortunately it can't be fixed in an email thread
- it requires a cross-project, team-based approach with at least 6 months of
activity.

If PTLs can weigh in on this thread and commit to participation in such a
cross-project subgroup, I'd be happy to lead it.

Regards
-steve


>
>- Graham
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 

Re: [openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-02 Thread Matt Riedemann

On 8/1/2016 4:20 PM, Jim Rollenhagen wrote:

Yes, I know this is stupid late for these.

I'd like to request two exceptions to the non-priority feature freeze,
for a couple of features in the Ironic driver.  These were not requested
at the normal time as I thought they were nowhere near ready.

Multitenant networking
==

Ironic's top feature request for around 2 years now has been to make
networking safe for multitenant use, as opposed to a flat network
(including control plane access!) for all tenants. We've been working on
a solution for 3 cycles now, and finally have the Ironic pieces of it
done, after a heroic effort to finish things up this cycle.

There's just one patch left to make it work, in the virt driver in Nova.
That is here: https://review.openstack.org/#/c/297895/

It's important to note that this actually fixes some dead code we pushed
on before this feature was done, and is only ~50 lines, half of which
are comments/reno.

Reviewers on this unearthed a problem on the ironic side, which I expect
to be fixed in the next couple of days:
https://review.openstack.org/#/q/topic:bug/1608511

We also have CI for this feature in ironic, and I have a depends-on
testing all of this as a whole: https://review.openstack.org/#/c/347004/

Per Matt's request, I'm also adding that job to Nova's experimental
queue: https://review.openstack.org/#/c/349595/

A couple folks from the ironic team have also done some manual testing
of this feature, with the nova code in, using real switches.

Merging this patch would bring a *huge* win for deployers and operators,
and I don't think it's very risky. It'll be ready to go sometime this
week, once that ironic chain is merged.


I've reviewed this one and it looks good to me. It's dependent on 
python-ironicclient>=1.5.0 which Jim has a g-r bump up as a dependency. 
And the gate-tempest-dsvm-ironic-multitenant-network-nv job is testing 
this and passing on the test patch in ironic (and that job is in the 
nova experimental queue now).


The upgrade procedure had some people scratching their heads in IRC this 
week so I've stated that we need clear documentation there, which will 
probably live here:


http://docs.openstack.org/developer/ironic/deploy/upgrade-guide.html

Since Ironic isn't in here:

http://docs.openstack.org/ops-guide/ops_upgrades.html#update-services

But the docs in the Ironic repo say that Nova should be upgraded first 
when going from Juno to Kilo so it's definitely important to get those 
docs updated for upgrades from Mitaka to Newton, but Jim said he'd do 
that this cycle.


Given how long people have been asking for this in Ironic and the Ironic 
team has made it a priority to get it working on their side, and there 
is CI already and a small change in Nova, I'm OK with giving a 
non-priority FFE for this.




Multi-compute usage via a hash ring
===

One of the major problems with the ironic virt driver today is that we
don't support running multiple nova-compute daemons with the ironic driver
loaded, because each compute service manages all ironic nodes and stomps
on each other.

There's currently a hack in the ironic virt driver to
kind of make this work, but instance locking still isn't done:
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py

That is also holding back removing the pluggable compute manager in nova:
https://github.com/openstack/nova/blob/master/nova/conf/service.py#L64-L69

And as someone that runs a deployment using this hack, I can tell you
first-hand that it doesn't work well.

We (the ironic and nova community) have been working on fixing this for
2-3 cycles now, trying to find a solution that isn't terrible and
doesn't break existing use cases. We've been conflating it with how we
schedule ironic instances and keep managing to find a big wedge with
each approach. The best approach we've found involves duplicating the
compute capabilities and affinity filters in ironic.

Some of us were talking at the nova midcycle and decided we should try
the hash ring approach (like ironic uses to shard nodes between
conductors) and see how it works out, even though people have said in
the past that wouldn't work. I did a proof of concept last week, and
started playing with five compute daemons in a devstack environment.
Two nerd-snipey days later and I had a fully working solution, with unit
tests, passing CI. That is here:
https://review.openstack.org/#/c/348443/
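
For anyone who hasn't bumped into the hash ring idea before, here's a toy
sketch of the concept (my own illustration, not the code in the review
above): every compute host claims the ironic nodes whose UUIDs hash to it,
so adding or removing a compute only remaps a small slice of the nodes.

import bisect
import hashlib

def _hash(key):
    # stable position on the ring for any string key
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing(object):
    def __init__(self, hosts, replicas=64):
        # each host gets several positions on the ring to smooth the split
        self.ring = {}
        for host in hosts:
            for r in range(replicas):
                self.ring[_hash('%s-%d' % (host, r))] = host
        self.sorted_keys = sorted(self.ring)

    def get_host(self, node_uuid):
        # the first host position clockwise from the node's hash owns it
        pos = bisect.bisect(self.sorted_keys, _hash(node_uuid))
        if pos == len(self.sorted_keys):
            pos = 0
        return self.ring[self.sorted_keys[pos]]

ring = HashRing(['compute-1', 'compute-2', 'compute-3'])
print(ring.get_host('6f8d2c3a-1b7e-4c1d-9a2b-0e5f4d3c2b1a'))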

We'll need to work on CI for this with multiple compute services. That
shouldn't be crazy difficult, but I'm not sure we'll have it done this
cycle (and it might get interesting trying to test computes joining and
leaving the cluster).

It also needs some testing at scale, which is hard to do in the upstream
gate, but I'll be doing my best to ship this downstream as soon as I
can, and iterating on any problems we see there.

It's a huge win for operators, for only a few hundred lines (some of

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-02 13:49:06 +:
> On 02/08/2016 14:37, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-08-02 11:53:37 +:
> >> On 29/07/2016 21:59, Doug Hellmann wrote:
> >>> One of the outcomes of the discussion at the leadership training
> >>> session earlier this year was the idea that the TC should set some
> >>> community-wide goals for accomplishing specific technical tasks to
> >>> get the projects synced up and moving in the same direction.
> >>>
> >>> After several drafts via etherpad and input from other TC and SWG
> >>> members, I've prepared the change for the governance repo [1] and
> >>> am ready to open this discussion up to the broader community. Please
> >>> read through the patch carefully, especially the "goals/index.rst"
> >>> document which tries to lay out the expectations for what makes a
> >>> good goal for this purpose and for how teams are meant to approach
> >>> working on these goals.
> >>>
> >>> I've also prepared two patches proposing specific goals for Ocata
> >>> [2][3].  I've tried to keep these suggested goals for the first
> >>> iteration limited to "finish what we've started" type items, so
> >>> they are small and straightforward enough to be able to be completed.
> >>> That will let us experiment with the process of managing goals this
> >>> time around, and set us up for discussions that may need to happen
> >>> at the Ocata summit about implementation.
> >>>
> >>> For future cycles, we can iterate on making the goals "harder", and
> >>> collecting suggestions for goals from the community during the forum
> >>> discussions that will happen at summits starting in Boston.
> >>>
> >>> Doug
> >>>
> >>> [1] https://review.openstack.org/349068 describe a process for managing 
> >>> community-wide goals
> >>> [2] https://review.openstack.org/349069 add ocata goal "support python 
> >>> 3.5"
> >>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >>> libraries"
> >>
> >> I am confused. When I proposed my patch for doing something very similar
> >> (Equal Chances for all projects is basically a multiple release goal) I
> >> got the following rebuttals:
> >>
> >>  > "and it would be
> >>  > a mistake to try to require that because the issue is almost always
> >>  > lack of resources and not lack of desire. Volunteers to contribute
> >>  > to the work that's needed will do more to help than a
> >>  > one-size-fits-all policy."
> >>
> >>  > This isn't a thing that gets fixed with policy. It gets fixed with
> >>  > people.
> >>
> >>  > I am reading through the thread, and it puzzles me that I see a lot
> >>  > of right words about goals but not enough hints on who is going to
> >>  > implement that.
> >>
> >>  > I think the right solutions here are human ones. Talk with people.
> >>  > Figure out how you can help lighten their load so that they have
> >>  > breathing space. I think hiding behind policy becomes a way to make
> >>  > us more separate rather than engaging folks more directly.
> >>
> >>  > Coming at this with top down declarations of how things should work
> >>  > not only ignores reality of the ecosystem and the current state of
> >>  > these projects, but is also going about things backwards.
> >>
> >>  > This entirely ignores that these are all open source projects,
> >>  > which are often very sparsely contributed to. If you have an issue
> >>  > with a project or the interface it provides, then contribute to it.
> >>  > Don't make grandiose resolutions trying to force things into what you
> >>  > see as an ideal state, instead contribute to help fix the problems
> >>  > you've identified.
> >>
> >> And yet, we are currently suggesting a system that will actively (in an
> >> undefined way) penalise projects who do not comply with a different set
> >> of proposals, done in a top down manner.
> >>
> >> I may be missing the point, but the two proposals seem to have
> >> similarities - what is the difference?
> >>
> >> When I see comments like:
> >>
> >>  > Project teams who join the big tent sign up to the rights and
> >>  > responsibilities that go along with membership. Those responsibilities
> >>  > include taking some direction from the TC, including regarding work
> >>  > they may not give the same priority as the broader community.
> >>
> >> It sounds like top down is OK, but previous ML thread / TC feedback
> >> has been different.
> >
> > One difference is that these goals are not things like "the
> > documentation team must include every service project in the
> > installation guide" but rather would be phrased like "every project
> > must provide an installation guide". The work is distributed to the
> > vertical teams, and not focused in the horizontal teams.
> 
> Well, the wording was actually "the documentation team must provide a
> way for all projects to be included in the documentation guide". The
> work was on the horizontal teams to provide a method, and the 

Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-02 Thread Thierry Carrez
Doug Hellmann wrote:
> [...]
>> Likewise, what if the Manila project team decides they aren't interested 
>> in supporting Python 3.5 or a particular greenlet library du jour that 
>> has been mandated upon them? Is the only filesystem-as-a-service project 
>> going to be booted from the tent?
> 
> I hardly think "move off of the EOL-ed version of our language" and
> "use a library du jour" are in the same class.  All of the topics
> discussed so far are either focused on eliminating technical debt
> that project teams have not prioritized consistently or adding
> features that, again for consistency, are deemed important by the
> overall community (API microversioning falls in that category,
> though that's an example and not in any way an approved goal right
> now).

Right, the proposal is pretty clearly about setting a number of
reasonable, small goals for a release cycle that would be awesome to
collectively reach. Not really invasive top-down design mandates that we
would expect teams to want to resist.

IMHO if a team has a good reason for not wanting or not being able to
fulfill a common goal that's fine -- it just needs to get documented and
should not result in itself in getting kicked out from anything. If a
team regularly skips on common goals (and/or misses releases, and/or
doesn't fix security issues) that's a general sign that it's not really
behaving like an OpenStack project and then a case could be opened for
removal, but there is nothing new here.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-02 Thread Timofei Durakov
Hi,

Taking into account everything above, I'd prefer to see
live_migration_tunnelled (which corresponds to VIR_MIGRATE_TUNNELLED)
defaulted to False. We just need to make a release note for this change,
and on host startup do a LOG.warning to notify the operator that there
is no tunnelling for live migration. For me, that will be enough. Then just put
[1] on top of it.
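
For operators following along, the option under discussion ([2]) lands in
nova.conf roughly like this (a sketch; whether True or False is right for a
given deployment depends on whether the hypervisors can reach each other
directly):

[libvirt]
# False = native hypervisor transport: faster, but needs direct
# hypervisor-to-hypervisor connectivity; True = tunnel through libvirtd
live_migration_tunnelled = False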

Thanks,
Timofey


On Tue, Aug 2, 2016 at 5:36 PM, Koniszewski, Pawel <
pawel.koniszew...@intel.com> wrote:

> In the Mitaka development cycle 'live_migration_flag' and
> 'block_migration_flag' have been marked as deprecated for removal. I'm
> working on a patch [1] to remove both of them and want to ask what we
> should do with the live_migration_tunnelled logic.
>
> The default configuration of both flags contains the VIR_MIGRATE_TUNNELLED
> option. It is there to avoid the need to configure the network to allow
> direct communication between hypervisors. However, the tradeoff is that it
> slows down all migrations by up to 80% due to the increased number of memory
> copies and the single-threaded encryption mechanism in libvirt. By 80% here I
> mean that the transfer rate between the source and destination nodes is
> around 2 Gb/s on a 10 Gb network. I believe that this is a configuration issue and people
> deploying OpenStack are not aware that live migrations with this flag will
> not work. I'm not sure that this is something we wanted to achieve. AFAIK
> most operators are turning it OFF in order to make live migration usable.
>
> Moving to the new flag that keeps the possibility to turn tunnelling on -
> live_migration_tunnelled [2], which is a tri-state boolean - None, False,
> True:
>
> * True - means that live migrations will be tunneled through libvirt.
> * False - no tunneling, native hypervisor transport.
> * None - nova will choose default based on, e.g., the availability of
> native encryption support in the hypervisor. (Default value)
>
> Right now we don't have any logic implemented for the None value, which is
> the default. So the question here is: should I implement logic so that with
> live_migration_tunnelled=None we still use VIR_MIGRATE_TUNNELLED when
> native encryption is not available? Given the impact of this flag I'm not
> sure that we really want to keep it there. Another option is to change the
> default value of live_migration_tunnelled to True. In both cases we will
> again end up with slower live migration and people complaining that it does
> not work at all in OpenStack.
>
> Thoughts?
>
> [1] https://review.openstack.org/#/c/334860/
> [2]
> https://github.com/openstack/nova/blob/be59c19c969acf6b25b0711f0ebfb26aaed0a171/nova/conf/libvirt.py#L107
>
> Kind Regards,
> Pawel Koniszewski
>
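
For reference, the None-handling logic being asked about above could be
sketched roughly like this (an illustrative sketch only, with made-up
function and argument names; not the actual nova implementation):

    # Hypothetical sketch of the tri-state resolution, not nova code.
    def use_tunnelling(live_migration_tunnelled, native_encryption_available):
        # None means "let nova decide": tunnel only when the hypervisor
        # cannot encrypt the native migration transport itself.
        if live_migration_tunnelled is None:
            return not native_encryption_available
        return live_migration_tunnelled
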
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [juju charms] How to configure glance charm for specific cinder backend?

2016-08-02 Thread James Page
Hi Andrey

On Tue, 2 Aug 2016 at 15:59 Andrey Pavlov  wrote:

> I need to add support for glance storing images in cinder instead of in
> local files.
> (This works only from the Mitaka version onwards, due to the glance-store
> package.)
>

OK


> The first step I've made is here -
> https://review.openstack.org/#/c/348336/
> This patchset adds the ability to relate the glance charm to the cinder
> charm (it's similar to the ceph/swift relations).
>

Looks like a good start; I'll leave any specific comments directly on the
review.


> It also configures glance's rootwrap - the original glance package
> doesn't have such code.
> (
>   I think that this is a bug in the glance-common package - cinder and
> nova can do it themselves.
>   If someone points me to the bug tracker, I will file the bug there.
> )
>

Sounds like this should be in the glance package:

  https://bugs.launchpad.net/ubuntu/+source/glance/+filebug

 or use:

  ubuntu-bug glance-common

on an installed system.


> But the main question is about additional configuration steps -
> some cinder backends need additional files stored in the
> /etc/glance/rootwrap.d/ folder.
> I have two options to implement this -
> 1) relate my charm to glance:juju-info (it will run on the same
> machine as glance)
> and do all the work in this hook in my charm;
> 2) add one more relation to glance - like
> 'storage-backend:cinder-backend' in cinder -
> and write the code in the same way, with the ability to pass config options.
>

> I prefer option 2. It's more logical and more general. It will allow
> any cinder backend to be configured.
>

+1, the subordinate approach in cinder (and nova) works well; let's ensure
the semantics of the relation data make it easy to restart the glance
services from the subordinate service if need be.

Taking this a step further, it might also make sense to have the relation
to cinder on the subordinate charm and pass up the data needed to configure
glance to use cinder from the subordinate - does that make sense in this context?
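
For what it's worth, a very rough sketch of the rootwrap part of option 2
(purely illustrative - the function name, service name and hook wiring here
are assumptions, not the actual charm code):

    # Purely illustrative hook sketch; not the actual charm code.
    import os
    import subprocess

    ROOTWRAP_DIR = '/etc/glance/rootwrap.d'

    def install_rootwrap_filter(filter_name, filter_content):
        # Write the backend-specific rootwrap filter received over the
        # relation into glance's rootwrap.d directory...
        if not os.path.isdir(ROOTWRAP_DIR):
            os.makedirs(ROOTWRAP_DIR)
        with open(os.path.join(ROOTWRAP_DIR, filter_name), 'w') as f:
            f.write(filter_content)
        # ...and restart the glance API service so it picks the change up.
        subprocess.check_call(['service', 'glance-api', 'restart'])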

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

