Re: [openstack-dev] [rally] "Failed to create the requested number of tenants" error

2016-06-09 Thread Boris Pavlovic
Nate,

This looks quite strange. Could you share the information from keystone
catalog?

It seems you didn't set up an admin endpoint for keystone in that region.
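
Rally is essentially doing the lookup below: scanning the service catalog for the identity service's adminURL in the target region. A minimal stdlib sketch of that lookup (catalog shape per the keystone v2 tokens API; the sample data is illustrative, not taken from Nate's environment):

```python
def find_admin_identity_endpoint(catalog, region=None):
    """Return the adminURL of the identity service, or None if absent."""
    for service in catalog:
        if service.get("type") != "identity":
            continue
        for ep in service.get("endpoints", []):
            # Skip endpoints registered for other regions.
            if region and ep.get("region") != region:
                continue
            admin_url = ep.get("adminURL")
            if admin_url:
                return admin_url
    # No adminURL registered -> the "admin endpoint ... not found" error path.
    return None

sample_catalog = [
    {
        "type": "identity",
        "name": "keystone",
        "endpoints": [
            {
                "region": "RegionOne",
                "publicURL": "http://10.0.0.1:5000/v2.0",
                # Note: no adminURL here -- the suspected misconfiguration.
            }
        ],
    }
]

print(find_admin_identity_endpoint(sample_catalog, "RegionOne"))  # -> None
```

If that lookup comes back empty for your region, registering the identity adminURL (e.g. with `keystone endpoint-create` on an Icehouse-era cloud) is the usual fix.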

Best regards,
Boris Pavlovic

On Thu, Jun 9, 2016 at 12:41 PM, Nate Johnston wrote:

> Rally folks,
>
> I am working with an engineer to get him up to speed on Rally on a new
> development.  He is trying out running a few tests from the samples
> directory, like samples/tasks/scenarios/nova/list-hypervisors.yaml - but
> he keeps getting the error "Completed: Exit context: `users`\nTask
> config is invalid: `Unable to setup context 'users': 'Failed to create
> the requested number of tenants.'`"
>
> This is against an Icehouse environment with Mitaka Rally; when I run
> Rally with debug logging I see:
>
> 2016-06-08 18:59:24.692 11197 ERROR rally.common.broker EndpointNotFound:
> admin endpoint for identity service in  region not found
>
> However I note that $OS_AUTH_URL is set in the Rally deployment... see
> http://paste.openstack.org/show/509002/ for the full log.
>
> Any ideas you could give me would be much appreciated.  Thanks!
>
> --N.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] What's Up, Doc? 10 June 2016

2016-06-09 Thread Lana Brindley
Hi everyone,

My week has been spent on the Install Guide, and a big thanks to Andreas for 
getting the Infra patch up and documenting the new process. The Install Guide 
team meeting was well attended this week, and I've been following up on the 
actions from that. I also contacted the cross project liaisons with the 
information they need to get their content moved, and I'm looking forward to 
seeing some of these start work next week. 

In other news, Joseph has been busy reviewing the User Guides, and could use a 
little help working on the information architecture, and getting a few new 
projects documented. User Guide meetings are held in US and APAC timezones and 
volunteers are essential to get this effort complete for Newton. Get all the 
info here: https://wiki.openstack.org/wiki/User_Guides

== Progress towards Newton ==

117 days to go!

Bugs closed so far: 163

Newton deliverables: 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release.

== Speciality Team Reports ==

'''HA Guide: Bogdan Dobrelya'''
No report this week.

'''Install Guide: Lana Brindley'''
Infra patch: https://review.openstack.org/#/c/326039/
Instructions: http://docs.openstack.org/contributor-guide/project-install-guide.html
Next meeting: Tue 21 June, 0600 UTC

'''Networking Guide: Edgar Magana'''
No meeting this week. Working on a better ToC for the guide that may impact some 
of the scenarios.
Moving more networking content from other guides into the Networking one in 
order to keep everything in one central point and better updated. 

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
Outstanding tasks: contacting more project teams for inclusion status, and IA 
plans for the new guide. Some team discussion on fixing old links. Please contact 
me if you are interested in contributing more content to the User Guides.

'''Ops Guide: Shilla Saebi'''
Ops tasks are documented here: https://etherpad.openstack.org/p/ops-arch-tasks
OpenStack ops guide reorg in progress and documented here: 
https://etherpad.openstack.org/p/ops-guide-reorg
Working on posting enterprise docs for cleanup. Looking for volunteers in the 
ops/arch docs group to attend ops-specific meetings to find additional info and 
help.

'''API Guide: Anne Gentle'''
Call for help for unified all-OpenStack API navigation design: 
http://lists.openstack.org/pipermail/openstack-docs/2016-June/008730.html
Discussing project-level organization in https://review.openstack.org/312259
Discussing source organization in https://review.openstack.org/314819
Redirects and deletions in api-site are welcomed! For example, see 
https://review.openstack.org/327399
Updated README for api-site: https://review.openstack.org/327395

'''Config/CLI Ref: Tomoyuki Kato'''
Closed a few more bugs. Cleaned up many bugs about tool-generated configuration 
options that were already released for Mitaka. **We need folks from each vendor 
for vendor plug-in docs.**

'''Training labs: Pranav Salunke, Roger Luethi'''
Working on the training-labs landing page to make it look much better. 
Reintroducing the tooling to build zip files. Working on PXE support for 
baremetal provisioning. Working on Python port of training-labs.

'''Training Guides: Matjaz Pancur'''
No report this week.

'''Hypervisor Tuning Guide: Blair Bethwaite'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Stephen Ballard'''
No report this week.

== Site Stats ==

During May, the docs.openstack.org site had 620 sessions, with just under 20% 
by new users. The average session for the month lasted about five and a half 
minutes and viewed an average of three and a half pages.

== Doc team meeting ==

The US meeting was held this week; you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-06-08

Next meetings:
APAC: Wednesday 15 June, 00:30 UTC
US: Wednesday 22 June, 19:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#10_June_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com





Re: [openstack-dev] [kolla] stepping down from core

2016-06-09 Thread Swapnil Kulkarni (coolsvap)
On Tue, Jun 7, 2016 at 1:06 AM, Jeff Peeler  wrote:
> Hi all,
>
> This is my official announcement to leave core on Kolla /
> Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
> we'll cross paths again!
>
> Jeff
>


All the best, Jeff, for the next big thing you are going to do :)



Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-09 Thread Swapnil Kulkarni (coolsvap)
On Fri, Jun 10, 2016 at 6:44 AM, Steven Dake (stdake)  wrote:
> Swapnil,
>
> Thanks for triggering this community vote.  It was sorely overdue :).  I
> counted 8 votes before I voted (I am all for moving everything to 1600UTC)
> which is a majority of the CR team.  Note in the future I think we may
> need to consider having split meetings again, if our community makeup
> changes.

Steve, yes I understand.

>
> I have submitted a review to make the change official:
> https://review.openstack.org/#/c/327845/1
>
>
> Regards
> -steve
>
>
> On 6/8/16, 5:54 AM, "Swapnil Kulkarni (coolsvap)"  wrote:
>
>>Dear Kollagues,
>>
>>Some time ago we discussed the requirement of alternating meeting
>>times for Kolla weekly meeting due to major contributors from
>>kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
>>implemented alternate US/APAC meeting times.
>>
>>With kolla-mesos not active anymore and looking at the current active
>>contributors, I wish to reinstate the UTC 1600 time for all Kolla
>>Weekly meetings.
>>
>>Please let me know your views.
>>
>>--
>>Best Regards,
>>Swapnil Kulkarni
>>irc : coolsvap
>>
>
>


Thank you all for your vote!



[openstack-dev] Question about OpenStack Containers and Magnum

2016-06-09 Thread zhihao wang
Dear OpenStack Dev Members:

I would like to install Magnum on OpenStack to manage Docker containers. I have 
an OpenStack Liberty production setup: one controller node and a few compute 
nodes.

I am wondering how I can install OpenStack Magnum on OpenStack Liberty in a 
distributed production environment (1 controller node and some compute nodes)? 
I know I can install Magnum using devstack, but I don't want the developer 
version.

Is there a way/guide to install it on a production environment?

Thanks,
Wally


Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-09 Thread Matt Riedemann

On 6/9/2016 4:39 PM, Claudiu Belu wrote:

Hello again,

We've set the use_glance_v1 nova config option to False on the Hyper-V CI. All good.

[1] http://64.119.130.115/nova/278835/13/results.html.gz
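
For reference, that switch lives in nova's [glance] section; a minimal nova.conf fragment (option name as used during the Newton glance v2 migration work — exact availability depends on your nova version):

```ini
[glance]
# Use the image API v2 code paths instead of the deprecated v1
use_glance_v1 = False
```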

Best regards,

Claudiu Belu



Awesome, thanks for testing it out.

--

Thanks,

Matt Riedemann




[openstack-dev] [new][release] reno 1.7.0 release

2016-06-09 Thread no-reply
We are gleeful to announce the release of:

reno 1.7.0: RElease NOtes manager

With source available at:

http://git.openstack.org/cgit/openstack/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.

Changes in reno 1.6.2..1.7.0


45878e0 Ignore empty sections in notes
38b0158 Clean up oslo-incubator stuff
1d7c3d8 [Trivial] Remove executable privilege of doc/source/conf.py
c805665 make the cache command write to a file by default
9cb8c4b use the cache file instead of scanner when possible
0b459b8 add 'cache' command to write a cache file

Diffstat (except docs and test files)
-

.gitignore   |   1 +
openstack-common.conf|   6 ---
reno/cache.py|  94 +
reno/formatter.py|  20 +++-
reno/lister.py   |  12 +++--
reno/loader.py   | 109 +++
reno/main.py |  23 +
reno/report.py   |  13 +++---
reno/sphinxext.py|  13 +++---
tox.ini  |   2 +-
13 files changed, 381 insertions(+), 66 deletions(-)






[openstack-dev] [new][oslo] taskflow 2.1.0 release (newton)

2016-06-09 Thread no-reply
We are happy to announce the release of:

taskflow 2.1.0: Taskflow structured state management library.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 2.0.0..2.1.0


88fec5d Updated from global requirements
54f8112 Updated from global requirements
8ab0ba9 Split revert/execute missing args messages
44c9a5d Updated from global requirements
c5e9cf2 Instead of a multiprocessing queue use sockets via asyncore
8c2d73b Add a simple sanity test for pydot outputting

Diffstat (except docs and test files)
-

requirements.txt   |   2 +-
taskflow/engines/action_engine/engine.py   |  35 +-
taskflow/engines/action_engine/executor.py | 411 +---
taskflow/engines/action_engine/process_executor.py | 711 +
taskflow/exceptions.py |   6 +-
.../unit/action_engine/test_process_executor.py|  99 +++
taskflow/types/graph.py|  18 +-
taskflow/utils/misc.py |   8 +
test-requirements.txt  |   5 +-
15 files changed, 946 insertions(+), 440 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7bbf26c..af3db31 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -44 +44 @@ automaton>=0.5.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 2a2497e..e57fda1 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +7 @@ oslotest>=1.10.0 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD
@@ -23,0 +24,3 @@ redis>=2.10.0 # MIT
+# Used for making sure pydot is still working
+pydotplus>=2.0.2 # MIT License
+





[openstack-dev] [new][oslo] tooz 1.38.0 release (newton)

2016-06-09 Thread no-reply
We are gleeful to announce the release of:

tooz 1.38.0: Coordination library for distributed systems.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.37.0..1.38.0
--

7f15e34 Using LOG.warning instead of LOG.warn
c2f9671 Updated from global requirements
0f4e119 Fix coordinator typo
ccf6b7a Updated from global requirements
82197e3 file: make python2 payload readable from python3
10b9711 coordination: expose a heartbeat loop method

Diffstat (except docs and test files)
-

requirements.txt|  6 +--
test-requirements.txt   |  2 +-
tooz/coordination.py| 86 -
tooz/drivers/etcd.py| 11 ++---
tooz/drivers/file.py| 76 ++--
tooz/drivers/memcached.py   |  9 ++--
tooz/drivers/mysql.py   |  2 +-
tooz/drivers/pgsql.py   |  2 +-
tooz/drivers/redis.py   |  1 +
tooz/drivers/zookeeper.py   |  1 +
12 files changed, 189 insertions(+), 31 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e57d7e0..7e6588e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ zake>=0.1.6 # Apache-2.0
-voluptuous>=0.8.6 # BSD License
+voluptuous>=0.8.9 # BSD License
@@ -16 +16 @@ futurist>=0.11.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.9.0 # Apache-2.0
@@ -18 +18 @@ oslo.serialization>=1.10.0 # Apache-2.0
-requests!=2.9.0,>=2.8.1 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 665f71c..b7f0925 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +10 @@ doc8 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.concurrency 3.7.1 release (mitaka)

2016-06-09 Thread no-reply
We are satisfied to announce the release of:

oslo.concurrency 3.7.1: Oslo Concurrency library

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 3.7.0..3.7.1


5f417f8 processutils: add support for missing process limits
930d872 Updated from global requirements
0003758 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_concurrency/prlimit.py  | 21 +
oslo_concurrency/processutils.py | 38 +---
requirements.txt |  2 +-
test-requirements.txt|  2 +-
5 files changed, 87 insertions(+), 13 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index ec291e3..dc6a7cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index f9925e1..d9b4f55 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9 +9 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6' # BSD
-fixtures>=1.3.1 # Apache-2.0/BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD





[openstack-dev] [new][oslo] stevedore 1.15.0 release (newton)

2016-06-09 Thread no-reply
We are delighted to announce the release of:

stevedore 1.15.0: Manage dynamic plugins for Python applications

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/stevedore

With package available at:

https://pypi.python.org/pypi/stevedore

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

For more details, please see below.

Changes in stevedore 1.14.0..1.15.0
---

01b09a5 Updated from global requirements

Diffstat (except docs and test files)
-

test-requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 248a12d..d59feb9 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7 +7 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.versionedobjects 1.10.0 release (newton)

2016-06-09 Thread no-reply
We are delighted to announce the release of:

oslo.versionedobjects 1.10.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.9.1..1.10.0
--

91184e1 Fix ComparableVersionedObject in python 3.4
be2038f Updated from global requirements
a439291 Updated from global requirements
59ac1d0 Fix a typo in Enum error path
5061888 Replace safe_utils.getcallargs with inspect.getcallargs
5254527 Fix compare_obj() to obey missing/unset fields
bb43887 Add a pci address  field

Diffstat (except docs and test files)
-

oslo_versionedobjects/base.py |  9 +++-
oslo_versionedobjects/exception.py|  6 +--
oslo_versionedobjects/fields.py   | 19 +++-
oslo_versionedobjects/fixture.py  | 49 
oslo_versionedobjects/safe_utils.py   | 53 --
requirements.txt  |  6 +--
setup.cfg |  4 +-
11 files changed, 164 insertions(+), 84 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a0b5a14..29cc6f2 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,2 +5,2 @@ six>=1.9.0 # MIT
-oslo.concurrency>=3.5.0 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
+oslo.concurrency>=3.8.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0





[openstack-dev] [new][oslo] oslotest 2.6.0 release (newton)

2016-06-09 Thread no-reply
We are happy to announce the release of:

oslotest 2.6.0: Oslo test framework

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.5.0..2.6.0


412073f Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt  | 4 ++--
test-requirements.txt | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c52c09f..9c881de 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -11 +11 @@ testtools>=1.4.0 # MIT
-mock>=1.2 # BSD
+mock>=2.0 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index bd87ecf..bfdb9ba 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.vmware 2.8.0 release (newton)

2016-06-09 Thread no-reply
We are gleeful to announce the release of:

oslo.vmware 2.8.0: Oslo VMware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

2.8.0
^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.vmware 2.7.0..2.8.0
---

bfcd07f Updated from global requirements
d46d23a Updated from global requirements
c4ecd85 Updated from global requirements
74fede7 Updated from global requirements
2e9ba24 Add reno for release notes management
b10c757 Updated from global requirements
10cff5c Updated from global requirements

Diffstat (except docs and test files)
-

.gitignore|   3 +
oslo_vmware/version.py|  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml |   3 +
releasenotes/source/_static/.placeholder  |   0
releasenotes/source/_templates/.placeholder   |   0
releasenotes/source/conf.py   | 274 ++
releasenotes/source/index.rst |   8 +
releasenotes/source/unreleased.rst|   5 +
requirements.txt  |   8 +-
test-requirements.txt |   5 +-
tox.ini   |   5 +-
11 files changed, 322 insertions(+), 7 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 0770ec5..8682b9e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12 +12 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
@@ -19,3 +19,3 @@ eventlet!=0.18.3,>=0.18.2 # MIT
-requests!=2.9.0,>=2.8.1 # Apache-2.0
-urllib3>=1.8.3 # MIT
-oslo.concurrency>=3.5.0 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
+urllib3>=1.15.1 # MIT
+oslo.concurrency>=3.8.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 5508978..e1fdf23 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9,2 +9,2 @@ discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
-mock>=1.2 # BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
+mock>=2.0 # BSD
@@ -24,0 +25 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.6.2 # Apache2





[openstack-dev] [new][oslo] oslo.middleware 3.12.0 release (newton)

2016-06-09 Thread no-reply
We are stoked to announce the release of:

oslo.middleware 3.12.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.11.0..3.12.0
-

549be72 Updated from global requirements
4028696 Updated from global requirements
f553a61 Do not add a default content type when replying

Diffstat (except docs and test files)
-

oslo_middleware/base.py| 14 +-
oslo_middleware/cors.py|  9 ++---
requirements.txt   |  4 ++--
test-requirements.txt  |  4 ++--
5 files changed, 30 insertions(+), 12 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 30a44b7..2abf9a3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Jinja2>=2.8 # BSD License (3 clause)
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index fc60e42..d913843 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -7 +7 @@ hacking<0.11,>=0.10.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.rootwrap 4.3.0 release (newton)

2016-06-09 Thread no-reply
We are thrilled to announce the release of:

oslo.rootwrap 4.3.0: Oslo Rootwrap

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.rootwrap

With package available at:

https://pypi.python.org/pypi/oslo.rootwrap

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.rootwrap

For more details, please see below.

Changes in oslo.rootwrap 4.2.0..4.3.0
-

7ec1e73 Updated from global requirements

Diffstat (except docs and test files)
-

test-requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 14b8a6c..e4b24c3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -21 +21 @@ oslotest>=1.10.0 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.messaging 5.3.0 release (newton)

2016-06-09 Thread no-reply
We are gleeful to announce the release of:

oslo.messaging 5.3.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 5.2.0..5.3.0
--

8674f73 Modify info of default_notification_exchange
4af6878 Imported Translations from Zanata
6166b44 [zmq] Remove rpc_zmq_concurrency option
fbf5cb4 [zmq] Fix timeout in ThreadingPoller.poll
2668177 Fix typo: 'olso' to 'oslo'
a620319 Updated from global requirements
3169174 [zmq] Don't skip non-direct message types
8ee1915 [zmq] Refactoring of zmq client
034c8f0 [impl_rabbit] Remove deprecated get_expiration method
9d51fa4 [AMQP 1.0] Randomize host list connection attempts
c07d02e Documents recommended executor

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/amqp1_driver/controller.py |   3 +-
oslo_messaging/_drivers/impl_pika.py   |   2 +-
oslo_messaging/_drivers/impl_rabbit.py |  16 +---
oslo_messaging/_drivers/impl_zmq.py|  18 ++--
.../_drivers/zmq_driver/broker/zmq_proxy.py|   2 +-
.../_drivers/zmq_driver/broker/zmq_queue_proxy.py  |   8 +-
.../dealer/zmq_dealer_publisher_proxy.py   |  20 +++-
.../client/publishers/zmq_publisher_base.py|  27 +++---
.../_drivers/zmq_driver/client/zmq_client.py   | 104 -
.../zmq_driver/matchmaker/matchmaker_redis.py  |   2 +-
.../_drivers/zmq_driver/poller/threading_poller.py |  19 ++--
oslo_messaging/_drivers/zmq_driver/zmq_async.py|  58 +++-
.../en_GB/LC_MESSAGES/oslo_messaging-log-error.po  |  31 --
.../en_GB/LC_MESSAGES/oslo_messaging-log-info.po   |  15 ++-
.../es/LC_MESSAGES/oslo_messaging-log-error.po |  32 ---
oslo_messaging/locale/oslo_messaging-log-error.pot |  54 ---
oslo_messaging/locale/oslo_messaging-log-info.pot  |  25 -
.../locale/oslo_messaging-log-warning.pot  |  43 -
oslo_messaging/server.py   |  13 ++-
requirements.txt   |   2 +-
setup-test-env-zmq.sh  |   1 +
24 files changed, 224 insertions(+), 421 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index f6e9f64..7ec46a9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11 +11 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.9.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.policy 1.9.0 release (newton)

2016-06-09 Thread no-reply
We are overjoyed to announce the release of:

oslo.policy 1.9.0: Oslo Policy library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.policy

With package available at:

https://pypi.python.org/pypi/oslo.policy

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

For more details, please see below.

1.9.0
^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.policy 1.8.0..1.9.0
---

474c120 Add sample file generation script and helper methods
ea29939 Add equality operator to policy.RuleDefault
f5988a2 Imported Translations from Zanata
88bcd97 Updated from global requirements
5046c53 Fix typo: 'olso' to 'oslo'
8c3acab Updated from global requirements
3e7f7d4 Updated from global requirements
fd785d2 Add reno for release notes management
bb11272 Add policy registration and authorize method
f5ee730 Updated from global requirements
3da2f4a doc: Fix wrong import statement in usage

Diffstat (except docs and test files)
-

.gitignore |   3 +
oslo_policy/generator.py   | 130 ++
.../locale/en_GB/LC_MESSAGES/oslo_policy.po|  49 
oslo_policy/locale/es/LC_MESSAGES/oslo_policy.po   |  11 +-
oslo_policy/locale/oslo_policy-log-error.pot   |  30 ---
oslo_policy/locale/oslo_policy.pot |  47 
oslo_policy/policy.py  | 153 +++-
oslo_policy/version.py |  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml  |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 273 +
releasenotes/source/index.rst  |   8 +
.../locale/en_GB/LC_MESSAGES/releasenotes.po   |  27 ++
releasenotes/source/unreleased.rst |   5 +
requirements.txt   |   6 +-
setup.cfg  |   3 +-
test-requirements.txt  |   2 +
tox.ini|   3 +
22 files changed, 1126 insertions(+), 118 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 8e217a1..7204ac2 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,2 +5,2 @@
-requests!=2.9.0,>=2.8.1 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -9 +9 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index dec1fa7..57a305d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -14,0 +15,2 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+
+reno>=1.6.2 # Apache2





[openstack-dev] [new][oslo] oslo.utils 3.12.0 release (newton)

2016-06-09 Thread no-reply
We are psyched to announce the release of:

oslo.utils 3.12.0: Oslo Utility library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.utils

With package available at:

https://pypi.python.org/pypi/oslo.utils

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.utils

For more details, please see below.

Changes in oslo.utils 3.11.0..3.12.0


dfdaaa2 Updated from global requirements
cbf5dde Fix method split_path's docstring 'versionadded'
947d73a Updated from global requirements
d79012d Updated from global requirements
8f5e65c Remove method total_seconds in timeuitls
388a15e Fix is_valid_cidr raises TypeError

Diffstat (except docs and test files)
-

oslo_utils/netutils.py |  2 +-
oslo_utils/strutils.py |  2 +-
oslo_utils/timeutils.py| 18 --
test-requirements.txt  |  6 +++---
6 files changed, 6 insertions(+), 28 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index ed8f5f9..b9e4d03 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -25 +25 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD
@@ -28 +28 @@ mock>=1.2 # BSD
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.service 1.12.0 release (newton)

2016-06-09 Thread no-reply
We are content to announce the release of:

oslo.service 1.12.0: oslo.service library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.11.0..1.12.0
--

33d8b10 Imported Translations from Zanata
241e196 Updated from global requirements
d8f46bc Updated from global requirements
9143d0c Updated from global requirements
1083a7f Updated from global requirements
14eda53 Updated from global requirements
2e106c6 Updated from global requirements

Diffstat (except docs and test files)
-

.../en_GB/LC_MESSAGES/oslo_service-log-error.po|  50 +
.../en_GB/LC_MESSAGES/oslo_service-log-info.po |  85 ++
.../en_GB/LC_MESSAGES/oslo_service-log-warning.po  |  23 
.../locale/en_GB/LC_MESSAGES/oslo_service.po   | 124 +
requirements.txt   |   6 +-
test-requirements.txt  |   4 +-
6 files changed, 287 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3b56e53..87352e2 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,3 +9,3 @@ monotonic>=0.6 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
-oslo.concurrency>=3.5.0 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
+oslo.concurrency>=3.8.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 23d23c7..d0b8e20 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -7 +7 @@ hacking<0.11,>=0.10.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.serialization 2.8.0 release (newton)

2016-06-09 Thread no-reply
We are stoked to announce the release of:

oslo.serialization 2.8.0: Oslo Serialization library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.serialization

With package available at:

https://pypi.python.org/pypi/oslo.serialization

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

For more details, please see below.

Changes in oslo.serialization 2.7.0..2.8.0
--

4fdaeff Replace TypeError by ValueError in msgpackutils
8a4cac9 Updated from global requirements
9bb6d42 Updated from global requirements
bfb1536 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_serialization/jsonutils.py   | 2 +-
oslo_serialization/msgpackutils.py| 6 +++---
requirements.txt  | 2 +-
test-requirements.txt | 2 +-
6 files changed, 12 insertions(+), 9 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4460946..6872a60 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13 +13 @@ msgpack-python>=0.4.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 78669a5..61e43b4 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@ hacking<0.11,>=0.10.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.log 3.9.0 release (newton)

2016-06-09 Thread no-reply
We are thrilled to announce the release of:

oslo.log 3.9.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.8.0..3.9.0


070cc7a Updated from global requirements
991d8f0 Make available to log encoded strings as arguments
5c55189 Updated from global requirements
3e3471f Fix typo: 'Olso' to 'Oslo'
3b45fbc Updated from global requirements
93dd44f Convert unicode data to utf-8 before calling syslog.syslog()
ea4b9d0 Updated from global requirements
77355b1 Use new logging specific method for context info
6a36cff Reduce READ_FREQ and TIMEOUT for watch-file
48920ea Improve olso.log test coverage for edge cases

Diffstat (except docs and test files)
-

oslo_log/formatters.py |  70 +++
oslo_log/handlers.py   |   9 +-
oslo_log/log.py|  17 
oslo_log/watchers.py   |   4 +-
releasenotes/source/conf.py|  10 +--
requirements.txt   |   4 +-
setup.cfg  |   2 +-
test-requirements.txt  |   2 +-
12 files changed, 286 insertions(+), 81 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 59ec3f1..c8905bf 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ six>=1.9.0 # MIT
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 6ebbe26..c2d83c8 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +12 @@ testtools>=1.4.0 # MIT
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.i18n 3.7.0 release (newton)

2016-06-09 Thread no-reply
We are jazzed to announce the release of:

oslo.i18n 3.7.0: Oslo i18n library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.i18n

With package available at:

https://pypi.python.org/pypi/oslo.i18n

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

For more details, please see below.

Changes in oslo.i18n 3.6.0..3.7.0
-

4b33a2c Imported Translations from Zanata
8845373 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_i18n/locale/de/LC_MESSAGES/oslo_i18n.po| 14 +++---
oslo_i18n/locale/en_GB/LC_MESSAGES/oslo_i18n.po |  8 
oslo_i18n/locale/es/LC_MESSAGES/oslo_i18n.po|  8 
oslo_i18n/locale/fr/LC_MESSAGES/oslo_i18n.po|  8 
oslo_i18n/locale/it/LC_MESSAGES/oslo_i18n.po|  8 
oslo_i18n/locale/ja/LC_MESSAGES/oslo_i18n.po|  8 
oslo_i18n/locale/ko_KR/LC_MESSAGES/oslo_i18n.po |  8 
oslo_i18n/locale/oslo_i18n.pot  | 23 ---
oslo_i18n/locale/pl_PL/LC_MESSAGES/oslo_i18n.po |  8 
oslo_i18n/locale/pt/LC_MESSAGES/oslo_i18n.po|  8 
oslo_i18n/locale/zh_CN/LC_MESSAGES/oslo_i18n.po |  8 
requirements.txt|  2 +-
test-requirements.txt   |  4 ++--
13 files changed, 46 insertions(+), 69 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index be4eb38..8340453 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
+Babel>=2.3.4 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index fa71f43..758c7d5 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +10 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD
@@ -15 +15 @@ coverage>=3.6 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.concurrency 3.10.0 release (newton)

2016-06-09 Thread no-reply
We are psyched to announce the release of:

oslo.concurrency 3.10.0: Oslo Concurrency library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

3.10.0
^^^^^^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.concurrency 3.9.0..3.10.0
-

4c60f8e Imported Translations from Zanata
bbcb1ad Updated from global requirements
9a36c18 Add reno for releasenotes management

Diffstat (except docs and test files)
-

.gitignore |   3 +
.../de/LC_MESSAGES/oslo_concurrency-log-info.po|  19 ++
.../locale/oslo_concurrency-log-info.pot   |  25 --
oslo_concurrency/locale/oslo_concurrency.pot   |  95 ---
oslo_concurrency/version.py|  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml  |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 273 +
releasenotes/source/index.rst  |   8 +
releasenotes/source/unreleased.rst |   5 +
requirements.txt   |   4 +-
test-requirements.txt  |   3 +-
tox.ini|   3 +
14 files changed, 336 insertions(+), 123 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index aa28515..51b3c76 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ iso8601>=0.1.11 # MIT
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index d9b4f55..8418f39 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9 +9 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6' # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD
@@ -13,0 +14 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.6.2 # Apache2





[openstack-dev] [new][oslo] oslo.config 3.11.0 release (newton)

2016-06-09 Thread no-reply
We are excited to announce the release of:

oslo.config 3.11.0: Oslo Configuration API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

Changes in oslo.config 3.10.0..3.11.0
-

613fdcf Fix typo: 'olso' to 'oslo'
1b3af11 Return [] for .config_dirs when config files are not parsed
75e1c30 generator: format string default value for List type properly
90e8184 Updated from global requirements
792a43f Updated from global requirements
1c02ce8 Make sure ConfigType is an abstract class
20e6e90 Added i18n formatting to log messages
d3a4c98 Remove duplicated code in method test_equal of HostnameTypeTests
77505c7 Incorrect group name when deprecated_group is not specified
a671f9a Disallow config option name as same as attribute of ConfigOpts

Diffstat (except docs and test files)
-

oslo_config/_i18n.py| 50 ++
oslo_config/cfg.py  | 61 +
oslo_config/generator.py| 11 ---
oslo_config/sphinxext.py|  2 +-
oslo_config/types.py|  3 ++
releasenotes/source/conf.py | 10 +++---
requirements.txt|  1 +
test-requirements.txt   |  2 +-
tox.ini |  3 ++
13 files changed, 231 insertions(+), 51 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e0a5ef1..bfa7a9b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8,0 +9 @@ stevedore>=1.10.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 47f809d..966fc1d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -29 +29 @@ oslo.i18n>=2.1.0 # Apache-2.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD





[openstack-dev] [new][oslo] oslo.cache 1.9.0 release (newton)

2016-06-09 Thread no-reply
We are amped to announce the release of:

oslo.cache 1.9.0: Cache storage for Openstack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

1.9.0
^^^^^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.cache 1.8.0..1.9.0
--

f6108b0 Updated from global requirements
3e8d5eb Add reno for releasenotes management

Diffstat (except docs and test files)
-

.gitignore|   3 +
oslo_cache/version.py |  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml |   3 +
releasenotes/source/_static/.placeholder  |   0
releasenotes/source/_templates/.placeholder   |   0
releasenotes/source/conf.py   | 273 ++
releasenotes/source/index.rst |   8 +
releasenotes/source/unreleased.rst|   5 +
requirements.txt  |   4 +-
test-requirements.txt |   3 +-
tox.ini   |   3 +
11 files changed, 317 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 35f4848..3c3ac71 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ six>=1.9.0 # MIT
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -10 +10 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 11648dd..42f2881 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@ hacking<0.11,>=0.10.0
-mock>=1.2 # BSD
+mock>=2.0 # BSD
@@ -8,0 +9 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.6.2 # Apache2





[openstack-dev] [new][oslo] oslo.context 2.5.0 release (newton)

2016-06-09 Thread no-reply
We are happy to announce the release of:

oslo.context 2.5.0: Oslo Context library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.context

With package available at:

https://pypi.python.org/pypi/oslo.context

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

For more details, please see below.

2.5.0
^^^^^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.context 2.4.0..2.5.0


0617412 Add reno for releasenotes management

Diffstat (except docs and test files)
-

.gitignore|   3 +
oslo_context/version.py   |  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml |   3 +
releasenotes/source/_static/.placeholder  |   0
releasenotes/source/_templates/.placeholder   |   0
releasenotes/source/conf.py   | 273 ++
releasenotes/source/index.rst |   8 +
releasenotes/source/unreleased.rst|   5 +
test-requirements.txt |   1 +
tox.ini   |   3 +
10 files changed, 314 insertions(+)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index fe145a5..1db9871 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -11,0 +12 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+reno>=1.6.2 # Apache2





[openstack-dev] [new][oslo] mox3 0.16.0 release (newton)

2016-06-09 Thread no-reply
We are delighted to announce the release of:

mox3 0.16.0: Mock object framework for Python

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/mox3

With package available at:

https://pypi.python.org/pypi/mox3

Please report issues through launchpad:

http://bugs.launchpad.net/python-mox3

For more details, please see below.

Changes in mox3 0.15.0..0.16.0
--

c09ec5b Updated from global requirements
7329b2e Correct spelling of occurrences

Diffstat (except docs and test files)
-

requirements.txt   | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 25cfc5d..fe04f92 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD





[openstack-dev] [new][oslo] debtcollector 1.5.0 release (newton)

2016-06-09 Thread no-reply
We are jubilant to announce the release of:

debtcollector 1.5.0: A collection of Python deprecation patterns and
strategies that help you collect your technical debt in a non-
destructive manner.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/debtcollector

With package available at:

https://pypi.python.org/pypi/debtcollector

Please report issues through launchpad:

http://bugs.launchpad.net/debtcollector

For more details, please see below.

Changes in debtcollector 1.4.0..1.5.0
-

4d766d6 Updated from global requirements
fe22a47 Fix renamed_kwarg to preserve argspec
1f5816a Add tests for decorated argspec preservation

Diffstat (except docs and test files)
-

debtcollector/moves.py  | 11 +++
debtcollector/renames.py| 22 ++
debtcollector/updating.py   |  9 +++---
test-requirements.txt   |  2 +-
5 files changed, 69 insertions(+), 28 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index b2db7ee..d1f9b5d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -16 +16 @@ testtools>=1.4.0 # MIT
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD





Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-09 Thread Steven Dake (stdake)
Swapnil,

Thanks for triggering this community vote.  It was sorely overdue :).  I
counted 8 votes before I voted (I am all for moving everything to 1600UTC)
which is a majority of the CR team.  Note in the future I think we may
need to consider having split meetings again, if our community makeup
changes.

I have submitted a review to make the change official:
https://review.openstack.org/#/c/327845/1


Regards
-steve


On 6/8/16, 5:54 AM, "Swapnil Kulkarni (coolsvap)"  wrote:

>Dear Kollagues,
>
>Some time ago we discussed the requirement of alternating meeting
>times for Kolla weekly meeting due to major contributors from
>kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
>implemented alternate US/APAC meeting times.
>
>With kolla-mesos not active anymore and looking at the current active
>contributors, I wish to reinstate the UTC 1600 time for all Kolla
>Weekly meetings.
>
>Please let me know your views.
>
>-- 
>Best Regards,
>Swapnil Kulkarni
>irc : coolsvap
>




Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-09 Thread Tony Breeds
On Fri, Jun 10, 2016 at 08:24:34AM +1000, Michael Still wrote:
> On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds 
> wrote:
> 
> > On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
> >
> > > Agreed, but it's the worked example part that we don't have yet,
> > > chicken/egg. So we can drop the hammer on all new things until someone
> > does
> > > it, which sucks, or hope that someone volunteers to work the first
> > example.
> >
> > I'll work with gus to find a good example in nova and have patches up
> > before
> > the mid-cycle.  We can discuss next steps then.
> >
> 
> Sorry to be a pain, but I'd really like that example to be non-trivial if
> possible. One of the advantages of privsep is that we can push the logic
> down closer to the privileged code, instead of just doing something "close"
> and then parsing. I think reinforcing that idea in the sample code is
> important.

I think *any* change will show that.  I wanted to pick something achievable in
the short timeframe.

The example I'm thinking of is nova/virt/libvirt/utils.py:update_mtime()

 * It will provide a lot of the boilerplate
 * Show that we can now replace an exec with pure Python code.
 * Show how you need to retrieve data from a trusted source on the privileged
   side
 * Migrate testing
 * Remove an entry from compute.filters

Once that's in place, chown() in the same file is probably a quick fix.
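
For reference, here is a minimal, stand-alone sketch of what such a conversion
could look like. The decorator below is a trivial stand-in for oslo.privsep's
priv_context.entrypoint (so the sketch runs without oslo.privsep installed),
and the helper name mirrors the nova utility being discussed; treat the exact
shape as an assumption, not the final patch:

```python
import os
import tempfile
import time


def entrypoint(func):
    """Trivial stand-in for oslo.privsep's priv_context.entrypoint.

    In a real conversion this decorator ships the call to a privileged
    helper daemon over a channel; here it just calls through so the
    sketch stays runnable without oslo.privsep installed.
    """
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@entrypoint
def update_mtime(path):
    """Pure-Python replacement for an execed 'touch'-style helper.

    os.utime(path, None) sets atime and mtime to the current time,
    which is what the rootwrapped exec used to do.
    """
    os.utime(path, None)


if __name__ == '__main__':
    fd, target = tempfile.mkstemp()
    os.close(fd)
    os.utime(target, (0, 0))            # force an ancient mtime
    update_mtime(target)
    assert abs(os.stat(target).st_mtime - time.time()) < 10
    os.unlink(target)
```

The real change would register the function under a nova privsep context and
drop the matching touch entry from compute.filters; the decorator above only
mimics the call shape.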

Is it super helpful? Does it have a measurable impact on performance or
security? The answer is probably "no".

I still think it has value.

Handling qemu-img is probably best done by creating os-qemu (or similar) and
designing from the ground up with privsep in mind.  Glance and Cinder would
benefit from that also.  That, however, is way too big for this cycle.

Yours Tony.




Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Joshua Harlow

Jim Rollenhagen wrote:

1.) Nova <-> ironic interactions generally seem terrible?

I don't know if I'd call it terrible, but there's friction. Things that
are unchangable on hardware are just software configs in vms (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost

Out of curiosity, why ansible vs turning
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
(or something like it) into a tiny-wsgi-app (pick useful name here) that
has its own REST api (that looks pretty similar to the public functions
in that driver file)?

That's an interesting idea. I think a reason Bifrost doesn't just import
nova virt drivers is that they're likely _not_ a supported public API
(despite not having _'s at the front). Also, a lot of the reason Bifrost
exists is to enable users to get the benefits of all the baremetal
abstraction work done in Ironic without having to fully embrace all of
OpenStack's core. So while you could get a little bit of the stuff from
nova (like config drive building), you'd still need to handle network
address assignment, image management, etc. etc., and pretty soon you
start having to run a tiny glance and a tiny neutron. The Bifrost way
is the opposite: I just want a tiny Ironic, and _nothing_ else.


Ya, I'm just thinking that at a certain point

Oops, forgot to fill this out. I was just thinking that at a certain point it
might be easier to figure out how to extract that API (meh, whether it's public
or private) and just have someone make an executive decision around ironic
being a stand-alone thing or not (and a capable stand-alone thing, not a
sorta-standalone-thing).


So, I've been thinking about this quite a bit. We've also talked about
doing a v2 API (as evil as that may be) in Ironic here and there. We've
had lots of lessons learned from the v1 API, mostly that our API is
absolutely terrible for humans. I'd love to fix that (whether that
requires a v2 API or not is unclear, so don't focus on that).

I've noticed that people keep talking about the Nova driver API
not being public/stable/whatever in this thread - let's ignore that and
think bigger.

So, there's two large use cases for ironic that we support today:

* Ironic as a backend to nova. Operators still need to interact with the
   Ironic API for management, troubleshooting, and fixing issues that
   computers do not handle today.

* Ironic standalone - by this I mean ironic without nova. The primary
   deployment method here is using Bifrost, and I also call it the
   "better than cobbler" case. I'm not sure if people are using this
   without bifrost, or with other non-nova services, today. Users in this
   model, as I understand things, do not interact with the Ironic API
   directly (except maybe for troubleshooting).

There's other use cases I would like to support:

* Ironic standalone, without Bifrost. I would love for a deployer to be
   able to stand up Ironic as an end-user facing API, probably with
   Keystone, maybe with Neutron/Glance/Swift if needed. This would
   require a ton of discussion and work (e.g. ironic has no concept of
   tenants/projects today, we might want a scheduler, a concept of an
   instance, etc) and would be a very long road. The ideal solution to
   this is to break out the Compute API and scheduler to be separate from
   Nova, but that's an even longer road, so let's pretend I didn't say
   that and not devolve this thread into that conversation (yet).



That'd be nice; that is more of what I was thinking of as 'standalone', in
that it's standalone in the sense of not needing to go through another
compute layer [nova] to get to the bottom compute layer [ironic], whereas
Bifrost (from my small understanding of it) also discards the rest of the
OpenStack services (and therefore also discards the non-zero amount of
functionality they provide).



* Ironic as a backend to other things. Josh pointed out kubernetes
   somewhere, I'd love to be an official backend there. Heat today goes
   through Nova to get an ironic instance, it seems reasonable to have
   heat talk directly to ironic. Things like that. The amount of work
   here might depend on the application using ironic (e.g. I think k8s
   has its own scheduler, heat does not, right?).


Correct, I'm a big fan of this, and I think openstack as a community 
needs more of it... We as a community IMHO need to embrace and be best 
buddies (pick other word here) with other projects that are outside of 
openstack that are in what I would call the 'cloud family' because 

[openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-09 Thread Carl Baldwin
Hi,

You may or may not be aware of the vlan-aware-vms effort [1] in
Neutron.  If not, there is a spec and a fair number of patches in
progress for this.  Essentially, the goal is to allow a VM to connect
to multiple Neutron networks by tagging traffic on a single port with
VLAN tags.

This effort will have some effect on vif plugging because the datapath
will include some changes that will affect how vif plugging is done
today.

The design proposal for trunk ports with OVS adds a new bridge for
each trunk port.  This bridge will demux the traffic and then connect
to br-int with patch ports for each of the networks.  Rawlin Peters
has some ideas for expanding the vif capability to include this
wiring.
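
To make the proposed wiring concrete, here is a rough sketch of the ovs-vsctl
calls such plugging implies: one demux bridge per trunk port, plus a patch-port
pair into br-int for each subport VLAN. All bridge and port names here are
hypothetical illustrations, not the naming from the spec or from Rawlin's
proposal:

```python
def trunk_wiring_commands(trunk_id, vlans):
    """Return ovs-vsctl command lines to wire one trunk port.

    trunk_id: short identifier for the trunk (hypothetical naming).
    vlans: iterable of VLAN IDs, one per subport network.
    """
    tbr = 'tbr-%s' % trunk_id              # per-trunk demux bridge
    cmds = [['ovs-vsctl', 'add-br', tbr]]
    for vlan in vlans:
        tpatch = 'tpt-%s-%d' % (trunk_id, vlan)   # trunk-side patch port
        ipatch = 'tpi-%s-%d' % (trunk_id, vlan)   # br-int-side patch port
        # Tag the trunk-side patch port with the subport VLAN so the
        # per-trunk bridge demuxes traffic before it reaches br-int.
        cmds.append(['ovs-vsctl', 'add-port', tbr, tpatch,
                     'tag=%d' % vlan,
                     '--', 'set', 'interface', tpatch,
                     'type=patch', 'options:peer=%s' % ipatch])
        cmds.append(['ovs-vsctl', 'add-port', 'br-int', ipatch,
                     '--', 'set', 'interface', ipatch,
                     'type=patch', 'options:peer=%s' % tpatch])
    return cmds
```

For a trunk with two subport VLANs this yields five commands (one add-br plus
two per VLAN); the point is only to show where a vif-plugging extension would
have to hook in, not to prescribe the final datapath.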

There is also a proposal for connecting to linux bridges by using
kernel vlan interfaces.

This effort is pretty important to Neutron in the Newton timeframe.  I
wanted to send this out to start rounding up the reviewers and other
participants we need to see how we can start putting together a plan
for nova integration of this feature (via os-vif?).

Carl Baldwin

[1] https://review.openstack.org/#/q/topic:bp/vlan-aware-vms+-status:abandoned

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-09 Thread Tony Breeds
On Thu, Jun 09, 2016 at 02:16:10AM -0700, Sumit Naiksatam wrote:
> Hi Tony, The following repos should not be included in the EoL list since
> they will not be EoL'ed at this time:
> openstack/group-based-policy
> openstack/group-based-policy-automation
> openstack/group-based-policy-ui
> openstack/python-group-based-policy-client

Sure.

I think it's a bad idea, and it means we either need to tweak devstack-gate to
handle this case or you need to drop all but the unit tests from your check and
gate pipelines.

Yours Tony.




Re: [openstack-dev] [Fuel] Nominate Artur Svechnikov to the fuel-web-core team

2016-06-09 Thread Evgeniy L
Hi Dmitry,

It depends, but usually reviews take about half of one's working time, so I'm
not sure we can assume 25-30%. Also, finding a good reviewer is usually much
harder than finding someone who can write the code, so it would be much more
productive to encourage people to spend as much time as they can on making
the project better and helping other contributors than to restrict them to
reviewing code for no more than 2.5 hours.

Thanks,

On Thu, Jun 9, 2016 at 5:46 AM, Dmitry Klenov  wrote:

> Hi Folks,
>
> From technical standpoint I fully support Arthur to become core reviewer.
> I like thorough reviews that he is making.
>
> Although I have some concerns as well. Planned tasks for our team will not
> allow Arthur to spend more than 25-30% of his time for reviewing. If that
> is fine - my concerns are resolved.
>
> Thanks,
> Dmitry.
>
> On Thu, Jun 9, 2016 at 12:57 PM, Sergey Vasilenko wrote:
>
>> +1
>>
>>
>> /sv
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] [lbaas] Mid-Cycle proposed for the week of August 22nd

2016-06-09 Thread Michael Johnson
Just a reminder, we have a proposed mid-cycle meeting set for the week
of August 22nd in San Antonio.

If you would like to attend and have not yet signed up, please add
your name to the list on our etherpad:

https://etherpad.openstack.org/p/lbaas-octavia-newton-midcycle

Thank you,
Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-09 Thread Sumit Naiksatam
On Thu, Jun 9, 2016 at 3:10 PM, Ihar Hrachyshka  wrote:
>
>> On 10 Jun 2016, at 00:03, Sumit Naiksatam  wrote:
>>
>> On Thu, Jun 9, 2016 at 2:26 AM, Ihar Hrachyshka  wrote:
>>>
>>>> On 09 Jun 2016, at 11:16, Sumit Naiksatam  wrote:
>>>>
>>>> Hi Tony, The following repos should not be included in the EoL list since
>>>> they will not be EoL'ed at this time:
>>>> openstack/group-based-policy
>>>> openstack/group-based-policy-automation
>>>> openstack/group-based-policy-ui
>>>> openstack/python-group-based-policy-client
>>>
>>> Would you mind clarifying why you absolutely need to maintain those old 
>>> branches for those projects, and how do you plan to do it if no tempest 
>>> jobs will be able to install other components for you?
>>>
>>
>> We are continuing to fix bugs for kilo users.
>
> How are you supposed to validate that those fixes don’t break interactions 
> with other components?
>
We expect that any such fixes will be fairly contained, and well
covered by the UTs. In addition we will be doing our due diligence
with offline integration testing.

> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-09 Thread Michael Still
On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds wrote:

> On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
>
> > Agreed, but it's the worked example part that we don't have yet,
> > chicken/egg. So we can drop the hammer on all new things until someone
> does
> > it, which sucks, or hope that someone volunteers to work the first
> example.
>
> I'll work with gus to find a good example in nova and have patches up
> before
> the mid-cycle.  We can discuss next steps then.
>

Sorry to be a pain, but I'd really like that example to be non-trivial if
possible. One of the advantages of privsep is that we can push the logic
down closer to the privileged code, instead of just doing something "close"
and then parsing. I think reinforcing that idea in the sample code is
important.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][infra][qa] Ironic grenade work nearly complete

2016-06-09 Thread Jay Faulkner

A quick update:

The devstack-gate patch is currently merging.

There was some discussion about whether or not the Ironic grenade job 
should be in the check pipeline (even as -nv) for grenade, so I split 
that patch into two pieces so the less controversial part (adding the 
grenade-nv job to Ironic's check pipeline) could merge more easily.


https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for Ironic only)

https://review.openstack.org/#/c/327985/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for grenade)

Getting upgrade testing working will be a huge milestone for Ironic. 
Thanks to those who have already helped us make progress and those who 
will help us land these and see it at work.


Thanks in advance,
Jay Faulkner
OSIC

On 6/9/16 8:28 AM, Jim Rollenhagen wrote:

Hi friends,

We're two patches away from having grenade passing in our check queue!
This is a huge step forward for us, many thanks go to the numerous folks
that have worked on or helped somehow with this.

I'd love to push this across the line today as it's less than 10 lines
of changes between the two, and we have a bunch of work nearly done that
we'd like upgrade testing running against before merging.

So we need infra cores' help here.

https://review.openstack.org/#/c/316662/ - devstack-gate
Allow to pass OS_TEST_TIMEOUT for grenade job
1 line addition with an sdague +2.

https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue)
+7,-1 with an AJaeger +2.

Thanks in advance. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-09 Thread Ihar Hrachyshka

> On 10 Jun 2016, at 00:03, Sumit Naiksatam  wrote:
> 
> On Thu, Jun 9, 2016 at 2:26 AM, Ihar Hrachyshka  wrote:
>> 
>>> On 09 Jun 2016, at 11:16, Sumit Naiksatam  wrote:
>>> 
>>> Hi Tony, The following repos should not be included in the EoL list since 
>>> they will not be EoL'ed at this time:
>>> openstack/group-based-policy
>>> openstack/group-based-policy-automation
>>> openstack/group-based-policy-ui
>>> openstack/python-group-based-policy-client
>> 
>> Would you mind clarifying why you absolutely need to maintain those old 
>> branches for those projects, and how do you plan to do it if no tempest jobs 
>> will be able to install other components for you?
>> 
> 
> We are continuing to fix bugs for kilo users.

How are you supposed to validate that those fixes don’t break interactions with 
other components?

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-09 Thread Sumit Naiksatam
On Thu, Jun 9, 2016 at 2:26 AM, Ihar Hrachyshka  wrote:
>
>> On 09 Jun 2016, at 11:16, Sumit Naiksatam  wrote:
>>
>> Hi Tony, The following repos should not be included in the EoL list since 
>> they will not be EoL'ed at this time:
>> openstack/group-based-policy
>> openstack/group-based-policy-automation
>> openstack/group-based-policy-ui
>> openstack/python-group-based-policy-client
>
> Would you mind clarifying why you absolutely need to maintain those old 
> branches for those projects, and how do you plan to do it if no tempest jobs 
> will be able to install other components for you?
>

We are continuing to fix bugs for kilo users.

> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] theoretical race between live migration and resource audit?

2016-06-09 Thread Chris Friesen

Hi,

I'm wondering if we might have a race between live migration and the resource 
audit.  I've included a few people on the recipient list who have worked 
directly with this code in the past.


In _update_available_resource() we have code that looks like this:

instances = objects.InstanceList.get_by_host_and_node()
self._update_usage_from_instances()
migrations = objects.MigrationList.get_in_progress_by_host_and_node()
self._update_usage_from_migrations()


In post_live_migration_at_destination() we do this (updating the host and node 
as well as the task state):

instance.host = self.host
instance.task_state = None
instance.node = node_name
instance.save(expected_task_state=task_states.MIGRATING)


And in _post_live_migration() we update the migration status to "completed":
if migrate_data and migrate_data.get('migration'):
    migrate_data['migration'].status = 'completed'
    migrate_data['migration'].save()


Neither of the latter routines is serialized by the 
COMPUTE_RESOURCE_SEMAPHORE, so they can race with the code in 
_update_available_resource().
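For readers unfamiliar with the locking involved, the serialization in question is the usual lock-decorator pattern (nova's resource tracker uses an oslo.concurrency-style `synchronized` decorator). A minimal stdlib sketch of the idea, with illustrative names only, not nova's actual code:

```python
import threading

# Stand-in for nova's compute-resource lock.
COMPUTE_RESOURCE_SEMAPHORE = threading.Lock()

def synchronized(lock):
    """Run the decorated function while holding `lock`."""
    def wrap(fn):
        def inner(*args, **kwargs):
            with lock:
                return fn(*args, **kwargs)
        return inner
    return wrap

@synchronized(COMPUTE_RESOURCE_SEMAPHORE)
def update_available_resource():
    # Audits usage while holding the lock; routines that skip the
    # decorator (like the post-migration updates above) can interleave
    # with it freely.
    return 'audited under lock'

result = update_available_resource()
print(result)  # audited under lock
```

The race described below is possible precisely because the migration-completion paths do not take this lock.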



I'm wondering if we can have a situation like this:

1) migration in progress
2) We start running _update_available_resource() on destination, and we call 
instances = objects.InstanceList.get_by_host_and_node().  This will not return 
the migration, because it is not yet on the destination host.
3) The migration completes and we call post_live_migration_at_destination(), 
which sets the host/node/task_state on the instance.
4) In _update_available_resource() on destination, we call migrations = 
objects.MigrationList.get_in_progress_by_host_and_node().  This will return the 
migration for the instance in question, but when we run 
self._update_usage_from_migrations() the uuid will not be in "instances" and so 
we will use the instance from the newly-queried migration.  We will then ignore 
the instance because it is not in a "migrating" state.
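The interleaving above can be reduced to a toy model (this is not nova code; all names are made up purely to show the ordering):

```python
# Toy model of the suspected race between the resource audit on the
# destination host and live-migration completion.

class Instance:
    def __init__(self, uuid, host, task_state):
        self.uuid, self.host, self.task_state = uuid, host, task_state

class Migration:
    def __init__(self, instance, status):
        self.instance, self.status = instance, status

inst = Instance('uuid-1', 'source', 'migrating')
mig = Migration(inst, 'running')

# Step 2: the audit queries instances by host/node on the destination.
# The instance is still on 'source', so the snapshot is empty.
instances = [i for i in [inst] if i.host == 'destination']

# Step 3: migration completes; post_live_migration_at_destination()
# moves the instance and clears its task_state.
inst.host = 'destination'
inst.task_state = None

# Step 4: the audit queries in-progress migrations and finds this one,
# but the uuid is not in the earlier snapshot and the freshly loaded
# instance is no longer 'migrating', so its usage is dropped.
tracked_uuids = {i.uuid for i in instances}
lost = [m for m in [mig]
        if m.instance.uuid not in tracked_uuids
        and m.instance.task_state != 'migrating']

print(len(instances))  # 0 -> not counted via the instance list
print(len(lost))       # 1 -> and ignored via the migration list too
```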


Am I imagining things, or is there a race here?  If so, the negative effects 
would be that the resources of the migrating instance would be "lost", allowing 
a newly-scheduled instance to claim the same resources (PCI devices, pinned 
CPUs, etc.)


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-09 Thread Claudiu Belu
Hello again,

We've set the use_glance_v1 nova config option to False on the Hyper-V CI. All good.

[1] http://64.119.130.115/nova/278835/13/results.html.gz

Best regards,

Claudiu Belu


From: Claudiu Belu [cb...@cloudbasesolutions.com]
Sent: Wednesday, June 08, 2016 4:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

Hello,

Sounds good.

We'll be testing glance v2 in the Hyper-V CI as well, but at first glance 
there don't seem to be any issues with this. We'll switch to glance v2 as 
soon as we're sure nothing will blow up. :)

Best regards,

Claudiu Belu


From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Tuesday, June 07, 2016 11:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

I tested the glance v2 stack (glance v1 disabled) using a devstack
change here:

https://review.openstack.org/#/c/325322/

Now that the changes are merged up through the base nova image proxy and
the libvirt driver, and we just have hyper-v/xen driver changes for that
series, we should look at gating on this configuration.

I was originally thinking about adding a new job for this, but it's
probably better if we just change one of the existing integrated gate
jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.

Does anyone have an issue with that? Glance v1 is deprecated and the
configuration option added to nova (use_glance_v1) defaults to True for
compat but is deprecated, and the Nova team plans to drop its v1 proxy
code in Ocata. So it seems like changing config to use v2 in the gate
jobs should be a non-issue. We'd want to keep at least one integrated
gate job using glance v1 to make sure we don't regress anything there in
Newton.
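For reference, the toggle under discussion amounts to a one-line nova.conf change (section placement shown as commonly documented for the glance option group; verify against your nova release):

```ini
[glance]
# Deprecated option; defaults to True for backward compatibility.
# Setting it to False makes nova exercise the image API v2 code paths.
use_glance_v1 = False
```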

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] OpenStack Swift 2.8.0 has been released

2016-06-09 Thread John Dickinson
I'm happy to announce that OpenStack Swift 2.8.0 has been released.

This release includes several feature improvements and important
bug fixes, and I recommend that everyone upgrade as soon as possible.
As always, you can upgrade to this version with no end-user downtime.

The full release notes can be found at
https://github.com/openstack/swift/blob/master/CHANGELOG.

The release is available at https://tarballs.openstack.org/swift/.

Feature highlights:

  * Bulk deletes now use concurrency to speed up the process.
This will result in faster API responses to end users. The amount of
concurrency used is configurable by the operator, and it defaults to 2.

  * Server-side copy has been refactored to be entirely encapsulated
in middleware. Not only does this make the code cleaner and easier to
support external middleware, it is also necessary for the
upcoming server-side encryption functionality.

  * The `fallocate_reserve` setting can now be a percent of drive
capacity instead of just a fixed number of bytes.

  * The deprecated `threads_per_disk` setting has been removed.
Deployers are encouraged to use `servers_per_port` instead.
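The bounded-concurrency pattern behind the first feature above can be sketched with the standard library (an illustration only, not Swift's actual code; `delete_object` is a placeholder for the per-object DELETE request):

```python
from concurrent.futures import ThreadPoolExecutor

def delete_object(name):
    # Placeholder for the per-object DELETE request.
    return (name, 204)

def bulk_delete(names, concurrency=2):
    # Issue deletes with at most `concurrency` requests in flight,
    # mirroring the operator-configurable default of 2.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(delete_object, names))

results = bulk_delete(['obj-%d' % i for i in range(5)])
print(results[0])  # ('obj-0', 204)
```

`pool.map` preserves input order, so responses can be reported back to the client in the order the objects were requested.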

Bug-fix highlights:

  * Fixed an infinite recursion issue when syslog is down.

  * Fixed a rare case where a backend failure during a read could
result in a missing byte in the response body.

  * `disable_fallocate` now also correctly disables `fallocate_reserve`.

  * Fixed an issue where a single-replica configuration for account or
container DBs could result in the DB being inadvertently deleted if
it was placed on a handoff node.

  * Reclaim isolated .meta files if they are older than the `reclaim_age`.


This release is the work of 42 different developers, including 16
first-time contributors. Thank you to the whole community for your work
during this release.

--John






Re: [openstack-dev] [OSSN 0063] Nova and Cinder key manager for Barbican misuses cached credentials

2016-06-09 Thread Sean McGinnis
On Thu, Jun 09, 2016 at 12:52:03PM -0700, Nathan Kinder wrote:
> Nova and Cinder key manager for Barbican misuses cached credentials
> ---
> 
> ### Summary ###
> During the Icehouse release the Cinder and Nova projects added a feature
> that supports storage volume encryption using keys stored in Barbican.
> The Barbican key manager, that is part of Nova and Cinder, had a bug
> that could cause an authorized user to lose access to an encryption key
> or allow the wrong user to gain access to an encryption key.
> 
> ### Affected Services / Software ###
> Cinder: Icehouse, Juno, Kilo, Liberty
> Nova: Juno, Kilo, Liberty
> 
> ...
>
> A specification for a fix has been merged for the Mitaka release of both
> Nova and Cinder. Additionally these patches have been backported to
> stable/kilo and stable/liberty.
> 
> ### Contacts / References ###
> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0063
> Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1523646
> OpenStack Security ML : openstack-secur...@lists.openstack.org
> OpenStack Security Group : https://launchpad.net/~openstack-ossg
> Nova patch for Mitaka : https://review.openstack.org/254358/
> Nova patch for stable/liberty: https://review.openstack.org/288490
> Cinder patch for Mitaka : https://review.openstack.org/254357/
> Cinder patch for stable/liberty: https://review.openstack.org/266678
> Cinder patch for stable/kilo: https://review.openstack.org/266680
> CVE : N/A
> 

Thanks for the detailed write up Nathan!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-09 Thread Clint Byrum
Excerpts from Michael Barton's message of 2016-06-09 15:59:24 -0500:
> On Thu, Jun 9, 2016 at 2:49 PM, Clint Byrum  wrote:
> >
> > Agreed it isn't done in uvloop. But it is done in libuv and the uvloop
> > devs agree it should be done. So this is the kind of thing where the
> > community can invest in python + C to help solve problems thought only
> > solvable by other languages.
> 
> 
> I mean, if someone wants to figure out a file server in python that can
> compete in any way with a go version, I'm totally down for rewriting swift
> to target some other python-based architecture.
> 
> But personally, my desire to try to build a universe where such a thing is
> possible is pretty low.  Because I've been fighting with it for years, and
> go already works great and there's nothing wrong with it.
> 

Mike, the whole entire crux of this thread, and Monty's words, is that
this sort of sentiment is hard to ignore, but it's even harder to ignore
the massive amount of inertia and power there is in having a community
that can all work on each others' code without investing a lot of time
in learning a new language.

That inertia is entirely the reason why other languages have surpassed
Python in some areas like concurrency. It takes longer to turn a
massive community going really hard in one direction than it does to
just start off heading in that direction in the first place. But that
turn is starting, and I for one think it's worth everyone's time to take
a hard look at whether or not we can in fact get it done together.

Nobody will force you to, but what I think Monty, and the rest of the
TC members who have voted to stay the course, are asking us all to do,
is to try to throw what we can at python solutions for these problems.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Kyle Mestery
On Thu, Jun 9, 2016 at 4:19 PM, Assaf Muller  wrote:
> On Thu, Jun 9, 2016 at 5:06 PM, Kyle Mestery  wrote:
>> On Thu, Jun 9, 2016 at 2:11 PM, Assaf Muller  wrote:
>>> On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff  wrote:
>>>> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>>>>> I'm trying to reconcile differences and similarities between OVN and
>>>>> OpenDayLight in my head. Can someone help me compare these two
>>>>> technologies
>>>>> and explain if they solve the same problem, or if there are fundamental
>>>>> differences between them?
>>>>
>>>> OVN implements network virtualization for clouds of VMs or containers or
>>>> a mix.  Open Daylight is a platform for managing networks that can do
>>>> anything you want.
>>>
>>> That is true, but when considering a Neutron backend for OpenStack
>>> deployments, people choose a subset of OpenDaylight projects and the
>>> end result is a solution that is comparable in scope and feature set.
>>> There are objective differences in where the projects are in their
>>> lifetime, the HA architecture, the project's consistency model between
>>> the neutron-server process and the backend, the development velocity,
>>> the community size and the release model.
>>>
>> Fundamentally, the main difference is that OVN does one thing: It does
>> network virtualization. OpenDaylight _MAY_ do network virtualization,
>> among other things, and it likely does network virtualization in many
>> different ways. Like Ben said:
>>
>> "Open Daylight is a platform for managing networks that can do
>> anything you want."
>
> I agree, but I don't think that was what was asked or makes for an
> interesting discussion. I think the obvious comparison is OVN to
> ML2/ODL using the ovsdb ODL project.
>
OK, I'll bite. :)

Fundamentally, a project's focus is absolutely important, especially
when a comparison is asked. When you ask the question: "How can OVN or
ODL solve being a backend layer for Neutron?", for example, the answer
with OVN is simple: You do it this way, and it works. For ODL, the
question is much more nuanced, as it depends on *what* components in
ODL you are using.

Also, yes, the comparison between "ML2+python agents" vs. "ML2+OVN" is
much more relevant IMHO.

Thanks!
Kyle

>>
>> Thanks,
>> Kyle
>>

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Assaf Muller
On Thu, Jun 9, 2016 at 5:06 PM, Kyle Mestery  wrote:
> On Thu, Jun 9, 2016 at 2:11 PM, Assaf Muller  wrote:
>> On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff  wrote:
>>> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>>>> I'm trying to reconcile differences and similarities between OVN and
>>>> OpenDayLight in my head. Can someone help me compare these two technologies
>>>> and explain if they solve the same problem, or if there are fundamental
>>>> differences between them?
>>>
>>> OVN implements network virtualization for clouds of VMs or containers or
>>> a mix.  Open Daylight is a platform for managing networks that can do
>>> anything you want.
>>
>> That is true, but when considering a Neutron backend for OpenStack
>> deployments, people choose a subset of OpenDaylight projects and the
>> end result is a solution that is comparable in scope and feature set.
>> There are objective differences in where the projects are in their
>> lifetime, the HA architecture, the project's consistency model between
>> the neutron-server process and the backend, the development velocity,
>> the community size and the release model.
>>
> Fundamentally, the main difference is that OVN does one thing: It does
> network virtualization. OpenDaylight _MAY_ do network virtualization,
> among other things, and it likely does network virtualization in many
> different ways. Like Ben said:
>
> "Open Daylight is a platform for managing networks that can do
> anything you want."

I agree, but I don't think that was what was asked or makes for an
interesting discussion. I think the obvious comparison is OVN to
ML2/ODL using the ovsdb ODL project.

>
> Thanks,
> Kyle
>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Lucas Alvares Gomes
Hi,

>> I agree in general with the idea but I think it needs a tad more
>> context. We need to remember that Ironic (ex-Nova Baremetal) was
>> created to fill a gap in OpenStack that was missing for TripleO
>> project to get off the ground. That was the problem being solved and
>> these aspects are reflected in the ReST API: Being admin-only, not
>> "human-friendly" (standalone came later), etc...
>
> Sorry, I didn't mean to slag on people here. In fact, I tried to come up
> with a way to say "no offense" but couldn't figure the words out. Ironic
> did start with a very specific use case, it's come a super long way, and
> you all did what you had to do to get things going. For that I'm forever
> indebted to you. :)
>

No offense taken at all, I also do think that the current API is
absolutely terrible for humans! I just wanted to point out that it
wasn't actually architected for it, plus, IIRC nobody in the project
at the time had much - if any - experience designing ReST APIs.

> ++, I do agree we could make a v2 faster than shoehorning things into
> v1. The "evil" part of my comment is around removing v1 in the future,
> actually. No matter the project, it's a long hard road, and will take
> years to do (and even then some tools will likely be left old and
> broken).
>

Yeah on that perspective it's evil indeed :-/

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-09 Thread Tony Breeds
On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:

> Agreed, but it's the worked example part that we don't have yet,
> chicken/egg. So we can drop the hammer on all new things until someone does
> it, which sucks, or hope that someone volunteers to work the first example.

I'll work with gus to find a good example in nova and have patches up before
the mid-cycle.  We can discuss next steps then.

Yours Tony.




Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Kyle Mestery
On Thu, Jun 9, 2016 at 2:11 PM, Assaf Muller  wrote:
> On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff  wrote:
>> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>>> I'm trying to reconcile differences and similarities between OVN and
>>> OpenDayLight in my head. Can someone help me compare these two technologies
>>> and explain if they solve the same problem, or if there are fundamental
>>> differences between them?
>>
>> OVN implements network virtualization for clouds of VMs or containers or
>> a mix.  Open Daylight is a platform for managing networks that can do
>> anything you want.
>
> That is true, but when considering a Neutron backend for OpenStack
> deployments, people choose a subset of OpenDaylight projects and the
> end result is a solution that is comparable in scope and feature set.
> There are objective differences in where the projects are in their
> lifetime, the HA architecture, the project's consistency model between
> the neutron-server process and the backend, the development velocity,
> the community size and the release model.
>
Fundamentally, the main difference is that OVN does one thing: It does
network virtualization. OpenDaylight _MAY_ do network virtualization,
among other things, and it likely does network virtualization in many
different ways. Like Ben said:

"Open Daylight is a platform for managing networks that can do
anything you want."

Thanks,
Kyle

>>
>



Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-09 Thread Michael Barton
On Thu, Jun 9, 2016 at 2:49 PM, Clint Byrum  wrote:
>
> Agreed it isn't done in uvloop. But it is done in libuv and the uvloop
> devs agree it should be done. So this is the kind of thing where the
> community can invest in python + C to help solve problems thought only
> solvable by other languages.


I mean, if someone wants to figure out a file server in python that can
compete in any way with a go version, I'm totally down for rewriting swift
to target some other python-based architecture.

But personally, my desire to try to build a universe where such a thing is
possible is pretty low.  Because I've been fighting with it for years, and
go already works great and there's nothing wrong with it.

- Mike


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Adrian Otto
Rackspace is willing to host in Austin, TX; San Antonio, TX; or San 
Francisco, CA.

--
Adrian

On Jun 7, 2016, at 1:35 PM, Hongbin Lu wrote:

Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let 
us know if you are interested in hosting us.

Best regards,
Hongbin


[openstack-dev] [OSSN 0063] Nova and Cinder key manager for Barbican misuses cached credentials

2016-06-09 Thread Nathan Kinder
Nova and Cinder key manager for Barbican misuses cached credentials
---

### Summary ###
During the Icehouse release the Cinder and Nova projects added a feature
that supports storage volume encryption using keys stored in Barbican.
The Barbican key manager, that is part of Nova and Cinder, had a bug
that could cause an authorized user to lose access to an encryption key
or allow the wrong user to gain access to an encryption key.

### Affected Services / Software ###
Cinder: Icehouse, Juno, Kilo, Liberty
Nova: Juno, Kilo, Liberty

### Discussion ###
The Barbican key manager is a feature that is part of Nova and Cinder to
allow those projects to create and retrieve keys in Barbican. The key
manager includes a cache function that allows for a copy_key() operation
to work while only validating the token once with Keystone.

This cache function had a bug such that the cached token was used for
operations where it was no longer valid. The symptoms of this error
vary, but include a user not being able to access their key or the wrong
user being able to access a key.

An affected user would see an error similar to this in their cinder log:

 begin cinder.log sample snippet 
2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The
request you have made requires authentication. (Disable debug mode to
suppress these details.) (HTTP 401) (Request-ID:
req-d2c52e0b-c16d-43ec-a7a0-763f1270)
 end cinder.log sample snippet 
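
The bug class described above can be sketched in a few lines. This is NOT the 
actual Nova/Cinder Barbican key manager code; the class and names are invented 
for illustration. The point is that a token cache must be keyed by the user 
whose credentials were validated, otherwise a token cached for one request can 
be reused for an operation where it is no longer valid:

```python
# Hedged illustration only -- not the real key manager. A cache that is
# keyed per-user cannot hand Alice's validated token to Bob's request.

class TokenCache:
    def __init__(self, validate):
        self._validate = validate  # callable(user_id, token) -> bool
        self._cache = {}           # user_id -> token already validated

    def get_token(self, user_id, token):
        """Validate `token` for `user_id`, hitting Keystone only once."""
        if self._cache.get(user_id) == token:
            return token  # cached, and cached *for this user*
        if not self._validate(user_id, token):
            # mirrors the HTTP 401 in the cinder.log snippet above
            raise PermissionError("The request you have made requires "
                                  "authentication. (HTTP 401)")
        self._cache[user_id] = token
        return token
```

A buggy variant that caches the token without the user key would return one 
user's cached token for another user's request, matching the "wrong user being 
able to access a key" symptom described above.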

### Recommended Actions ###
Users wishing to use the Barbican key manager to provide keys for
volume encryption with Nova and Cinder should ensure they are using a
patched version.

A specification for a fix has been merged for the Mitaka release of both
Nova and Cinder. Additionally these patches have been backported to
stable/kilo and stable/liberty.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0063
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1523646
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Nova patch for Mitaka : https://review.openstack.org/254358/
Nova patch for stable/liberty: https://review.openstack.org/288490
Cinder patch for Mitaka : https://review.openstack.org/254357/
Cinder patch for stable/liberty: https://review.openstack.org/266678
Cinder patch for stable/kilo: https://review.openstack.org/266680
CVE : N/A





Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-09 Thread Clint Byrum
Excerpts from Michael Barton's message of 2016-06-09 14:01:11 -0500:
> On Thu, Jun 9, 2016 at 9:58 AM, Ben Meyer  wrote:
> 
> >
> > uvloop (first commit 2015-11-01) is newer than Swift's hummingbird
> > (2015-04-20, based on
> >
> > https://github.com/openstack/swift/commit/a0e300df180f7f4ca64fc1eaf3601a1a73fc68cb
> > and github network graph) so it would not have been part of the
> > consideration.
> >
> 
> And it still wouldn't be, since it doesn't solve the problem.
> 

Agreed it isn't done in uvloop. But it is done in libuv and the uvloop
devs agree it should be done. So this is the kind of thing where the
community can invest in python + C to help solve problems thought only
solvable by other languages.



[openstack-dev] [rally] "Failed to create the requested number of tenants" error

2016-06-09 Thread Nate Johnston
Rally folks,

I am working with an engineer to get him up to speed on Rally on a new
development.  He is trying out running a few tests from the samples
directory, like samples/tasks/scenarios/nova/list-hypervisors.yaml - but
he keeps getting the error "Completed: Exit context: `users`\nTask
config is invalid: `Unable to setup context 'users': 'Failed to create
the requested number of tenants.'`"

This is against an Icehouse environment with Mitaka Rally; when I run
Rally with debug logging I see: 

2016-06-08 18:59:24.692 11197 ERROR rally.common.broker EndpointNotFound: admin 
endpoint for identity service in  region not found

However I note that $OS_AUTH_URL is set in the Rally deployment... see
http://paste.openstack.org/show/509002/ for the full log.

Any ideas you could give me would be much appreciated.  Thanks!

--N.
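
For reference, here is a minimal sketch of the kind of catalog check that 
fails here. It is plain Python, not Rally or keystoneclient code, and the 
function name and catalog layout are illustrative assumptions. Rally's `users` 
context creates tenants via the admin identity endpoint, so a catalog entry 
that lacks an adminURL for the region reproduces the EndpointNotFound error 
in the log above:

```python
# Hypothetical helper: look up the identity service adminURL in a parsed
# Keystone v2-style service catalog.

def find_identity_admin_url(catalog, region=None):
    """Return the identity service adminURL for `region`, or None."""
    for service in catalog:
        if service.get("type") != "identity":
            continue
        for endpoint in service.get("endpoints", []):
            if region and endpoint.get("region") != region:
                continue
            if "adminURL" in endpoint:
                return endpoint["adminURL"]
    return None

catalog = [{
    "type": "identity",
    "name": "keystone",
    "endpoints": [{
        "region": "RegionOne",
        "publicURL": "http://10.0.0.1:5000/v2.0",
        # no adminURL entry -> tenant creation fails
    }],
}]

print(find_identity_admin_url(catalog, "RegionOne"))  # -> None
```

Setting $OS_AUTH_URL only tells Rally where to authenticate; the admin 
endpoint still has to be registered in the catalog for that region.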





Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Hongbin Lu
Thanks to CERN for offering to host. We will discuss the dates and location in 
the next team meeting [1].

[1] 
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-06-14_1600_UTC

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the antimatter factory and 
superconducting magnet test labs before or afterwards, if anyone is interested…

Tim

From: Spyros Trigazis
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
>




Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?

2016-06-09 Thread Loo, Ruby
Thank you Sam and Dmitry for your thoughts. It will (most likely) be one of the 
topics of discussion at the mid-cycle [1]. The actual schedule hasn't been 
decided yet, so stay tuned. Be there for an invigorating, heated, and fun time :)

--ruby

[1] https://etherpad.openstack.org/p/ironic-newton-midcycle

From: "Sam Betts (sambetts)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, June 3, 2016 at 7:22 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?

I personally think that we need IPA versioning, but not so that we can pin a 
version. We need versioning so that we can do more intelligent graceful 
degradation in Ironic without just watching for errors and guessing if a 
feature isn’t available. If we add a new feature in Ironic that requires a 
feature in IPA, then we should add code in Ironic that checks the version of 
IPA (either via an API or reported at lookup) and turns on/off that feature 
based on the version of IPA we’re talking to. Doing this would allow for both 
backwards and forward IPA version compatibility:

Old Ironic with newer IPA: Should just work
New Ironic with old IPA: Ironic should intelligently turn off unsupported 
features, with Warnings in the logs telling the operator if a feature is 
skipped.
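
Sam's proposal amounts to version-gated feature flags. As a rough sketch (the 
feature names, version numbers, and helper are invented for illustration, not 
actual Ironic code), gating on the agent version reported at lookup might look 
like:

```python
# Gate each optional feature on the reported IPA version, warning when a
# feature has to be turned off for an older agent.
import logging

LOG = logging.getLogger(__name__)

# feature -> minimum (major, minor) agent version that supports it
FEATURE_MIN_VERSION = {
    "raid_config": (1, 2),
    "fast_track_deploy": (1, 4),
}

def supported_features(agent_version):
    """Return the set of features usable with this agent version."""
    enabled = set()
    for feature, minimum in sorted(FEATURE_MIN_VERSION.items()):
        if agent_version >= minimum:
            enabled.add(feature)
        else:
            LOG.warning("Skipping %s: agent version %s < required %s",
                        feature, agent_version, minimum)
    return enabled

print(sorted(supported_features((1, 3))))  # -> ['raid_config']
```

Under this scheme, old Ironic with a newer IPA just works (the agent version 
satisfies every check), while new Ironic with an old IPA degrades gracefully 
instead of failing mid-deploy.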

Sam

From: Dmitry Tantsur 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 2 June 2016 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?


On 2 June 2016 at 10:19 PM, "Loo, Ruby"  wrote:
>
> Hi,
>
> I recently reviewed a patch [1] that is trying to address an issue with 
> ironic (master) talking to a ramdisk that has a mitaka IPA lurking around.
>
> It made me think that IPA may no longer be a teenager (yay, boo). IPA now has 
> a stable branch. I think it is time it grows up and acts responsibly. Ironic 
> needs to know which era of IPA it is talking to. Or conversely, does ironic 
> want to specify which microversion of IPA it wants to use? (Sorry, Dmitry, I 
> realize you are cringing.)
With versioning in place we'll have to fix one IPA version in ironic. Meaning, 
as soon as we introduce a new feature, we have to explicitly break 
compatibility with old ramdisk by requesting a version it does not support. 
Even if the feature itself is optional. Or we have to wait some long time 
before using new IPA features in ironic. I hate both options.
Well, or we can use some different versioning procedure :)
>
> Has anyone thought more than I have about this (i.e., more than 2ish minutes)?
>
> If the solution (whatever it is) is going to take a long time to implement, 
> is there anything we can do in the short term (ie, in this cycle)?
>
> --ruby
>
> [1] https://review.openstack.org/#/c/319183/
>
>
>




Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Assaf Muller
On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff  wrote:
> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>> I'm trying to reconcile differences and similarities between OVN and
>> OpenDayLight in my head. Can someone help me compare these two technologies
>> and explain if they solve the same problem, or if there are fundamental
>> differences between them?
>
> OVN implements network virtualization for clouds of VMs or containers or
> a mix.  Open Daylight is a platform for managing networks that can do
> anything you want.

That is true, but when considering a Neutron backend for OpenStack
deployments, people choose a subset of OpenDaylight projects and the
end result is a solution that is comparable in scope and feature set.
There are objective differences in where the projects are in their
lifetime, the HA architecture, the project's consistency model between
the neutron-server process and the backend, the development velocity,
the community size and the release model.

>



Re: [openstack-dev] [openstack-ansible] Mid-cycle date selection (need input!)

2016-06-09 Thread Truman, Travis
I'm okay with either 1 or 2. Thanks for running with this, Major.

On 6/9/16, 2:51 PM, "Major Hayden"  wrote:

>
>Hey folks,
>
>I've been able to secure a few dates at Rackspace's headquarters in San
>Antonio, Texas:
>
>  1) August 10-12
>  2) August 22-26
>  3) August 29 - September 2
>
>During the meeting earlier today, #3 was determined to cause a lot of
>conflicts for people.  #1 seems to be the most preferred.  I have emails
>out to ask about deals on local hotels and I'm waiting to hear back on
>those.
>
>The room should seat about 20-25 people and we would have at least one
>projector.
>
>Please reply with your thoughts and a date preference!  Once we get that
>sorted out, we can fire up an etherpad for everyone to sign up for a spot.
>
>- --
>Major Hayden
>
>




Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Joshua Harlow

Jay Pipes wrote:

On 06/07/2016 02:34 PM, Joshua Harlow wrote:

I'll work on this list, as some folks are starting to try to
connect ironic (IMHO without nova, because well kubernetes is enough
like nova that there isn't a need for 2-layers of nova-like-systems at
that point) into kubernetes as a 'resource provider'. I'm sure
attempting to do that will expose a bunch of specific recommendations
soon enough.


Good luck with that.


Thank you very much for the blessings sir jay ;)



I think you'll find that all of the things that the Nova community has
spent the last 4 years working on for the "enterprise" pet VMs,
supporting NFV/network I/O-sensitive workloads with NUMA affinity and L2
networking constraints, and non-12-factor-app-from-the-start things are
going to want to be addressed in k8s [1].

I won't at all be surprised to see the k8s community three years from
now be where Nova is right now with regards to supporting these terrible
stateful applications.


Ya, there will always be this 'temptation', and I know it's hard to 
resist; there is always a temptress in the room, and this is always one of 
those :-P




k8s and mesos are all nice and hot right now, as is anything
container-related. Just because something is trendy and hot, however,
does not suddenly make the problem domains that more traditional
stateful applications find themselves in magically disappear.

Best,
-jay

[1] https://github.com/kubernetes/kubernetes/issues/260





Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-09 Thread Michael Barton
On Thu, Jun 9, 2016 at 9:58 AM, Ben Meyer  wrote:

>
> uvloop (first commit 2015-11-01) is newer than Swift's hummingbird
> (2015-04-20, based on
>
> https://github.com/openstack/swift/commit/a0e300df180f7f4ca64fc1eaf3601a1a73fc68cb
> and github network graph) so it would not have been part of the
> consideration.
>

And it still wouldn't be, since it doesn't solve the problem.

- Mike


[openstack-dev] [openstack-ansible] Mid-cycle date selection (need input!)

2016-06-09 Thread Major Hayden

Hey folks,

I've been able to secure a few dates at Rackspace's headquarters in San 
Antonio, Texas:

  1) August 10-12
  2) August 22-26
  3) August 29 - September 2

During the meeting earlier today, #3 was determined to cause a lot of conflicts 
for people.  #1 seems to be the most preferred.  I have emails out to ask about 
deals on local hotels and I'm waiting to hear back on those.

The room should seat about 20-25 people and we would have at least one 
projector.

Please reply with your thoughts and a date preference!  Once we get that sorted 
out, we can fire up an etherpad for everyone to sign up for a spot.

- --
Major Hayden



Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-09 Thread Chris Friesen

On 06/09/2016 05:15 AM, Paul Michali wrote:

1) On the host, I was seeing 32768 huge pages, of 2MB size.


Please check the number of huge pages _per host numa node_.


2) I changed mem_page_size from 1024 to 2048 in the flavor, and then when VMs
were created, they were being evenly assigned to the two NUMA nodes. Each using
1024 huge pages. At this point I could create more than half, but when there
were 1945 pages left, it failed to create a VM. Did it fail because the
mem_page_size was 2048 and the available pages were 1945, even though we were
only requesting 1024 pages?


I do not think that "1024" is a valid page size (at least for x86).

Be careful about units. mem_page_size is in units of KB.  For x86, valid 
numerical sizes are 4, 2048, and 1048576.  (For 4KB, 2MB, and 1GB hugepages.) 
The flavor specifies memory size in MB.



3) Related to #2, is there a relationship between mem_page_size, the allocation
of VMs to NUMA nodes, and the flavor size? IOW, if I use the medium flavor
(4GB), will I need a larger mem_page_size? (I'll play with this variation, as
soon as I can). Gets back to understanding how the scheduling determines how to
assign the VMs.


Valid mem_page_size values are determined by the host CPU.  You do not need a 
larger page size for flavors with larger memory sizes.


VMs with numa topology (hugepages, pinned CPUs, pci devices, etc.) will be 
pinned to a single host numa node.)
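
To make the units concrete, here is a small sketch (not Nova code; the helper 
and constant are invented for illustration) of the relationship described 
above: flavor memory is in MB, mem_page_size is in KB, and on x86 the only 
valid numeric page sizes are 4, 2048, and 1048576 KB:

```python
# Compute how many host pages a flavor consumes, validating the page size.

VALID_X86_PAGE_SIZES_KB = {4, 2048, 1048576}  # 4KB, 2MB, 1GB

def pages_for_flavor(flavor_ram_mb, mem_page_size_kb):
    """Number of host pages a flavor consumes on its NUMA node."""
    if mem_page_size_kb not in VALID_X86_PAGE_SIZES_KB:
        raise ValueError("invalid x86 page size: %s KB" % mem_page_size_kb)
    ram_kb = flavor_ram_mb * 1024
    if ram_kb % mem_page_size_kb:
        raise ValueError("flavor RAM is not a multiple of the page size")
    return ram_kb // mem_page_size_kb

# a 2GB (2048MB) flavor backed by 2MB pages pins 1024 hugepages on one node
print(pages_for_flavor(2048, 2048))  # -> 1024
```

This also matches the earlier observations: mem_page_size=1024 is not a valid 
x86 size, and each 2GB VM consumed 1024 of the host's 2MB hugepages on a 
single NUMA node.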



Chris




Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Tim Bell
If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the antimatter factory and 
superconducting magnet test labs before or afterwards, if anyone is interested…

Tim

From: Spyros Trigazis 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> >
>




Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Jay Pipes

On 06/07/2016 02:34 PM, Joshua Harlow wrote:

I'll work on this list, as some folks are starting to try to
connect ironic (IMHO without nova, because well kubernetes is enough
like nova that there isn't a need for 2-layers of nova-like-systems at
that point) into kubernetes as a 'resource provider'. I'm sure
attempting to do that will expose a bunch of specific recommendations
soon enough.


Good luck with that.

I think you'll find that all of the things that the Nova community has 
spent the last 4 years working on for the "enterprise" pet VMs, 
supporting NFV/network I/O-sensitive workloads with NUMA affinity and L2 
networking constraints, and non-12-factor-app-from-the-start things are 
going to want to be addressed in k8s [1].


I won't at all be surprised to see the k8s community three years from 
now be where Nova is right now with regards to supporting these terrible 
stateful applications.


k8s and mesos are all nice and hot right now, as is anything 
container-related. Just because something is trendy and hot, however, 
does not suddenly make the problem domains that more traditional 
stateful applications find themselves in magically disappear.


Best,
-jay

[1] https://github.com/kubernetes/kubernetes/issues/260



Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Ben Pfaff
On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
> I'm trying to reconcile differences and similarities between OVN and
> OpenDayLight in my head. Can someone help me compare these two technologies
> and explain if they solve the same problem, or if there are fundamental
> differences between them?

OVN implements network virtualization for clouds of VMs or containers or
a mix.  Open Daylight is a platform for managing networks that can do
anything you want.



Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Jim Rollenhagen
On Thu, Jun 09, 2016 at 06:17:56PM +0100, Lucas Alvares Gomes wrote:
> Hi,
> 
> Thanks for writing it down Jim.
> 
> > So, I've been thinking about this quite a bit. We've also talked about
> > doing a v2 API (as evil as that may be) in Ironic here and there. We've
> > had lots of lessons learned from the v1 API, mostly that our API is
> > absolutely terrible for humans. I'd love to fix that (whether that
> > requires a v2 API or not is unclear, so don't focus on that).
> >
> > I've noticed that people keep talking about the Nova driver API
> > not being public/stable/whatever in this thread - let's ignore that and
> > think bigger.
> >
> > So, there's two large use cases for ironic that we support today:
> >
> > * Ironic as a backend to nova. Operators still need to interact with the
> >   Ironic API for management, troubleshooting, and fixing issues that
> >   computers do not handle today.
> >
> > * Ironic standalone - by this I mean ironic without nova. The primary
> >   deployment method here is using Bifrost, and I also call it the
> >   "better than cobbler" case. I'm not sure if people are using this
> >   without bifrost, or with other non-nova services, today. Users in this
> >   model, as I understand things, do not interact with the Ironic API
> >   directly (except maybe for troubleshooting).
> >
> > There's other use cases I would like to support:
> >
> > * Ironic standalone, without Bifrost. I would love for a deployer to be
> >   able to stand up Ironic as an end-user facing API, probably with
> >   Keystone, maybe with Neutron/Glance/Swift if needed. This would
> >   require a ton of discussion and work (e.g. ironic has no concept of
> >   tenants/projects today, we might want a scheduler, a concept of an
> >   instance, etc) and would be a very long road. The ideal solution to
> >   this is to break out the Compute API and scheduler to be separate from
> >   Nova, but that's an even longer road, so let's pretend I didn't say
> >   that and not devolve this thread into that conversation (yet).
> >
> > * Ironic as a backend to other things. Josh pointed out kubernetes
> >   somewhere, I'd love to be an official backend there. Heat today goes
> >   through Nova to get an ironic instance, it seems reasonable to have
> >   heat talk directly to ironic. Things like that. The amount of work
> >   here might depend on the application using ironic (e.g. I think k8s
> >   has it's own scheduler, heat does not, right?).
> >
> > So all that said, I think there is one big step we can take in the
> > short-term that works for all of these use cases: make our API better.
> > Make it simpler. Take a bunch of the logic in the Nova driver, and put
> > it in our API instead. spawn() becomes /v1/nodes/foo/deploy or
> > something, etc (I won't let us bikeshed those specifics in this thread).
> > Just doing that allows us to remove a bunch of code from a number of
> > places (nova, bifrost, shade, tempest(?)) and make those simpler. It
> > allows direct API users to more easily deploy things, making one API
> > call instead of a bunch (we could even create Neutron ports and such for
> > them). It allows k8s and friends to write less code. Oh, let's also stop
> > directly exposing state machine transitions as API actions, that's
> > crazy, kthx.
> >
> > I think this is what Josh is trying to get at, except maybe with a
> > separate API service in between, which doesn't sound very desirable to
> > me.
> >
> > Thoughts on this?
> >
> > Additionally, in the somewhat-short term, I'd like us to try to
> > enumerate the major use cases we're trying to solve, and make those use
> > cases ridiculously simple to deploy. Ironic is quickly becoming a
> > tangled mess of configuration options and tweaking surrounding services
> > (nova, neutron) to deploy it. Once it's figured out, it works very well.
> > However, it's incredibly difficult to figure out how to get there.
> >
> > Ultimately, I'd like someone that wants to deploy ironic in a common use
> > case, with off-the-shelf hardware, to be able to get a POC up and
> > running in a matter of hours, not days or weeks.
> >
> > Who's in? :)
> >
> 
> I agree in general with the idea but I think it needs a tad more
> context. We need to remember that Ironic (ex-Nova Baremetal) was
> created to fill a gap in OpenStack that was missing for TripleO
> project to get off the ground. That was the problem being solved and
> these aspects are reflected in the ReST API: Being admin-only, not
> "human-friendly" (standalone came later), etc...
> 
> > * Ironic as a backend to other things. Josh pointed out kubernetes
> >   somewhere, I'd love to be an official backend there. Heat today goes
> >   through Nova to get an ironic instance, it seems reasonable to have
> >   heat talk directly to ironic. Things like that. The amount of work
> >   here might depend on the application using ironic (e.g. I think k8s
> >   has it's own scheduler, heat does not, right?).
> 
> There was an 

Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Lucas Alvares Gomes
Hi,

Thanks for writing it down Jim.

> So, I've been thinking about this quite a bit. We've also talked about
> doing a v2 API (as evil as that may be) in Ironic here and there. We've
> had lots of lessons learned from the v1 API, mostly that our API is
> absolutely terrible for humans. I'd love to fix that (whether that
> requires a v2 API or not is unclear, so don't focus on that).
>
> I've noticed that people keep talking about the Nova driver API
> not being public/stable/whatever in this thread - let's ignore that and
> think bigger.
>
> So, there's two large use cases for ironic that we support today:
>
> * Ironic as a backend to nova. Operators still need to interact with the
>   Ironic API for management, troubleshooting, and fixing issues that
>   computers do not handle today.
>
> * Ironic standalone - by this I mean ironic without nova. The primary
>   deployment method here is using Bifrost, and I also call it the
>   "better than cobbler" case. I'm not sure if people are using this
>   without bifrost, or with other non-nova services, today. Users in this
>   model, as I understand things, do not interact with the Ironic API
>   directly (except maybe for troubleshooting).
>
> There's other use cases I would like to support:
>
> * Ironic standalone, without Bifrost. I would love for a deployer to be
>   able to stand up Ironic as an end-user facing API, probably with
>   Keystone, maybe with Neutron/Glance/Swift if needed. This would
>   require a ton of discussion and work (e.g. ironic has no concept of
>   tenants/projects today, we might want a scheduler, a concept of an
>   instance, etc) and would be a very long road. The ideal solution to
>   this is to break out the Compute API and scheduler to be separate from
>   Nova, but that's an even longer road, so let's pretend I didn't say
>   that and not devolve this thread into that conversation (yet).
>
> * Ironic as a backend to other things. Josh pointed out kubernetes
>   somewhere, I'd love to be an official backend there. Heat today goes
>   through Nova to get an ironic instance, it seems reasonable to have
>   heat talk directly to ironic. Things like that. The amount of work
>   here might depend on the application using ironic (e.g. I think k8s
>   has it's own scheduler, heat does not, right?).
>
> So all that said, I think there is one big step we can take in the
> short-term that works for all of these use cases: make our API better.
> Make it simpler. Take a bunch of the logic in the Nova driver, and put
> it in our API instead. spawn() becomes /v1/nodes/foo/deploy or
> something, etc (I won't let us bikeshed those specifics in this thread).
> Just doing that allows us to remove a bunch of code from a number of
> places (nova, bifrost, shade, tempest(?)) and make those simpler. It
> allows direct API users to more easily deploy things, making one API
> call instead of a bunch (we could even create Neutron ports and such for
> them). It allows k8s and friends to write less code. Oh, let's also stop
> directly exposing state machine transitions as API actions, that's
> crazy, kthx.
>
> I think this is what Josh is trying to get at, except maybe with a
> separate API service in between, which doesn't sound very desirable to
> me.
>
> Thoughts on this?
>
> Additionally, in the somewhat-short term, I'd like us to try to
> enumerate the major use cases we're trying to solve, and make those use
> cases ridiculously simple to deploy. Ironic is quickly becoming a
> tangled mess of configuration options and tweaking surrounding services
> (nova, neutron) to deploy it. Once it's figured out, it works very well.
> However, it's incredibly difficult to figure out how to get there.
>
> Ultimately, I'd like someone that wants to deploy ironic in a common use
> case, with off-the-shelf hardware, to be able to get a POC up and
> running in a matter of hours, not days or weeks.
>
> Who's in? :)
>

I agree in general with the idea but I think it needs a tad more
context. We need to remember that Ironic (ex-Nova Baremetal) was
created to fill a gap in OpenStack that was missing for TripleO
project to get off the ground. That was the problem being solved and
these aspects are reflected in the ReST API: Being admin-only, not
"human-friendly" (standalone came later), etc...

> * Ironic as a backend to other things. Josh pointed out kubernetes
>   somewhere, I'd love to be an official backend there. Heat today goes
>   through Nova to get an ironic instance, it seems reasonable to have
>   heat talk directly to ironic. Things like that. The amount of work
>   here might depend on the application using ironic (e.g. I think k8s
>   has it's own scheduler, heat does not, right?).

There was an attempt to do that before in heat, but it was refused
at the time because it didn't fit the context above [0]. That wasn't
the goal/scope of the project.

Now v1 is (almost) 3 years old, and during this time Ironic has evolved
a _lot_; it covers way more 

Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Jim Rollenhagen
On Thu, Jun 09, 2016 at 06:17:56PM +0100, Lucas Alvares Gomes wrote:
> Hi,
> 
> Thanks for writing it down Jim.
> 
> > So, I've been thinking about this quite a bit. We've also talked about
> > doing a v2 API (as evil as that may be) in Ironic here and there. We've
> > had lots of lessons learned from the v1 API, mostly that our API is
> > absolutely terrible for humans. I'd love to fix that (whether that
> > requires a v2 API or not is unclear, so don't focus on that).
> >
> > I've noticed that people keep talking about the Nova driver API
> > not being public/stable/whatever in this thread - let's ignore that and
> > think bigger.
> >
> > So, there's two large use cases for ironic that we support today:
> >
> > * Ironic as a backend to nova. Operators still need to interact with the
> >   Ironic API for management, troubleshooting, and fixing issues that
> >   computers do not handle today.
> >
> > * Ironic standalone - by this I mean ironic without nova. The primary
> >   deployment method here is using Bifrost, and I also call it the
> >   "better than cobbler" case. I'm not sure if people are using this
> >   without bifrost, or with other non-nova services, today. Users in this
> >   model, as I understand things, do not interact with the Ironic API
> >   directly (except maybe for troubleshooting).
> >
> > There's other use cases I would like to support:
> >
> > * Ironic standalone, without Bifrost. I would love for a deployer to be
> >   able to stand up Ironic as an end-user facing API, probably with
> >   Keystone, maybe with Neutron/Glance/Swift if needed. This would
> >   require a ton of discussion and work (e.g. ironic has no concept of
> >   tenants/projects today, we might want a scheduler, a concept of an
> >   instance, etc) and would be a very long road. The ideal solution to
> >   this is to break out the Compute API and scheduler to be separate from
> >   Nova, but that's an even longer road, so let's pretend I didn't say
> >   that and not devolve this thread into that conversation (yet).
> >
> > * Ironic as a backend to other things. Josh pointed out kubernetes
> >   somewhere, I'd love to be an official backend there. Heat today goes
> >   through Nova to get an ironic instance, it seems reasonable to have
> >   heat talk directly to ironic. Things like that. The amount of work
> >   here might depend on the application using ironic (e.g. I think k8s
> >   has it's own scheduler, heat does not, right?).
> >
> > So all that said, I think there is one big step we can take in the
> > short-term that works for all of these use cases: make our API better.
> > Make it simpler. Take a bunch of the logic in the Nova driver, and put
> > it in our API instead. spawn() becomes /v1/nodes/foo/deploy or
> > something, etc (I won't let us bikeshed those specifics in this thread).
> > Just doing that allows us to remove a bunch of code from a number of
> > places (nova, bifrost, shade, tempest(?)) and make those simpler. It
> > allows direct API users to more easily deploy things, making one API
> > call instead of a bunch (we could even create Neutron ports and such for
> > them). It allows k8s and friends to write less code. Oh, let's also stop
> > directly exposing state machine transitions as API actions, that's
> > crazy, kthx.
> >
> > I think this is what Josh is trying to get at, except maybe with a
> > separate API service in between, which doesn't sound very desirable to
> > me.
> >
> > Thoughts on this?
> >
> > Additionally, in the somewhat-short term, I'd like us to try to
> > enumerate the major use cases we're trying to solve, and make those use
> > cases ridiculously simple to deploy. Ironic is quickly becoming a
> > tangled mess of configuration options and tweaking surrounding services
> > (nova, neutron) to deploy it. Once it's figured out, it works very well.
> > However, it's incredibly difficult to figure out how to get there.
> >
> > Ultimately, I'd like someone that wants to deploy ironic in a common use
> > case, with off-the-shelf hardware, to be able to get a POC up and
> > running in a matter of hours, not days or weeks.
> >
> > Who's in? :)
> >
> 
> I agree in general with the idea but I think it needs a tad more
> context. We need to remember that Ironic (ex-Nova Baremetal) was
> created to fill a gap in OpenStack that was missing for TripleO
> project to get off the ground. That was the problem being solved and
> these aspects are reflected in the ReST API: Being admin-only, not
> "human-friendly" (standalone came later), etc...

Sorry, I didn't mean to slag on people here. In fact, I tried to come up
with a way to say "no offense" but couldn't figure the words out. Ironic
did start with a very specific use case, it's come a super long way, and
you all did what you had to do to get things going. For that I'm forever
indebted to you. :)

> > * Ironic as a backend to other things. Josh pointed out kubernetes
> >   somewhere, I'd love to be an official backend there. Heat 

[openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread rezroo
I'm trying to reconcile differences and similarities between OVN and 
OpenDayLight in my head. Can someone help me compare these two 
technologies and explain if they solve the same problem, or if there are 
fundamental differences between them?


Thanks,

Reza


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port isActive

2016-06-09 Thread Mohammad Banikazemi

When you write "Neutron has the ability already of sending an event as a
REST call to notify a third party", that third party can be Nova only as of
now and notifying any other party requires changes to Neutron. It seems
that one needs to add a notifier for Kuryr similar to the one that exists
for Nova that you have pointed to here: [1]. Furthermore, Neutron needs to
be changed to call this new notifier. I suppose one could make the current
Nova notifier more generic and make the third party (the client used to
notify it) configurable.
Have I understood this correctly, or is there such a generic framework
already in place?

Best,

Mohammad
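For reference, option 1 from the quoted message below (polling the Neutron port status until it becomes ACTIVE) can be sketched as a small wait loop. This is illustrative, not Kuryr code: the status-fetching callable is injected so the loop can wrap, say, python-neutronclient's show_port, and the names and defaults are assumptions.

```python
import time


def wait_for_port_active(fetch_status, timeout=300, interval=2):
    """Poll until fetch_status() returns 'ACTIVE'.

    fetch_status: callable returning the port's current status string,
    e.g. lambda: neutron.show_port(port_id)['port']['status'].
    Returns True if the port became ACTIVE, False on timeout; the
    caller decides whether a timeout is fatal, mirroring Nova's
    vif_plugging_is_fatal option.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch_status()
        if status == 'ACTIVE':
            return True
        if status == 'ERROR':
            # Hypothetical choice: fail fast rather than wait out the timeout.
            raise RuntimeError('port went to ERROR while waiting')
        time.sleep(interval)
    return False
```

As noted in the thread, polling is the simplest option but may raise scalability concerns; the message-queue and l2-agent-extension options avoid the polling cost at the price of more coupling.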



From:   Salvatore Orlando 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   06/08/2016 01:06 PM
Subject:Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron
Port is Active



Neutron has the ability already of sending an event as a REST call to
notify a third party that a port became active [1]
This is used by Nova to hold on booting instances until network has been
wired.
Perhaps kuryr could leverage this without having to tap into the AMQP bus,
as that would be implementation-specific - since there would be an
assumption about having a plugin that communicates with the reference impl
l2 agent.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py



On 8 June 2016 at 17:23, Mohammad Banikazemi  wrote:
  For the Kuryr project, in order to support blocking until vifs are
  plugged in (that is adding config options similar to the following
  options define in Nova: vif_plugging_is_fatal and vif_plugging_timeout),
  we need to detect that the Neutron plugin being used is done with
  plugging a given vif.

  Here are a few options:

  1- The simplest approach seems to be polling for the status of the
  Neutron port to become Active. (This may lead to scalability issues but
  short of having a specific goal for scalability, it is not clear that
  will be the case.)
  2- Alternatively, We could subscribe to the message queue and wait for
  such a port update event.
  3- It was also suggested that we could use l2 agent extension to detect
  such an event but that seems to limit us to certain Neutron plugins and
  therefore not acceptable.

  I was wondering if there are other and better options.

  Best,

  Mohammad

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][SFC]

2016-06-09 Thread Henry Fourie
Alioune,
   The logical-source-port refers to a Neutron port of the VM that
originates the traffic that is to be processed by the port-chain.

-Louis
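To make the creation order concrete, the minimal networking-sfc sequence discussed in this thread can be sketched as a small command builder. All resource names and the address below are illustrative placeholders, and the exact CLI flags should be checked against your networking-sfc version:

```python
def sfc_chain_commands(logical_src_port, ingress, egress, dst_ip):
    """Return the neutron CLI calls, in order, for a minimal port chain.

    logical_src_port is the Neutron port of the VM originating the
    traffic (the --logical-source-port the OVS driver requires).
    """
    return [
        # 1. Pair the service function's ingress/egress ports.
        'neutron port-pair-create --ingress=%s --egress=%s PP1'
        % (ingress, egress),
        # 2. Group the pair (groups allow load-balancing over pairs).
        'neutron port-pair-group-create --port-pair PP1 PPG1',
        # 3. Classify traffic, anchored to the source VM's port.
        'neutron flow-classifier-create --ethertype IPv4 --protocol tcp'
        ' --destination-ip-prefix %s --logical-source-port %s FC1'
        % (dst_ip, logical_src_port),
        # 4. Tie the group and classifier into a chain.
        'neutron port-chain-create --port-pair-group PPG1'
        ' --flow-classifier FC1 PC1',
    ]
```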

From: Alioune [mailto:baliou...@gmail.com]
Sent: Thursday, June 09, 2016 6:50 AM
To: Mohan Kumar
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][SFC]

Thanks Mohan,

After setting service_plugins and adding the sfc tables to the neutron db, I can 
create port-pair and port-pair-group, but classifier creation still claims a 
logical-source-port parameter.

neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix 
55.55.55.2/32  --destination-ip-prefix 
55.55.55.9/32  --protocol tcp  --source-port 22:22  
--destination-port 1:65000 FC1
ERROR:
neutron flow-classifier-create: error: argument --logical-source-port is 
required
Try 'neutron help flow-classifier-create' for more information.

Can someone please explain what --logical-source-port corresponds to?
Does the classifier require a port-create, like an SF?

Regards,


On 9 June 2016 at 09:21, Mohan Kumar 
> wrote:
Alioune,

networking-sfc resources are not installed / not reachable. If the installation 
is okay, you possibly missed the service_plugins entry in neutron.conf (in case 
of a manual networking-sfc installation)

it should be ,

service_plugins = 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin

and restart q-svc services in screen -x

Thanks.,
Mohankumar.N

On Thu, Jun 9, 2016 at 12:58 AM, Alioune 
> wrote:
I've switched from devstack to a normal deployment of openstack/mitaka and 
neutron-l2 agent is working fine with sfc. I can boot instances, create ports.
However I can not create neither flow-classifier nor port-pair ...

neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix 
22.1.20.1/32 --destination-ip-prefix 
172.4.5.6/32 --protocol tcp --source-port 23:23 
--destination-port 100:100 FC1

returns: neutron flow-classifier-create: error: argument --logical-source-port 
is required
Try 'neutron help flow-classifier-create' for more information.

 neutron port-pair-create --ingress=p1 --egress=p2 PP1
404 Not Found

The resource could not be found.

Neutron server returns request_ids: ['req-1bfd0983-4a61-4b32-90b3-252004d90e65']

neutron --version
4.1.1

p1,p2,p3,p4 have already been created, I can ping instances attached to these 
ports.
Since I've not installed networking-sfc, are there additional config to set in 
neutron config files ?
Or is it due to neutron-client version ?

Regards

On 8 June 2016 at 20:31, Mohan Kumar 
> wrote:

The neutron agent is not able to fetch details from ovsdb. Could you check the 
options below:
1. Restart ovsdb-server and execute ovs-vsctl list-br.
2. Execute ovs-vsctl list-br manually and try to check the error.

3. Could be an ovs cleanup issue; please check the output of sudo service 
openvswitch restart and /etc/init.d/openvswitch restart, both should be the same

Thanks.,
Mohankumar.N
On Jun 7, 2016 6:04 PM, "Alioune" 
> wrote:
Hi Mohan/Cathy
 I've installed now ovs 2.4.0 and followed 
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but I got 
this error :
Regards,

+ neutron-ovs-cleanup
2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging enabled!
2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-] 
/usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl [-] Unable 
to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'list-br'].
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Traceback 
(most recent call last):
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in run_vsctl
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl 
log_fail_as_error=False).rstrip()
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl raise 
RuntimeError(m)
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl RuntimeError:
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Command: 
['sudo', 'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'list-br']
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Exit code: 1
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
2016-06-07 11:25:36.505 22147 ERROR 

[openstack-dev] [all] [api] POST /api-wg/news

2016-06-09 Thread Chris Dent


Greetings OpenStack community,

Nothing new for guidelines this week. In this week's api-wg meeting we had some 
good discussions about the API for image visibility in glance[1] and the 
forthcoming Glare API[2], including the wisdom: if you're ever going to do 
microversions, then you better microversion now.

# Recently merged guidelines

Nothing new in the last two weeks.

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.

None this week

# Guidelines currently under review

These are guidelines that the working group are debating and working on for 
consistency and language. We encourage any interested parties to join in the 
conversation.

* Add the beginning of a set of guidlines for URIs
   https://review.openstack.org/#/c/322194/
* Add description of pagination parameters
   https://review.openstack.org/190743
* Add guideline for Experimental APIs
   https://review.openstack.org/273158
* Add version discovery guideline
   https://review.openstack.org/254895

Note that some of these guidelines were introduced quite a long time ago and 
need to either be refreshed by their original authors, or adopted by new 
interested parties.

# API Impact reviews currently open

Reviews marked as APIImpact [3] are meant to help inform the working group 
about changes which would benefit from wider inspection by group members and 
liaisons. While the working group will attempt to address these reviews 
whenever possible, it is highly recommended that interested parties attend the 
API-WG meetings [4] to promote communication surrounding their reviews.

Thanks for reading and see you next week!

[1] https://review.openstack.org/#/c/271019/
[2] https://review.openstack.org/#/c/283136/
[3] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z
[4] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][SFC]

2016-06-09 Thread Mohan Kumar
Alioune,

   If you use the networking-sfc master code, you can create a
flow-classifier without logical-source-port specified. But if the
back-end driver is OVS, you will end up with a failure in the
ovs_driver checks. If I remember correctly, the logical_source_port
restriction is there to avoid return packets getting reclassified.

Thanks.,
Mohankumar.N


On Thu, Jun 9, 2016 at 8:27 PM, Alioune  wrote:

> Mohan,
>
> I would like to redirect all http flows in tenant network to the
> port-chain and according to your explanation I do specify the neutron-port
> of source vm in the classifier.
>
> is there a generic way to put into the chain all traffic going to a web
> server in the tenant network? (to avoid setting the neutron-port of the
> source vm)
>
> Regards,
>
> On 9 June 2016 at 16:32, Mohan Kumar  wrote:
>
>> Alioune,
>>
>>    logical-source-port is the egress neutron-port of the source vm; typically
>>  the flow-classifier will classify packets coming to this neutron port and
>> forward them to the rest of the port-chain if the other classifier
>> conditions match.
>>
>> Thanks.,
>> Mohankumar.N
>>
>>
>>
>> On Thu, Jun 9, 2016 at 7:20 PM, Alioune  wrote:
>>
>>> Thanks Mohan,
>>>
>>> After setting service_plugins and adding sfc tables to neutrondb, I can
>>> create port-pair, port-pair-group but classifier creation still claim a
>>> logical-source-port parameter.
>>>
>>> neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
>>> 55.55.55.2/32  --destination-ip-prefix 55.55.55.9/32  --protocol tcp
>>>  --source-port 22:22  --destination-port 1:65000 FC1
>>> ERROR:
>>> neutron flow-classifier-create: error: argument --logical-source-port is
>>> required
>>> Try 'neutron help flow-classifier-create' for more information.
>>>
>>> Please someone can explain what does --logical-source-port correspond to
>>> ?
>>> Does the classifier require port-create like SF ?
>>>
>>> Regards,
>>>
>>>
>>> On 9 June 2016 at 09:21, Mohan Kumar  wrote:
>>>
 Alioune,

 networking-sfc  resources not installed / not reachable , If installation
 is okay, Possibly you may missed service_plugins entry in *neutron.conf
 *( in case of manual networking-sfc installation)

 it should be ,

 *service_plugins =
 neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin*

 *and restart q-svc services in screen -x *

 *Thanks.,*
 *Mohankumar.N *

 On Thu, Jun 9, 2016 at 12:58 AM, Alioune  wrote:

> I've switched from devstack to a normal deployment of openstack/mitaka
> and neutron-l2 agent is working fine with sfc. I can boot instances, 
> create
> ports.
> However I can not create neither flow-classifier nor port-pair ...
>
> neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
> 22.1.20.1/32 --destination-ip-prefix 172.4.5.6/32 --protocol tcp
> --source-port 23:23 --destination-port 100:100 FC1
>
> returns: neutron flow-classifier-create: error: argument
> --logical-source-port is required
> Try 'neutron help flow-classifier-create' for more information.
>
>  neutron port-pair-create --ingress=p1 --egress=p2 PP1
> 404 Not Found
>
> The resource could not be found.
>
> Neutron server returns request_ids:
> ['req-1bfd0983-4a61-4b32-90b3-252004d90e65']
>
> neutron --version
> 4.1.1
>
> p1,p2,p3,p4 have already been created, I can ping instances attached
> to these ports.
> Since I've not installed networking-sfc, are there additional config
> to set in neutron config files ?
> Or is it due to neutron-client version ?
>
> Regards
>
> On 8 June 2016 at 20:31, Mohan Kumar 
> wrote:
>
>> neutron agent not able to fetch details from ovsdb . Could you check
>> below options 1.restart ovsdb-server and execute ovs_vsctl list-br  2.
>> execute ovs- vsctl list-br manually and try to check error.
>>
>> 3. Could be ovs cleanup issue , please check the output of sudo
>> service openvswitch restart and /etc/init.d/openvswich** restart , both
>> should be same
>>
>> Thanks.,
>> Mohankumar.N
>> On Jun 7, 2016 6:04 PM, "Alioune"  wrote:
>>
>>> Hi Mohan/Cathy
>>>  I've installed now ovs 2.4.0 and followed
>>> https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but
>>> I got this error :
>>> Regards,
>>>
>>> + neutron-ovs-cleanup
>>> 2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
>>> enabled!
>>> 2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
>>> /usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
>>> 2016-06-07 11:25:36.505 22147 

Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-09 Thread Jim Rollenhagen
> >1.) Nova <-> ironic interactions generally seem terrible?
> I don't know if I'd call it terrible, but there's friction. Things that
> are unchangable on hardware are just software configs in vms (like mac
> addresses, overlays, etc), and things that make no sense in VMs are
> pretty standard on servers (trunked vlans, bonding, etc).
> 
> One way we've gotten around it is by using Ironic standalone via
> Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> and includes playbooks to build config drives and deploy images in a
> fairly rudimentary way without Nova.
> 
> I call this the "better than Cobbler" way of getting a toe into the
> Ironic waters.
> 
> [1] https://github.com/openstack/bifrost
> >>>Out of curiosity, why ansible vs turning
> >>>https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
> >>>(or something like it) into a tiny-wsgi-app (pick useful name here) that
> >>>has its own REST api (that looks pretty similar to the public functions
> >>>in that driver file)?
> >>
> >>That's an interesting idea. I think a reason Bifrost doesn't just import
> >>nova virt drivers is that they're likely _not_ a supported public API
> >>(despite not having _'s at the front). Also, a lot of the reason Bifrost
> >>exists is to enable users to get the benefits of all the baremetal
> >>abstraction work done in Ironic without having to fully embrace all of
> >>OpenStack's core. So while you could get a little bit of the stuff from
> >>nova (like config drive building), you'd still need to handle network
> >>address assignment, image management, etc. etc., and pretty soon you
> >>start having to run a tiny glance and a tiny neutron. The Bifrost way
> >>is the opposite: I just want a tiny Ironic, and _nothing_ else.
> >>
> >
> >Ya, I'm just thinking that at a certain point
> 
> Oops forgot to fill this out, was just thinking that at a certain point it
> might be easier to figure out how to extract that API (meh, if its public or
> private) and just have someone make an executive decision around ironic
> being a stand-alone thing or not (and a capable stand-alone thing, not a
> sorta-standalone-thing).

So, I've been thinking about this quite a bit. We've also talked about
doing a v2 API (as evil as that may be) in Ironic here and there. We've
had lots of lessons learned from the v1 API, mostly that our API is
absolutely terrible for humans. I'd love to fix that (whether that
requires a v2 API or not is unclear, so don't focus on that).

I've noticed that people keep talking about the Nova driver API
not being public/stable/whatever in this thread - let's ignore that and
think bigger.

So, there's two large use cases for ironic that we support today:

* Ironic as a backend to nova. Operators still need to interact with the
  Ironic API for management, troubleshooting, and fixing issues that
  computers do not handle today.

* Ironic standalone - by this I mean ironic without nova. The primary
  deployment method here is using Bifrost, and I also call it the
  "better than cobbler" case. I'm not sure if people are using this
  without bifrost, or with other non-nova services, today. Users in this
  model, as I understand things, do not interact with the Ironic API
  directly (except maybe for troubleshooting).

There's other use cases I would like to support:

* Ironic standalone, without Bifrost. I would love for a deployer to be
  able to stand up Ironic as an end-user facing API, probably with
  Keystone, maybe with Neutron/Glance/Swift if needed. This would
  require a ton of discussion and work (e.g. ironic has no concept of
  tenants/projects today, we might want a scheduler, a concept of an
  instance, etc) and would be a very long road. The ideal solution to
  this is to break out the Compute API and scheduler to be separate from
  Nova, but that's an even longer road, so let's pretend I didn't say
  that and not devolve this thread into that conversation (yet).

* Ironic as a backend to other things. Josh pointed out kubernetes
  somewhere, I'd love to be an official backend there. Heat today goes
  through Nova to get an ironic instance, it seems reasonable to have
  heat talk directly to ironic. Things like that. The amount of work
  here might depend on the application using ironic (e.g. I think k8s
  has its own scheduler, heat does not, right?).

So all that said, I think there is one big step we can take in the
short-term that works for all of these use cases: make our API better.
Make it simpler. Take a bunch of the logic in the Nova driver, and put
it in our API instead. spawn() becomes /v1/nodes/foo/deploy or
something, etc (I won't let us bikeshed those specifics in this thread).
Just doing that allows us to remove a bunch of code from a number of
places (nova, bifrost, shade, tempest(?)) and make those simpler. It
allows direct API users to more easily deploy things, making one API
call instead of 
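To make the "one API call" idea concrete, here is a minimal sketch of what a simplified deploy call might look like from a client's point of view. The `/v1/nodes/<node>/deploy` endpoint, the payload fields, and the helper below are all hypothetical — this is not the current Ironic API, just an illustration of collapsing the orchestration the Nova driver does today into a single request:

```python
import json

IRONIC_API = "http://ironic.example.com:6385"  # hypothetical endpoint


def build_deploy_request(node_uuid, image_ref, config_drive=None):
    """Compose a single, hypothetical 'deploy' call for a node.

    Today a client (the Nova driver, Bifrost, shade, ...) has to drive
    several node state transitions itself; the idea sketched here is
    that one request carries everything Ironic needs to deploy.
    """
    body = {"image_source": image_ref}
    if config_drive is not None:
        body["config_drive"] = config_drive
    return {
        "method": "POST",
        "url": "%s/v1/nodes/%s/deploy" % (IRONIC_API, node_uuid),
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }


if __name__ == "__main__":
    req = build_deploy_request("1be26c0b", "http://server/my-image.qcow2")
    print(req["method"], req["url"])
```

The point is only that the client-side surface shrinks to "build one request"; everything else (state machine, image handling) would live behind the API.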

Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread Juan Antonio Osorio
+1 on my side.
On 9 Jun 2016 18:10, "Emilien Macchi"  wrote:

> On Thu, Jun 9, 2016 at 10:03 AM, Steven Hardy  wrote:
> > Hi all,
> >
> > I've been in discussion with Martin André and Tomas Sedovic, who are
> > involved with the creation of the new tripleo-validations repo[1]
> >
> > We've agreed that rather than create another gerrit group, they can be
> > added to tripleo-core and agree to restrict +A to this repo for the time
> > being (hopefully they'll both continue to review more widely, and
> obviously
> > Tomas is a former TripleO core anyway, so welcome back! :)
> >
> > If folks feel strongly we should create another group we can, but this
> > seems like a low-overhead approach, and well aligned with the scope of
> the
> > repo, let me know if you disagree.
>
> +1 on my side too. I think in this case it's a good choice.
>
> > Also, while reviewing the core group[2] I noticed the following members
> who
> > are no longer active and should probably be removed:
> >
> > - Radomir Dopieralski
> > - Martyn Taylor
> > - Clint Byrum
> >
> > I know Clint is still involved with DiB (which has a separate core
> group),
> > but he's indicated he's no longer going to be directly involved in other
> > tripleo development, and AFAIK neither Martyn or Radomir are actively
> > involved in TripleO reviews - thanks to them all for their contribution,
> > we'll gladly add you back in the future should you wish to return :)
> >
> > Please let me know if there are any concerns or objections, if there are
> > none I will make these changes next week.
> >
> > Thanks,
> >
> > Steve
> >
> > [1] https://github.com/openstack/tripleo-validations
> > [2] https://review.openstack.org/#/admin/groups/190,members
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-09 Thread Steve Gordon
- Original Message -
> From: "Paul Michali" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, June 7, 2016 11:00:30 AM
> Subject: Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling
> 
> Anyone have any thoughts on the two questions below? Namely...
> 
> If the huge pages are 2M, we are creating a 2GB VM, have 1945 huge pages,
> should the allocation fail (and if so why)?

Were enough pages (1024) available in a single NUMA node? Which release are you 
using? There was a bug where node 0 would always be picked (and eventually 
exhausted) but that was - theoretically - fixed under 
https://bugs.launchpad.net/nova/+bug/1386236

> Why do all the 2GB VMs get created on the same NUMA node, instead of
> getting evenly assigned to each of the two NUMA nodes that are available on
> the compute node (as a result, allocation fails, when 1/2 the huge pages
> are used)? I found that increasing mem_page_size to 2048 resolves the
> issue, but don't know why.

What was the mem_page_size before it was 2048? I didn't think any smaller value 
was supported.

> ANother thing I was seeing, when the VM create failed due to not enough
> huge pages available and was in error state, I could delete the VM, but the
> Neutron port was still there.  Is that correct?
> 
> I didn't see any log messages in neutron, requesting to unbind and delete
> the port.
> 
> Thanks!
> 
> PCM
> 
> .
> 
> On Fri, Jun 3, 2016 at 2:03 PM Paul Michali  wrote:
> 
> > Thanks for the link Tim!
> >
> > Right now, I have two things I'm unsure about...
> >
> > One is that I had 1945 huge pages left (of size 2048k) and tried to create
> > a VM with a small flavor (2GB), which should need 1024 pages, but Nova
> > indicated that it wasn't able to find a host (and QEMU reported an
> > allocation issue).
> >
> > The other is that VMs are not being evenly distributed on my two NUMA
> > nodes, and instead, are getting created all on one NUMA node. Not sure if
> > that is expected (and setting mem_page_size to 2048 is the proper way).
> >
> > Regards,
> >
> > PCM
> >
> >
> > On Fri, Jun 3, 2016 at 1:21 PM Tim Bell  wrote:
> >
> >> The documentation at
> >> http://docs.openstack.org/admin-guide/compute-flavors.html is gradually
> >> improving. Are there areas which were not covered in your clarifications ?
> >> If so, we should fix the documentation too since this is a complex area to
> >> configure and good documentation is a great help.
> >>
> >>
> >>
> >> BTW, there is also an issue around how the RAM for the BIOS is shadowed.
> >> I can’t find the page from a quick google but we found an imbalance when
> >> we
> >> used 2GB pages as the RAM for BIOS shadowing was done by default in the
> >> memory space for only one of the NUMA spaces.
> >>
> >>
> >>
> >> Having a look at the KVM XML can also help a bit if you are debugging.
> >>
> >>
> >>
> >> Tim
> >>
> >>
> >>
> >> *From: *Paul Michali 
> >> *Reply-To: *"OpenStack Development Mailing List (not for usage
> >> questions)" 
> >> *Date: *Friday 3 June 2016 at 15:18
> >> *To: *"Daniel P. Berrange" , "OpenStack Development
> >> Mailing List (not for usage questions)" <
> >> openstack-dev@lists.openstack.org>
> >> *Subject: *Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling
> >>
> >>
> >>
> >> See PCM inline...
> >>
> >> On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
> >> wrote:
> >>
> >> On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> >> > Hi!
> >> >
> >> > I've been playing with Liberty code a bit and had some questions that
> >> I'm
> >> > hoping Nova folks may be able to provide guidance on...
> >> >
> >> > If I set up a flavor with hw:mem_page_size=2048, and I'm creating
> >> (Cirros)
> >> > VMs with size 1024, will the scheduling use the minimum of the number of
> >>
> >> 1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?
> >>
> >>
> >>
> >> PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the
> >> page size is 2048K, so 1024 pages? Hope I have the units right.
> >>
> >>
> >>
> >>
> >>
> >>
> >> > huge pages available and the size requested for the VM, or will it base
> >> > scheduling only on the number of huge pages?
> >> >
> >> > It seems to be doing the latter, where I had 1945 huge pages free, and
> >> > tried to create another VM (1024) and Nova rejected the request with "no
> >> > hosts available".
> >>
> >> From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.
> >>
> >> Anyway, when you request huge pages to be used for a flavour, the
> >> entire guest RAM must be able to be allocated from huge pages.
> >> ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
> >> of huge pages available. It is not possible for a VM to use
> >> 1.5 GB of huge pages and 500 MB of normal sized 
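The arithmetic behind the failure case in this thread can be sketched quickly. Assuming 2048 KB pages, a 2 GB flavor, and the guest's memory having to come from a single NUMA node (an assumption — placement details vary by release, see the bug referenced above), 1945 free pages host-wide is not enough if they are split across two nodes:

```python
PAGE_SIZE_KB = 2048    # hw:mem_page_size=2048, i.e. 2 MB pages
FLAVOR_RAM_MB = 2048   # "small" flavor, 2 GB of guest RAM

# Pages needed to back the entire guest RAM with huge pages
pages_needed = FLAVOR_RAM_MB * 1024 // PAGE_SIZE_KB
print(pages_needed)  # 1024

# The guest RAM must be allocated entirely from huge pages, and (with a
# single-node guest topology) entirely from one NUMA node.  1945 free
# pages host-wide can still fail if no single node has 1024 free:
free_per_node = {0: 973, 1: 972}   # hypothetical split of the 1945 pages
can_boot = any(free >= pages_needed for free in free_per_node.values())
print(can_boot)  # False: neither node can satisfy 1024 pages
```

This matches the symptom described: a host that looks like it has enough free huge pages in aggregate, but no single NUMA node that can satisfy the whole guest.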

Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-09 Thread Dan Smith
> According to the state of this review:
> https://review.openstack.org/#/c/317689/ the works aren't going to be
> done in this cycle.

This is a procedural -2 waiting for all the following patches to be
reviewed and passing 3rd party CI before we land them. We certainly
expect to get this work into the tree in newton.

This refactor work _is_ a priority for nova in newton, which is why we
said in Austin that it was important to get it done before we add more
drivers. Reviewing that code will help accelerate the process --
hopefully you're helping in that area :)

--Dan



[openstack-dev] [ironic][infra][qa] Ironic grenade work nearly complete

2016-06-09 Thread Jim Rollenhagen
Hi friends,

We're two patches away from having grenade passing in our check queue!
This is a huge step forward for us, many thanks go to the numerous folks
that have worked on or helped somehow with this.

I'd love to push this across the line today as it's less than 10 lines
of changes between the two, and we have a bunch of work nearly done that
we'd like upgrade testing running against before merging.

So we need infra cores' help here.

https://review.openstack.org/#/c/316662/ - devstack-gate
Allow to pass OS_TEST_TIMEOUT for grenade job
1 line addition with an sdague +2.

https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue)
+7,-1 with an AJaeger +2.

Thanks in advance. :)

// jim



Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-09 Thread Diana Clarke
Hi Alex:

We still hope to land this patch series during this cycle. If you're
referring to the -2 on the patch you mentioned [1], it was just a
procedural -2 until we stopped using the old methods in the driver and
cutover completely to the new methods. I'll ping Dan Smith on IRC
later today, and see if he's ready to revisit his -2.

The cutover to the new methods is in this patch [2], and then there is
a big delete of all the old methods and tests in this patch [3].
Yesterday, the entire patch series was green as far as CI was
concerned, including the experimental job Matt Riedemann kindly added
for LVM (thanks!!!). There are now a few merge conflicts, but it's
still otherwise ready for review.

Feodor has been an excellent reviewer, and it would be great to
continue to get his feedback on this patch series. We hope to be able
to reciprocate when the ScaleIO patches are up for review since we'll
be familiar with that area of the code. The entire patch series can be
found here [4] if anyone else is interested in jumping in on reviews
too.

Thanks folks!

--diana

[1] https://review.openstack.org/#/c/317689/
[2] https://review.openstack.org/#/c/282580/
[3] https://review.openstack.org/#/c/322974/
[4] 
https://review.openstack.org/#/q/openstack/nova+topic:libvirt-instance-storage

On Wed, Jun 8, 2016 at 1:05 PM, Alexandre Levine
 wrote:
> Hi Matt,
>
> According to the state of this review:
> https://review.openstack.org/#/c/317689/ the works aren't going to be done
> in this cycle.
>
> Do you think it'd be possible for our driver to cut in now?
>
> Feodor participated in reviewing and helped as much as possible with current
> efforts and if needed we can spare even more resources to help with the
> refactoring in the next cycle.
>
> Best regards,
>
>   Alex Levine
>
>
> On 5/10/16 7:40 PM, Matt Riedemann wrote:
>>
>> On 5/10/2016 11:24 AM, Alexandre Levine wrote:
>>>
>>> Hi Matt,
>>>
>>> Sorry I couldn't reply earlier - was away.
>>> I'm worrying about ScaleIO ephemeral storage backend
>>>
>>> (https://blueprints.launchpad.net/nova/+spec/scaleio-ephemeral-storage-backend)
>>> which is not in this list but various clients are very interested in
>>> having it working along with or instead of Ceph. Especially I'm worrying
>>> in view of the global libvirt storage pools refactoring which looks like
>>> a quite global effort to me judging by a number of preliminary reviews.
>>> It seems to me that we wouldn't be able to squeeze ScaleIO additions
>>> after this refactoring.
>>> What can be done about it?
>>> We could've contribute our initial changes to current code (which would
>>> potentially allow easy backporting to previous versions as a benefit
>>> afterwards) and promise to update our parts along with the refactoring
>>> reviews or something like this.
>>>
>>> Best regards,
>>>   Alex Levine
>>>
>>>
>>> On 5/6/16 3:34 AM, Matt Riedemann wrote:

 There are still a few design summit sessions from the summit that I'll
 recap but I wanted to get the priorities session recap out as early as
 possible. We held that session in the last slot on Thursday. The full
 etherpad is here [1].

 The first part of the session was mostly going over schedule milestones.

 We already started Newton with a freeze on spec approvals for new
 things since we already have a sizable backlog [2]. Now that we're
 past the summit we can approve specs for new things again.

 The full Newton release schedule for Nova is in this wiki [3].

 These are the major dates from here on out:

 * June 2: newton-1, non-priority spec approval freeze
 * June 30: non-priority feature freeze
 * July 15: newton-2
 * July 19-21: Nova Midcycle
 * Aug 4: priority spec approval freeze
 * Sept 2: newton-3, final python-novaclient release, FeatureFreeze,
 Soft StringFreeze
 * Sept 16: RC1 and Hard StringFreeze
 * Oct 7, 2016: Newton Release

 The important thing for most people right now is we have exactly four
 weeks until the non-priority spec approval freeze. We then have about
 one month after that to land all non-priority blueprints.

 Keep in mind that we've already got 52 approved blueprints and most of
 those were re-approved from Mitaka, so have been approved for several
 weeks already.

 The non-priority blueprint cycle is intentionally restricted in Newton
 because of all of the backlog work we've had spilling over into this
 release. We really need to focus on getting as much of that done as
 possible before taking on more new work.

 For the rest of the priorities session we talked about what our actual
 review priorities are for Newton. The list with details and owners is
 already available here [4].

 In no particular order, these are the review priorities:

 * Cells v2
 * Scheduler
 * API Improvements
 * os-vif integration

Re: [openstack-dev] [neutron][SFC]

2016-06-09 Thread Alioune
Mohan,

I would like to redirect all HTTP flows in the tenant network to the
port-chain, and according to your explanation I specify the neutron port
of the source VM in the classifier.

Is there a generic way to put all traffic going to a web server in the
tenant network into the chain? (to avoid setting the neutron port of the
source VM)

Regards,

On 9 June 2016 at 16:32, Mohan Kumar  wrote:

> Alioune,
>
>logical-source-port is egress neutron-port of  source vm , typically
>  flow-classifier will classifies packets coming to this neutron port and
> forwards to the rest of port-chain if other classifier conditions are
> matches.
>
> Thanks.,
> Mohankumar.N
>
>
>
> On Thu, Jun 9, 2016 at 7:20 PM, Alioune  wrote:
>
>> Thanks Mohan,
>>
>> After setting service_plugins and adding sfc tables to neutrondb, I can
>> create port-pair, port-pair-group but classifier creation still claim a
>> logical-source-port parameter.
>>
>> neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
>> 55.55.55.2/32  --destination-ip-prefix 55.55.55.9/32  --protocol tcp
>>  --source-port 22:22  --destination-port 1:65000 FC1
>> ERROR:
>> neutron flow-classifier-create: error: argument --logical-source-port is
>> required
>> Try 'neutron help flow-classifier-create' for more information.
>>
>> Please someone can explain what does --logical-source-port correspond to ?
>> Does the classifier require port-create like SF ?
>>
>> Regards,
>>
>>
>> On 9 June 2016 at 09:21, Mohan Kumar  wrote:
>>
>>> Alioune,
>>>
>>> networking-sfc  resources not installed / not reachable , If installation
>>> is okay, Possibly you may missed service_plugins entry in *neutron.conf
>>> *( in case of manual networking-sfc installation)
>>>
>>> it should be ,
>>>
>>> *service_plugins =
>>> neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin*
>>>
>>> *and restart q-svc services in screen -x *
>>>
>>> *Thanks.,*
>>> *Mohankumar.N *
>>>
>>> On Thu, Jun 9, 2016 at 12:58 AM, Alioune  wrote:
>>>
 I've switched from devstack to a normal deployment of openstack/mitaka
 and neutron-l2 agent is working fine with sfc. I can boot instances, create
 ports.
 However I can not create neither flow-classifier nor port-pair ...

 neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
 22.1.20.1/32 --destination-ip-prefix 172.4.5.6/32 --protocol tcp
 --source-port 23:23 --destination-port 100:100 FC1

 returns: neutron flow-classifier-create: error: argument
 --logical-source-port is required
 Try 'neutron help flow-classifier-create' for more information.

  neutron port-pair-create --ingress=p1 --egress=p2 PP1
 404 Not Found

 The resource could not be found.

 Neutron server returns request_ids:
 ['req-1bfd0983-4a61-4b32-90b3-252004d90e65']

 neutron --version
 4.1.1

 p1,p2,p3,p4 have already been created, I can ping instances attached to
 these ports.
 Since I've not installed networking-sfc, are there additional config to
 set in neutron config files ?
 Or is it due to neutron-client version ?

 Regards

 On 8 June 2016 at 20:31, Mohan Kumar  wrote:

> neutron agent not able to fetch details from ovsdb . Could you check
> below options 1.restart ovsdb-server and execute ovs_vsctl list-br  2.
> execute ovs- vsctl list-br manually and try to check error.
>
> 3. Could be ovs cleanup issue , please check the output of sudo
> service openvswitch restart and /etc/init.d/openvswich** restart , both
> should be same
>
> Thanks.,
> Mohankumar.N
> On Jun 7, 2016 6:04 PM, "Alioune"  wrote:
>
>> Hi Mohan/Cathy
>>  I've installed now ovs 2.4.0 and followed
>> https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but
>> I got this error :
>> Regards,
>>
>> + neutron-ovs-cleanup
>> 2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
>> enabled!
>> 2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
>> /usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> [-] Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
>> '--format=json', '--', 'list-br'].
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> Traceback (most recent call last):
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> File "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
>> run_vsctl
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>>   log_fail_as_error=False).rstrip()
>> 

Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread Emilien Macchi
On Thu, Jun 9, 2016 at 10:03 AM, Steven Hardy  wrote:
> Hi all,
>
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
>
> We've agreed that rather than create another gerrit group, they can be
> added to tripleo-core and agree to restrict +A to this repo for the time
> being (hopefully they'll both continue to review more widely, and obviously
> Tomas is a former TripleO core anyway, so welcome back! :)
>
> If folks feel strongly we should create another group we can, but this
> seems like a low-overhead approach, and well aligned with the scope of the
> repo, let me know if you disagree.

+1 on my side too. I think in this case it's a good choice.

> Also, while reviewing the core group[2] I noticed the following members who
> are no longer active and should probably be removed:
>
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum
>
> I know Clint is still involved with DiB (which has a separate core group),
> but he's indicated he's no longer going to be directly involved in other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their contribution,
> we'll gladly add you back in the future should you wish to return :)
>
> Please let me know if there are any concerns or objections, if there are
> none I will make these changes next week.
>
> Thanks,
>
> Steve
>
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi



Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread John Trowbridge


On 06/09/2016 10:10 AM, Dougal Matthews wrote:
> On 9 June 2016 at 15:03, Steven Hardy  wrote:
> 
>> Hi all,
>>
>> I've been in discussion with Martin André and Tomas Sedovic, who are
>> involved with the creation of the new tripleo-validations repo[1]
>>
>> We've agreed that rather than create another gerrit group, they can be
>> added to tripleo-core and agree to restrict +A to this repo for the time
>> being (hopefully they'll both continue to review more widely, and obviously
>> Tomas is a former TripleO core anyway, so welcome back! :)
>>
> 
> +1, I think this approach works fine. Requiring sub groups only makes sense
> if we don't feel we can trust people, but then they shouldn't be core. It
> might
> be worth documenting this somewhere however as we have a few restricted
> cores.
> 

So, I am not strongly opinionated in either direction. However, I do
think sub groups can make some sense. I don't know if it makes sense or
not for tripleo-validations, but I quite like it for tripleo-quickstart.
I like that I can choose to trust someone with +2 on tripleo-quickstart
without forcing the rest of tripleo to trust them with +2 on all
projects. I think this could be an interesting model where we could have
at least one main tripleo core in any sub group who is responsible for
mentoring new people to become core in their sub group, and hopefully
eventually into the main tripleo core group.

There is also an accounting reason for sub groups. With no sub groups,
we would just have one large core team. However, effectively many of
these cores would actually be sub group specialists, and not +2ing
outside of their specialty. It is then hard to have any useful
accounting of how many tripleo-cores are specialists vs. generalists vs.
generalists with a specialty. Not sure if that is worth the overhead of
sub groups in and of itself, just wanted to point out there is more
benefit than just the trust issue.

> 
> If folks feel strongly we should create another group we can, but this
>> seems like a low-overhead approach, and well aligned with the scope of the
>> repo, let me know if you disagree.
>>
>> Also, while reviewing the core group[2] I noticed the following members who
>> are no longer active and should probably be removed:
>>
>> - Radomir Dopieralski
>> - Martyn Taylor
>> - Clint Byrum
>>
>> I know Clint is still involved with DiB (which has a separate core group),
>> but he's indicated he's no longer going to be directly involved in other
>> tripleo development, and AFAIK neither Martyn or Radomir are actively
>> involved in TripleO reviews - thanks to them all for their contribution,
>> we'll gladly add you back in the future should you wish to return :)
>>
>> Please let me know if there are any concerns or objections, if there are
>> none I will make these changes next week.
>>
>> Thanks,
>>
>> Steve
>>
>> [1] https://github.com/openstack/tripleo-validations
>> [2] https://review.openstack.org/#/admin/groups/190,members
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-09 Thread Ben Meyer
On 06/08/2016 11:05 PM, Chris Friesen wrote:
> On 06/07/2016 04:26 PM, Ben Meyer wrote:
>> On 06/07/2016 06:09 PM, Samuel Merritt wrote:
>>> On 6/7/16 12:00 PM, Monty Taylor wrote:
 [snip]

 I'd rather see us focus energy on Python3, asyncio and its pluggable
 event loops. The work in:

 http://magic.io/blog/uvloop-blazing-fast-python-networking/

 is a great indication in an actual apples-to-apples comparison of what
 can be accomplished in python doing IO-bound activities by using
 modern
 Python techniques. I think that comparing python2+eventlet to a fresh
 rewrite in Go isn't 100% of the story. A TON of work has gone in to
 Python that we're not taking advantage of because we're still
 supporting
 Python2. So what I've love to see in the realm of comparative
 experimentation is to see if the existing Python we already have
 can be
 leveraged as we adopt newer and more modern things.
>>>
>>> Asyncio, eventlet, and other similar libraries are all very good for
>>> performing asynchronous IO on sockets and pipes. However, none of them
>>> help for filesystem IO. That's why Swift needs a golang object server:
>>> the go runtime will keep some goroutines running even though some
>>> other goroutines are performing filesystem IO, whereas filesystem IO
>>> in Python blocks the entire process, asyncio or no asyncio.
>>
>> That can be modified. gevent has a tool
>> (http://www.gevent.org/gevent.fileobject.html) that enables the File IO
>> to be async  as well by putting the file into non-blocking mode. I've
>> used it, and it works and scales well.
>
> Arguably non-blocking isn't really async when it comes to reads.  I
> suspect what we really want is full-async where you issue a request
> and then get notified when it's done.

So when it comes to Swift or Glance, where you have to transfer large
amounts of data between the HTTP client and HTTP server under WSGI,
the only way to make it truly cooperative for eventlet, gevent, etc. is
to use non-blocking file I/O. These situations also reveal how
uncooperative green threads can be: if you can keep the data
pipeline full (e.g. the read is continuous because the OS is able to
service it quickly), then one green thread will take over and block the
others. Non-blocking I/O is only part of the solution, not the entire
solution.

Furthermore, the constraint of the web head having to provide the data
transfer makes the full-async offload with a notification impossible for
data transfer services like Swift. Tools like uvloop make great strides
in providing better environments for doing cooperative tasks than what
existed prior, and some of the articles comparing it to the various
tools - both alone and in combination - make for some fascinating
possibilities and solutions. Most likely the best solution here is going
to be going to Py3's asyncio+uvloop+non-blocking I/O on the files in
order to hit the throughput for Swift.
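As a stdlib-only sketch of that direction (no uvloop, and emphatically not Swift code — the file and helper names are illustrative), blocking file reads can at least be pushed off the event loop into a thread pool so other coroutines keep running while the disk is busy:

```python
import asyncio
import tempfile


def blocking_read(path):
    # Ordinary blocking file I/O; asyncio cannot await this directly.
    with open(path, "rb") as f:
        return f.read()


async def read_async(path):
    # Run the blocking read in the default thread-pool executor so the
    # event loop (and every other coroutine) is not stalled by disk waits.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_read, path)


async def main(path):
    ticks = 0

    async def heartbeat():
        # Stands in for "other requests" that should not be starved.
        nonlocal ticks
        for _ in range(3):
            await asyncio.sleep(0)
            ticks += 1

    # Both run concurrently: the heartbeat keeps ticking during the read.
    data, _ = await asyncio.gather(read_async(path), heartbeat())
    return data, ticks


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tf:
        tf.write(b"x" * 1024)
        path = tf.name
    data, ticks = asyncio.run(main(path))
    print(len(data), ticks)  # 1024 3
```

This is still thread-pool offload rather than true kernel-level async file I/O, so it illustrates the workaround being discussed, not a full solution to the throughput problem.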

uvloop (first commit 2015-11-01) is newer than Swift's hummingbird
(2015-04-20, based on
https://github.com/openstack/swift/commit/a0e300df180f7f4ca64fc1eaf3601a1a73fc68cb
and github network graph) so it would not have been part of the
consideration.

$0.02

Ben



[openstack-dev] [api] [docs] Call for help: navigation for all API docs

2016-06-09 Thread Anne Gentle
Hi all,

Teams are making great progress in the source migration and the even newer
styling [1] is looking great!

What I'd like to ask for help on next is unifying navigation for the
content being published to developer.openstack.org/api-ref/ and
developer.openstack.org/api-guide/.

Previously, we had a sidebar for each service and version's API reference
document [2]. The sidebar is responsive, however, with the new Sphinx
sidebar, the service methods take over the sidebar. Also the prior sidebar
navigation was for reference information only.

The request is to provide a navigation that lets readers see all the
documented OpenStack APIs in a unified way. Perhaps an upper navigation
that can expand would be best. We do have the openstackdocstheme [3]
expanding sidebar menu with version info that perhaps can be reused.

Requirements based on our current tooling:
- Design should integrate well with our current theme, openstackdocstheme.
[3]
- Design should integrate well with the os-api-ref extensions. [4]
- Design should consider that some APIs have multiple versions.
- Both of the above are Sphinx-based and include jquery, bootstrap, and CSS
integration. They currently use Bootstrap v3.2.0 and jQuery 1.11.3, but these
versions are not required.
- Responsive design required; however mobile is 5% of traffic currently.
- Primary browsers are Chrome (60%), Firefox (26%), Safari and IE/Edge
(about 6% each).
- Should be able to add links to new API information through patchsets on
review.openstack.org.
- Should link to both API reference information and API tutorials and
guides.

If you're interested or have ideas, please write back to the openstack-dev
list.

Thanks,
Anne

1. https://api.os.gra.ham.ie/compute/ Thanks Graham Hayes!
2. http://developer.openstack.org/api-ref.html
3. https://github.com/openstack/openstackdocstheme
4. https://github.com/openstack/os-api-ref

-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [neutron][SFC]

2016-06-09 Thread Mohan Kumar
Alioune,

   logical-source-port is the egress neutron port of the source VM;
typically the flow-classifier classifies packets arriving on this
neutron port and forwards them to the rest of the port-chain if the
other classifier conditions match.

Thanks.,
Mohankumar.N



On Thu, Jun 9, 2016 at 7:20 PM, Alioune  wrote:

> Thanks Mohan,
>
> After setting service_plugins and adding sfc tables to neutrondb, I can
> create port-pair, port-pair-group but classifier creation still claim a
> logical-source-port parameter.
>
> neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
> 55.55.55.2/32  --destination-ip-prefix 55.55.55.9/32  --protocol tcp
>  --source-port 22:22  --destination-port 1:65000 FC1
> ERROR:
> neutron flow-classifier-create: error: argument --logical-source-port is
> required
> Try 'neutron help flow-classifier-create' for more information.
>
> Please someone can explain what does --logical-source-port correspond to ?
> Does the classifier require port-create like SF ?
>
> Regards,
>
>
> On 9 June 2016 at 09:21, Mohan Kumar  wrote:
>
>> Alioune,
>>
>> networking-sfc  resources not installed / not reachable , If installation
>> is okay, Possibly you may missed service_plugins entry in *neutron.conf *(
>> in case of manual networking-sfc installation)
>>
>> it should be ,
>>
>> *service_plugins =
>> neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin*
>>
>> *and restart q-svc services in screen -x *
>>
>> *Thanks.,*
>> *Mohankumar.N *
>>
>> On Thu, Jun 9, 2016 at 12:58 AM, Alioune  wrote:
>>
>>> I've switched from devstack to a normal deployment of openstack/mitaka
>>> and neutron-l2 agent is working fine with sfc. I can boot instances, create
>>> ports.
>>> However I can not create neither flow-classifier nor port-pair ...
>>>
>>> neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
>>> 22.1.20.1/32 --destination-ip-prefix 172.4.5.6/32 --protocol tcp
>>> --source-port 23:23 --destination-port 100:100 FC1
>>>
>>> returns: neutron flow-classifier-create: error: argument
>>> --logical-source-port is required
>>> Try 'neutron help flow-classifier-create' for more information.
>>>
>>>  neutron port-pair-create --ingress=p1 --egress=p2 PP1
>>> 404 Not Found
>>>
>>> The resource could not be found.
>>>
>>> Neutron server returns request_ids:
>>> ['req-1bfd0983-4a61-4b32-90b3-252004d90e65']
>>>
>>> neutron --version
>>> 4.1.1
>>>
>>> p1,p2,p3,p4 have already been created, I can ping instances attached to
>>> these ports.
>>> Since I've not installed networking-sfc, are there additional config to
>>> set in neutron config files ?
>>> Or is it due to neutron-client version ?
>>>
>>> Regards
>>>
>>> On 8 June 2016 at 20:31, Mohan Kumar  wrote:
>>>
 neutron agent not able to fetch details from ovsdb . Could you check
 below options 1.restart ovsdb-server and execute ovs_vsctl list-br  2.
 execute ovs- vsctl list-br manually and try to check error.

 3. Could be ovs cleanup issue , please check the output of sudo service
 openvswitch restart and /etc/init.d/openvswich** restart , both should be
 same

 Thanks.,
 Mohankumar.N
 On Jun 7, 2016 6:04 PM, "Alioune"  wrote:

> Hi Mohan/Cathy
>  I've installed now ovs 2.4.0 and followed
> https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but
> I got this error :
> Regards,
>
> + neutron-ovs-cleanup
> 2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
> enabled!
> 2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
> /usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl [-]
> Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
> '--format=json', '--', 'list-br'].
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> Traceback (most recent call last):
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> File "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
> run_vsctl
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> log_fail_as_error=False).rstrip()
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in 
> execute
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> raise RuntimeError(m)
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> RuntimeError:
> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
> Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
> '--format=json', '--', 'list-br']
> 2016-06-07 

Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread Ben Nemec
On 06/09/2016 09:03 AM, Steven Hardy wrote:
> Hi all,
> 
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
> 
> We've agreed that rather than create another gerrit group, they can be
> added to tripleo-core and agree to restrict +A to this repo for the time
> being (hopefully they'll both continue to review more widely, and obviously
> Tomas is a former TripleO core anyway, so welcome back! :)
> 
> If folks feel strongly we should create another group we can, but this
> seems like a low-overhead approach, and well aligned with the scope of the
> repo, let me know if you disagree.

As I noted in the previous discussion on this topic, I prefer this
approach anyway so +1 from me. :-)

> 
> Also, while reviewing the core group[2] I noticed the following members who
> are no longer active and should probably be removed:
> 
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum
> 
> I know Clint is still involved with DiB (which has a separate core group),
> but he's indicated he's no longer going to be directly involved in other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their contribution,
> we'll gladly add you back in the future should you wish to return :)
> 
> Please let me know if there are any concerns or objections, if there are
> none I will make these changes next week.
> 
> Thanks,
> 
> Steve
> 
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread Dougal Matthews
On 9 June 2016 at 15:03, Steven Hardy  wrote:

> Hi all,
>
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
>
> We've agreed that rather than create another gerrit group, they can be
> added to tripleo-core and agree to restrict +A to this repo for the time
> being (hopefully they'll both continue to review more widely, and obviously
> Tomas is a former TripleO core anyway, so welcome back! :)
>

+1, I think this approach works fine. Requiring sub-groups only makes sense
if we don't feel we can trust people, but then they shouldn't be core. It
might be worth documenting this somewhere, however, as we have a few
restricted cores.


> If folks feel strongly we should create another group we can, but this
> seems like a low-overhead approach, and well aligned with the scope of the
> repo, let me know if you disagree.
>
> Also, while reviewing the core group[2] I noticed the following members who
> are no longer active and should probably be removed:
>
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum
>
> I know Clint is still involved with DiB (which has a separate core group),
> but he's indicated he's no longer going to be directly involved in other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their contribution,
> we'll gladly add you back in the future should you wish to return :)
>
> Please let me know if there are any concerns or objections, if there are
> none I will make these changes next week.
>
> Thanks,
>
> Steve
>
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-09 Thread Steven Hardy
Hi all,

I've been in discussion with Martin André and Tomas Sedovic, who are
involved with the creation of the new tripleo-validations repo[1]

We've agreed that rather than create another gerrit group, they can be
added to tripleo-core and agree to restrict +A to this repo for the time
being (hopefully they'll both continue to review more widely, and obviously
Tomas is a former TripleO core anyway, so welcome back! :)

If folks feel strongly we should create another group we can, but this
seems like a low-overhead approach, and well aligned with the scope of the
repo, let me know if you disagree.

Also, while reviewing the core group[2] I noticed the following members who
are no longer active and should probably be removed:

- Radomir Dopieralski
- Martyn Taylor
- Clint Byrum

I know Clint is still involved with DiB (which has a separate core group),
but he's indicated he's no longer going to be directly involved in other
tripleo development, and AFAIK neither Martyn or Radomir are actively
involved in TripleO reviews - thanks to them all for their contribution,
we'll gladly add you back in the future should you wish to return :)

Please let me know if there are any concerns or objections, if there are
none I will make these changes next week.

Thanks,

Steve

[1] https://github.com/openstack/tripleo-validations
[2] https://review.openstack.org/#/admin/groups/190,members

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][SFC]

2016-06-09 Thread Alioune
Thanks Mohan,

After setting service_plugins and adding the sfc tables to the neutron DB, I
can create a port-pair and a port-pair-group, but classifier creation still
claims a logical-source-port parameter.

neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
55.55.55.2/32  --destination-ip-prefix 55.55.55.9/32  --protocol tcp
 --source-port 22:22  --destination-port 1:65000 FC1
ERROR:
neutron flow-classifier-create: error: argument --logical-source-port is
required
Try 'neutron help flow-classifier-create' for more information.

Could someone please explain what --logical-source-port corresponds to?
Does the classifier require a port-create like an SF does?

Regards,


On 9 June 2016 at 09:21, Mohan Kumar  wrote:

> Alioune,
>
> networking-sfc  resources not installed / not reachable , If installation
> is okay, Possibly you may missed service_plugins entry in *neutron.conf *(
> in case of manual networking-sfc installation)
>
> it should be ,
>
> *service_plugins =
> neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin*
>
> *and restart q-svc services in screen -x *
>
> *Thanks.,*
> *Mohankumar.N *
>
> On Thu, Jun 9, 2016 at 12:58 AM, Alioune  wrote:
>
>> I've switched from devstack to a normal deployment of openstack/mitaka
>> and neutron-l2 agent is working fine with sfc. I can boot instances, create
>> ports.
>> However I can not create neither flow-classifier nor port-pair ...
>>
>> neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
>> 22.1.20.1/32 --destination-ip-prefix 172.4.5.6/32 --protocol tcp
>> --source-port 23:23 --destination-port 100:100 FC1
>>
>> returns: neutron flow-classifier-create: error: argument
>> --logical-source-port is required
>> Try 'neutron help flow-classifier-create' for more information.
>>
>>  neutron port-pair-create --ingress=p1 --egress=p2 PP1
>> 404 Not Found
>>
>> The resource could not be found.
>>
>> Neutron server returns request_ids:
>> ['req-1bfd0983-4a61-4b32-90b3-252004d90e65']
>>
>> neutron --version
>> 4.1.1
>>
>> p1,p2,p3,p4 have already been created, I can ping instances attached to
>> these ports.
>> Since I've not installed networking-sfc, are there additional config to
>> set in neutron config files ?
>> Or is it due to neutron-client version ?
>>
>> Regards
>>
>> On 8 June 2016 at 20:31, Mohan Kumar  wrote:
>>
>>> neutron agent not able to fetch details from ovsdb . Could you check
>>> below options 1.restart ovsdb-server and execute ovs_vsctl list-br  2.
>>> execute ovs- vsctl list-br manually and try to check error.
>>>
>>> 3. Could be ovs cleanup issue , please check the output of sudo service
>>> openvswitch restart and /etc/init.d/openvswich** restart , both should be
>>> same
>>>
>>> Thanks.,
>>> Mohankumar.N
>>> On Jun 7, 2016 6:04 PM, "Alioune"  wrote:
>>>
 Hi Mohan/Cathy
  I've installed now ovs 2.4.0 and followed
 https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but
 I got this error :
 Regards,

 + neutron-ovs-cleanup
 2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
 enabled!
 2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
 /usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl [-]
 Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
 '--format=json', '--', 'list-br'].
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 Traceback (most recent call last):
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 File "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
 run_vsctl
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 log_fail_as_error=False).rstrip()
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in 
 execute
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 raise RuntimeError(m)
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 RuntimeError:
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
 '--format=json', '--', 'list-br']
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Exit
 code: 1
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
 2016-06-07 11:25:36.512 22147 CRITICAL neutron [-] RuntimeError:
 Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
 '--format=json', '--', 'list-br']
 Exit code: 1

 2016-06-07 11:25:36.512 22147 ERROR neutron 

[openstack-dev] [new][openstack] osc-lib 0.1.0 release (newton)

2016-06-09 Thread no-reply
We are amped to announce the release of:

osc-lib 0.1.0: OpenStackClient Library

This is the first release of osc-lib. This release is part of the
newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/osc-lib

With package available at:

https://pypi.python.org/pypi/osc-lib

Please report issues through launchpad:

https://bugs.launchpad.net/python-openstackclient

For more details, please see below.

Changes in osc-lib 381e53813efd2b32dd3d7380e892d9c6c9dd64b7..0.1.0
--

b4fd4ed Backport i18n fixes
26080eb Backport log fix
263dd52 Backport --os-beta-command
876d81e Error handling for KeyValueAction class.
6198590 Updated from global requirements
deb9ade Updated from global requirements
f8f1286 Updated from global requirements
6c8f414 Updated from global requirements
e8d95be Updated from global requirements
d22f26d Change is_network_endpoint_enabled() to is_service_available()
26ff8c9 Clean up API
de84dda Move api.api and api.utils to osc_lib
59edb51 Move shell to osc_lib and begin rework
0901727 Add transition doc
e584cef Rework TLS option handling
a122965 Remove keystoneclient dependency
30f6d2a Move clientmanager to osc_lib
742c28d Updated from global requirements
ce181fa Updated from global requirements
dda54dc fix the docs build
00d79fe Fix imports in remaining openstackclient modules for testing
2bcf739 Begin moving bits to osc_lib
99fddac Make remaining tests pass
641ae6b Trim requirements.txt and test-requirements.txt
f78173f Rename to osc-lib
15574df Implement "address scope set" command
21928d0 Implement "address scope show" command
bac5d7d Implement "address scope list" command
43d963c Implement "address scope delete" command
9dfdea6 Implement "address scope create" command
3aa7949 Updated from global requirements
3c244b6 Ignore domain related config when using with keystone v2
81fef68 Updated from global requirements
c6551a1 Ignore domain related config when using with keystone v2
183154c remove assert in favor an if/else
970a296 Replace tempest-lib with tempest.lib
fdb2d16 add a bandit environment to tox
972ae67 Support for volume service list
0d3f1c2 Updated from global requirements
4966b4c Add "server group show" command
bef62d7 Add "server group list" command
a4c09af Add "server group delete" command
0828877 Add "server group create" command
71c042b Fix mutable default arguments in tests
39174a7 Rename --profile to --os-profile
9fc5cb7 Updated from global requirements
521ff87 Updated from global requirements
38a0246 Propagate AttributeErrors when lazily loading plugins
e763be1 Updated from global requirements
d2f9bf8 Move keys() methods in each resource class to FakeResource
55a7d39 Updated from global requirements
c3fd814 Updated from global requirements
50f7591 Support client certificate/key
5705548 Fix typos in docstrings and comments
43bf253 Use fixtures and addCleanup instead of tearDown
d8ee59e Don't mask authorization errors
25c8d54 Remove unused method 'from_response'
7b49979 Refactor security group rule list to use SDK
a7c8353 Add "aggregate unset" to osc
fd743b1 Subnet: Add "subnet set" command using SDK
bddd364 [Floating IP] Neutron support for "ip floating create" command
22fa638 Refactor security group rule create to use SDK
1018a63 Add Subnet add/remove support to router
fedf9f2 Add "router remove port" to osc
e246bb4 Add "router add port" to osc
f589f79 Updated from global requirements
eb19946 update docs with status of plugins
1be4aca Updated from global requirements
c9cfb93 Use assertItemsEqual() instead of assertListEqual()
951f166 Fix dict.keys() compatibility for python 3
74776c8 Add "os subnet create" command using SDK
ba398c4 Refactor security group create to use SDK
5342528 Refactor security group show to use SDK
27e72cf Add 'port set' command
5ff7fcc [Subnet pool] Add 'subnet pool create' command support
5c24929 [Subnet pool] Add 'subnet pool set' command support
a07081c remove py26 workaround in osc
95fb0e0 Add port list command
d4ef56c Trivial: Remove useless return
a1d6de2 Updated from global requirements
0f4c87c Add 'port create' command
f7e5bf9 Updated from global requirements
57d68e9 Updated from global requirements
746a7b9 Refactor security group set to use SDK
681f4d6 Updated from global requirements
b4fcf95 Fix regression in interactive client mode
b5ce7e9 Subnet: Add "subnet delete" command using SDK
27351a9 fix: Exception message includes unnecessary class args
02ab8eb Refactor security group list to use SDK
da6fe12 Add MultiKeyValueAction to custom parser action
36984cf Updated from global requirements
cdf11a5 [compute] Add set host command
27db5be Add shell --profile option to trigger osprofiler from CLI
f2a195e Floating IP: Neutron support for "ip floating show" command
04e9391 Improve tox to show coverage report on same window
bf00440 Updated from global requirements
0df25d5 Defaults are ignored with flake8
b1ee642 Fixed a bunch of spacing

Re: [openstack-dev] [kolla] stepping down from core

2016-06-09 Thread Martin André
This is not a goodbye Jeff. Have fun on your next adventure.

Martin

On Tue, Jun 7, 2016 at 1:40 PM, Ryan Hallisey  wrote:
> Thanks for all the hard work Jeff!  I'm sure our paths will cross again!
>
> -Ryan
>
> - Original Message -
> From: "Michał Jastrzębski" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, June 6, 2016 7:13:00 PM
> Subject: Re: [openstack-dev] [kolla] stepping down from core
>
> Damn, bad news:( All the best Jeff!
>
> On 6 June 2016 at 17:57, Vikram Hosakote (vhosakot)  
> wrote:
>> Thanks for all the contributions to kolla and good luck Jeff!
>>
>> Regards,
>> Vikram Hosakote
>> IRC: vhosakot
>>
>> From: "Steven Dake (stdake)" 
>> Reply-To: OpenStack Development Mailing List
>> 
>> Date: Monday, June 6, 2016 at 6:14 PM
>> To: OpenStack Development Mailing List 
>> Subject: Re: [openstack-dev] [kolla] stepping down from core
>>
>> Jeff,
>>
>> Thanks for the notification.  Likewise it has been a pleasure working with
>> you over the last 3 years on Kolla.  I've removed you from gerrit.
>>
>> You have made a big impact on Kolla.  For folks that don't know, at one
>> point Kolla was nearly dead, and Jeff was one of our team of 3 that stuck
>> to it.  Without Jeff to carry the work forward, OpenStack deployment in
>> containers would have been set back years.
>>
>> Best wishes on what you work on next.
>>
>> Regards
>> -steve
>>
>> On 6/6/16, 12:36 PM, "Jeff Peeler"  wrote:
>>
>> Hi all,
>>
>> This is my official announcement to leave core on Kolla /
>> Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
>> we'll cross paths again!
>>
>> Jeff
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 6/2/2016

2016-06-09 Thread Ihar Hrachyshka

> On 09 Jun 2016, at 02:45, Carl Baldwin  wrote:
> 
> On Thu, Jun 2, 2016 at 2:29 PM, Korzeniewski, Artur
>  wrote:
>> I would like to remind that agreed approach at Design Summit in Austin was,
>> that every new resource added to neutron should have OVO implemented. Please
>> comply, and core reviewers please take care of this requirements in patches
>> you review.
> 
> How about the networksegments table?  It was already a part of the ML2
> model but was moved out of ML2 to make it available for the OVN
> plugin.  Just days after the summit, it was made in to a first class
> resource [2] with its own CRUD operations.  Is this part of the model
> on your radar?  What needs to be done?
> 
> Since then, a relationship has been added between segment and subnet
> [3].  Also, a mapping to hosts has been added [4].  What needs to be
> done for OVO for these?  I'm sorry if these are slipping through the
> cracks but we're still learning.  There are a couple of other model
> tweaks in play on this topic too [5][6].  I'd like to begin doing
> these the correct way.

First, thanks a lot Carl for stepping in on segments.

Previously, we had a blocker that did not allow us to proceed with segments 
OVO, due to missing sorting/pagination support in objects API. As of [1], it’s 
supported.

I think we should start with the segment object itself and see where it leads us. 
As for relations to other resources, like subnet, we don’t implement them right 
away since e.g. the subnet object is not available at all. To start with, the object 
would for the most part reflect what’s already in the database model (plus code 
sugar to make objects easier to use).

So the proper order to get this bit converted to using objects would be:
- model a new object class for segments (there are lots of examples under 
neutron/objects/);
- cover segments with sorting/pagination tests to avoid potential regressions; 
example at [2];
- switch existing places where segment models are accessed to using the object; 
example at [3].

In the end, you won’t have any references to NetworkSegment model except in the 
object itself.

ATM I see the following places to convert:
- neutron/db/ipam_backend_mixin.py
- neutron/db/segments_db.py
- neutron/services/segments/db.py

Once we are there, we can see whether more segment related models can be 
considered for conversion.

Please ping me if lost.

[1]: https://review.openstack.org/#/c/300055/
[2]: https://review.openstack.org/#/c/327081/
[3]: https://review.openstack.org/#/c/300056/

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-09 Thread Matt Riedemann

On 6/9/2016 6:15 AM, Paul Michali wrote:



On Wed, Jun 8, 2016 at 11:21 PM Chris Friesen
> wrote:

On 06/03/2016 12:03 PM, Paul Michali wrote:
> Thanks for the link Tim!
>
> Right now, I have two things I'm unsure about...
>
> One is that I had 1945 huge pages left (of size 2048k) and tried
to create a VM
> with a small flavor (2GB), which should need 1024 pages, but Nova
indicated that
> it wasn't able to find a host (and QEMU reported an allocation issue).
>
> The other is that VMs are not being evenly distributed on my two
NUMA nodes, and
> instead, are getting created all on one NUMA node. Not sure if
that is expected
> (and setting mem_page_size to 2048 is the proper way).


Just in case you haven't figured out the problem...

Have you checked the per-host-numa-node 2MB huge page availability
on your host?
  If it's uneven then that might explain what you're seeing.
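
For reference, the per-NUMA-node 2MB page availability mentioned above can be
read from the standard Linux sysfs counters. A minimal sketch — the sysfs
paths are the standard kernel layout, but the helper function itself (and its
base-path parameter) is illustrative, not Nova code:

```python
import os


def node_hugepages(node, size_kb=2048, sysfs="/sys/devices/system/node"):
    """Return (total, free) huge pages of the given size for one NUMA node.

    Reads the standard Linux sysfs counters, e.g.
    /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    and the matching free_hugepages file.
    """
    base = os.path.join(sysfs, "node%d" % node,
                        "hugepages", "hugepages-%dkB" % size_kb)
    with open(os.path.join(base, "nr_hugepages")) as f:
        total = int(f.read())
    with open(os.path.join(base, "free_hugepages")) as f:
        free = int(f.read())
    return total, free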


These are the observations/questions I have:

1) On the host, I was seeing 32768 huge pages, of 2MB size. When I
created VMs (Cirros) using small flavor, each VM was getting created on
NUMA nodeid 0. When it hit half of the available pages, I could no
longer create any VMs (QEMU saying no space). I'd like to understand why
the assignment was always going to nodeid 0, and to confirm that the
huge pages are divided among the number of NUMA nodes available.

2) I changed mem_page_size from 1024 to 2048 in the flavor, and then
when VMs were created, they were being evenly assigned to the two NUMA
nodes. Each using 1024 huge pages. At this point I could create more
than half, but when there were 1945 pages left, it failed to create a
VM. Did it fail because the mem_page_size was 2048 and the available
pages were 1945, even though we were only requesting 1024 pages?

3) Related to #2, is there a relationship between mem_page_size, the
allocation of VMs to NUMA nodes, and the flavor size? IOW, if I use the
medium flavor (4GB), will I need a larger mem_page_size? (I'll play with
this variation, as soon as I can). Gets back to understanding how the
scheduling determines how to assign the VMs.
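
The back-of-the-envelope check behind questions 2 and 3 can be sketched as
follows — a hypothetical helper using the numbers from this thread, not the
actual Nova NUMA placement logic:

```python
def pages_needed(flavor_ram_mb, page_size_kb):
    """Huge pages a guest needs: flavor RAM divided by page size."""
    ram_kb = flavor_ram_mb * 1024
    # Round up in case the RAM is not a whole multiple of the page size.
    return -(-ram_kb // page_size_kb)


def fits(free_pages, flavor_ram_mb, page_size_kb=2048):
    """Whether one NUMA node's free huge page pool can host the guest."""
    return free_pages >= pages_needed(flavor_ram_mb, page_size_kb)
```

By this arithmetic a 2GB small flavor with 2MB pages needs 1024 pages, so a
node with 1945 free pages should fit it; if Nova still reports no valid host
in that situation, whatever limit is being hit would presumably be something
other than the raw per-node free-page count.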

4) When the VM create failed due to QEMU failing allocation, the VM went
to error state. I deleted the VM, but the neutron port was still there,
and there were no log messages indicating that a request was made to
delete the port. Is this expected (that the user would have to manually
clean up the port)?


When you hit this case, can you check if instance.host is set in the 
database before deleting the instance? I'm guessing what's happening is 
the instance didn't get assigned a host since it eventually ended up 
with NoValidHost, so when you go to delete it doesn't have a compute to 
send it to for delete, so it deletes from the compute API, and we don't 
have the host binding details to delete the port.


Although, when the spawn failed in the compute to begin with we should 
have deallocated any networking that was created before kicking back to 
the scheduler - unless we don't go back to the scheduler if the instance 
is set to ERROR state.


A bug report with a stacktrace of the failure scenario when the instance 
goes to the error state, plus the n-cpu logs, would probably help.




5) A coworker had hit the problem mentioned in #1, with exhaustion at
the halfway point. If she deletes a VM, and then changes the flavor to
change the mem_page_size to 2048, should Nova start assigning all new
VMs to the other NUMA node, until the pool of huge pages is down to
where the huge pages are for NUMA node 0, or will it alternate between
the available NUMA nodes (and run out when node 0's pool is exhausted)?

Thanks in advance!

PCM




Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-09 Thread Martin André
+1

On Thu, Jun 9, 2016 at 12:34 PM, Ryan Hallisey  wrote:
> +1
>
> On Jun 8, 2016, at 11:43 PM, Vikram Hosakote (vhosakot) 
> wrote:
>
> +1
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: "Swapnil Kulkarni (coolsvap)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Wednesday, June 8, 2016 at 8:54 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kolla] Request for changing the meeting time to
> 1600 UTC for all meetings
>
> Dear Kollagues,
>
> Some time ago we discussed the requirement of alternating meeting
> times for Kolla weekly meeting due to major contributors from
> kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
> implemented alternate US/APAC meeting times.
>
> With kolla-mesos not active anymore and looking at the current active
> contributors, I wish to reinstate the UTC 1600 time for all Kolla
> Weekly meetings.
>
> Please let me know your views.
>
> --
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Artur Svechnikov to the fuel-web-core team

2016-06-09 Thread Dmitry Klenov
Hi Folks,

From a technical standpoint I fully support Artur becoming a core reviewer. I
like the thorough reviews he does.

I do have one concern, though: the tasks planned for our team will not
allow Artur to spend more than 25-30% of his time on reviewing. If that
is acceptable, my concern is resolved.

Thanks,
Dmitry.

On Thu, Jun 9, 2016 at 12:57 PM, Sergey Vasilenko 
wrote:

> +1
>
>
> /sv
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]:VM doesn't get IP with VXLAN and OpenvSwitch

2016-06-09 Thread Kiruthiga R
Attached the logs from /var/log/messages

Regards,
Kiruthiga

From: Kiruthiga R
Sent: Thursday, June 9, 2016 5:43 PM
To: openstack-dev@lists.openstack.org
Subject: [Neutron]:VM doesn't get IP with VXLAN and OpenvSwitch

Hi Team,

I have a three-node OpenStack Kilo setup. The setup was working fine with a 
VXLAN tunnel and OVS version 2.3.1, but we have now changed OVS to v2.4.90 to 
support Service Function Chaining via NSH. After switching to OVS 2.4.90, 
newly created instances are not getting an IP address from DHCP.

I have posted the issue on the OpenStack Ask forum. Link to the post: 
https://ask.openstack.org/en/question/93281/vm-doesnt-get-ip-with-vxlan-and-openvswitch/

Any information would be of great help. Thanks in advance.

Thanks & Regards,
Kiruthiga

kvm: 1 guest now active
kernel: kvm [15999]: vcpu0 unhandled rdmsr: 0x1c9
kernel: kvm [15999]: vcpu0 unhandled rdmsr: 0x1a6
kernel: kvm [15999]: vcpu0 unhandled rdmsr: 0x1a7
kernel: kvm [15999]: vcpu0 unhandled rdmsr: 0x3f6
kvm: 0 guests now active
NetworkManager[1157]:   (qbrd154217a-2f): new Bridge device (carrier: 
OFF, driver: 'bridge', ifindex: 74)
NetworkManager[1157]:   (qvod154217a-2f): failed to find device 75 
'qvod154217a-2f' with udev
NetworkManager[1157]:   (qvod154217a-2f): new Veth device (carrier: OFF, 
driver: 'veth', ifindex: 75)
NetworkManager[1157]:   (qvbd154217a-2f): failed to find device 76 
'qvbd154217a-2f' with udev
NetworkManager[1157]:   (qvbd154217a-2f): new Veth device (carrier: OFF, 
driver: 'veth', ifindex: 76)
kernel: IPv6: ADDRCONF(NETDEV_UP): qvbd154217a-2f: link is not ready
kernel: device qvbd154217a-2f entered promiscuous mode
NetworkManager[1157]:   (qvod154217a-2f): link connected
NetworkManager[1157]:   (qvbd154217a-2f): link connected
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): qvbd154217a-2f: link becomes ready
kernel: device qvod154217a-2f entered promiscuous mode
NetworkManager[1157]:   (qbrd154217a-2f): device state change: unmanaged 
-> unavailable (reason 'connection-assumed') [10 20 41]
NetworkManager[1157]:   (qbrd154217a-2f): device state change: 
unavailable -> disconnected (reason 'none') [20 30 0]
kernel: qbrd154217a-2f: port 1(qvbd154217a-2f) entered forwarding state
kernel: qbrd154217a-2f: port 1(qvbd154217a-2f) entered forwarding state
NetworkManager[1157]:   (qbrd154217a-2f): bridge port qvbd154217a-2f was 
attached
NetworkManager[1157]:   (qvbd154217a-2f): enslaved to qbrd154217a-2f
NetworkManager[1157]:   (qbrd154217a-2f): link connected
NetworkManager[1157]:   ifcfg-rh: add connection in-memory 
(27c725f0-296c-44e6-b919-a63830477e19,"qbrd154217a-2f")
NetworkManager[1157]:   (qbrd154217a-2f): Activation: starting connection 
'qbrd154217a-2f' (27c725f0-296c-44e6-b919-a63830477e19)
NetworkManager[1157]:   ifcfg-rh: add connection in-memory 
(3dfe2bd5-417e-40f1-b5af-e85ca00a9b5a,"qvbd154217a-2f")
NetworkManager[1157]:   (qvbd154217a-2f): device state change: unmanaged 
-> unavailable (reason 'connection-assumed') [10 20 41]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: 
unavailable -> disconnected (reason 'connection-assumed') [20 30 41]
NetworkManager[1157]:   (qvbd154217a-2f): Activation: starting connection 
'qvbd154217a-2f' (3dfe2bd5-417e-40f1-b5af-e85ca00a9b5a)
NetworkManager[1157]:   (qbrd154217a-2f): device state change: 
disconnected -> prepare (reason 'none') [30 40 0]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: 
disconnected -> prepare (reason 'none') [30 40 0]
NetworkManager[1157]:   (qbrd154217a-2f): device state change: prepare -> 
config (reason 'none') [40 50 0]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: prepare -> 
config (reason 'none') [40 50 0]
NetworkManager[1157]:   (qbrd154217a-2f): device state change: config -> 
ip-config (reason 'none') [50 70 0]
NetworkManager[1157]:   (qbrd154217a-2f): device state change: ip-config 
-> ip-check (reason 'ip-config-unavailable') [70 80 5]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: config -> 
ip-config (reason 'none') [50 70 0]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: ip-config 
-> secondaries (reason 'none') [70 90 0]
NetworkManager[1157]:   (qbrd154217a-2f): device state change: ip-check 
-> secondaries (reason 'none') [80 90 0]
NetworkManager[1157]:   (qvbd154217a-2f): device state change: 
secondaries -> activated (reason 'none') [90 100 0]
NetworkManager[1157]:   (qvbd154217a-2f): Activation: successful, device 
activated.
dbus-daemon: dbus[989]: [system] Activating via systemd: service 
name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
dbus[989]: [system] Activating via systemd: service 
name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
systemd: Starting Network Manager Script Dispatcher Service...
NetworkManager[1157]:   (qbrd154217a-2f): device state change: 
secondaries -> activated (reason 'none') [90 100 0]
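Not an answer by itself, but when debugging this kind of regression it can help to check whether the neutron-openvswitch-agent managed to reprogram the tunnel bridge after the OVS upgrade (a freshly restarted `ovs-vswitchd` comes up with empty flow tables until the agent resyncs). Below is a rough, hypothetical sketch of scanning `ovs-ofctl dump-flows br-tun` output for the usual symptoms; the function names and heuristics are illustrative only, not part of any OpenStack tool:

```python
def has_tunnel_flows(dump: str) -> bool:
    """Return True if any flow still references a VXLAN tunnel id (tun_id).
    If no such flows exist after the upgrade, the agent never resynced."""
    return any("tun_id" in line for line in dump.splitlines())


def drop_only_tables(dump: str) -> list:
    """List table numbers whose matched action is a bare 'drop' -- a common
    symptom when traffic dies inside br-tun instead of reaching the VM."""
    tables = []
    for line in dump.splitlines():
        if "actions=drop" in line and "table=" in line:
            tables.append(int(line.split("table=")[1].split(",")[0]))
    return sorted(set(tables))


# Abbreviated sample of what `ovs-ofctl dump-flows br-tun` output looks like:
SAMPLE = (
    " cookie=0x0, table=0, priority=1,in_port=1 actions=resubmit(,2)\n"
    " cookie=0x0, table=2, priority=0 actions=drop\n"
    " cookie=0x0, table=4, priority=1,tun_id=0x40 actions=mod_vlan_vid:1\n"
)

print(has_tunnel_flows(SAMPLE))   # -> True
print(drop_only_tables(SAMPLE))   # -> [2]
```

If tunnel flows are missing entirely, restarting the neutron-openvswitch-agent and watching its log for flow-programming errors against the new OVS version is usually the next step.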

[openstack-dev] [Neutron]:VM doesn't get IP with VXLAN and OpenvSwitch

2016-06-09 Thread Kiruthiga R
Hi Team,

I have a three-node OpenStack Kilo setup. The setup was working fine with a 
VXLAN tunnel and OVS version 2.3.1, but we have now changed OVS to v2.4.90 to 
support Service Function Chaining via NSH. After switching to OVS 2.4.90, 
newly created instances are not getting an IP address from DHCP.

I have posted the issue on the OpenStack Ask forum. Link to the post: 
https://ask.openstack.org/en/question/93281/vm-doesnt-get-ip-with-vxlan-and-openvswitch/

Any information would be of great help. Thanks in advance.

Thanks & Regards,
Kiruthiga

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-09 Thread Paul Michali
On Wed, Jun 8, 2016 at 11:21 PM Chris Friesen 
wrote:

> On 06/03/2016 12:03 PM, Paul Michali wrote:
> > Thanks for the link Tim!
> >
> > Right now, I have two things I'm unsure about...
> >
> > One is that I had 1945 huge pages left (of size 2048k) and tried to
> create a VM
> > with a small flavor (2GB), which should need 1024 pages, but Nova
> indicated that
> > it wasn't able to find a host (and QEMU reported an allocation issue).
> >
> > The other is that VMs are not being evenly distributed on my two NUMA
> nodes, and
> > instead, are getting created all on one NUMA node. Not sure if that is
> expected
> > (and setting mem_page_size to 2048 is the proper way).
>
>
> Just in case you haven't figured out the problem...
>
> Have you checked the per-host-numa-node 2MB huge page availability on your
> host?
>   If it's uneven then that might explain what you're seeing.
>

These are the observations/questions I have:

1) On the host, I was seeing 32768 huge pages of 2MB size. When I created
VMs (Cirros) using the small flavor, each VM was getting created on NUMA
nodeid 0. When it hit half of the available pages, I could no longer create
any VMs (QEMU saying no space). I'd like to understand why the assignment was
always going to nodeid 0, and to confirm that the huge pages are divided
among the number of NUMA nodes available.

2) I changed mem_page_size from 1024 to 2048 in the flavor, and then when
VMs were created, they were being evenly assigned to the two NUMA nodes.
Each using 1024 huge pages. At this point I could create more than half,
but when there were 1945 pages left, it failed to create a VM. Did it fail
because the mem_page_size was 2048 and the available pages were 1945, even
though we were only requesting 1024 pages?

3) Related to #2, is there a relationship between mem_page_size, the
allocation of VMs to NUMA nodes, and the flavor size? IOW, if I use the
medium flavor (4GB), will I need a larger mem_page_size? (I'll play with
this variation as soon as I can.) This gets back to understanding how the
scheduler decides where to assign the VMs.

4) When the VM create failed due to QEMU failing allocation, the VM went
into an error state. I deleted the VM, but the neutron port was still there,
and there were no log messages indicating that a request was made to delete
the port. Is this expected (that the user would have to manually clean up
the port)?

5) A coworker had hit the problem mentioned in #1, with exhaustion at the
halfway point. If she deletes a VM and then changes the flavor to set
mem_page_size to 2048, will Nova start assigning all new VMs to the
other NUMA node until its pool of huge pages is down to the level of
NUMA node 0's, or will it alternate between the available NUMA
nodes (and run out when node 0's pool is exhausted)?
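For what it's worth, the arithmetic behind #1 and #2 can be sketched like this, assuming 2MB pages split evenly between two NUMA nodes (this is only the sizing math, not how nova-scheduler's NUMA fitting is actually implemented):

```python
PAGE_SIZE_MB = 2  # assumption: 2MB huge pages, as in the setup above


def pages_needed(flavor_ram_mb: int) -> int:
    """Huge pages a guest needs when its RAM is fully backed by 2MB pages."""
    return flavor_ram_mb // PAGE_SIZE_MB


def fits_on_node(free_pages_on_node: int, flavor_ram_mb: int) -> bool:
    """A single-NUMA-node guest must be backed entirely from one node's
    pool, so only that node's free pages matter, not the host-wide total."""
    return free_pages_on_node >= pages_needed(flavor_ram_mb)


total_pages = 32768
per_node = total_pages // 2            # 16384 pages per NUMA node

small_flavor_mb = 2048                 # m1.small: 2GB of RAM
print(pages_needed(small_flavor_mb))   # -> 1024 pages per guest

# Question 1: if every guest lands on node 0, its pool is exhausted after
# 16 guests -- exactly "half" of the host-wide page count.
print(per_node // pages_needed(small_flavor_mb))  # -> 16

# Question 2: 1945 free pages on a node is enough for another 1024-page
# guest, so a failure there suggests the remaining pages were split
# unevenly across nodes rather than all free on one node.
print(fits_on_node(1945, small_flavor_mb))  # -> True
```

If the per-node availability is the culprit, comparing `free_hugepages` under `/sys/devices/system/node/node*/hugepages/` on the host should confirm it.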

Thanks in advance!

PCM




> Chris
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

