[openstack-dev] [vitrage] gate-vitrage-dsvm-api FAILURE

2016-08-18 Thread Yujun Zhang
I submitted a simple patch [1] for an additional log message, but the CI job
keeps failing even after a recheck.

It seems some configuration files are missing, according to the console log
[2].

Is anybody else encountering the same issue?

```
2016-08-18 19:59:15.155 | ++ /opt/stack/new/vitrage/devstack/post_test_hook.sh:source:L31: sudo oslo-config-generator --config-file etc/config-generator.tempest.conf --output-file etc/tempest.conf
2016-08-18 19:59:15.401 | Traceback (most recent call last):
2016-08-18 19:59:15.403 |   File "/usr/local/bin/oslo-config-generator", line 11, in <module>
2016-08-18 19:59:15.404 |     sys.exit(main())
2016-08-18 19:59:15.406 |   File "/usr/local/lib/python2.7/dist-packages/oslo_config/generator.py", line 478, in main
2016-08-18 19:59:15.407 |     conf(args, version=version)
2016-08-18 19:59:15.408 |   File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2256, in __call__
2016-08-18 19:59:15.410 |     raise ConfigFilesNotFoundError(self._namespace._files_not_found)
2016-08-18 19:59:15.411 | oslo_config.cfg.ConfigFilesNotFoundError: Failed to find some config files: /opt/stack/new/tempest/etc/config-generator.tempest.conf
```
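
For reference, the file it fails to find is an oslo-config-generator input
file; those are small INI files along these lines (a generic illustration of
the format, not the actual vitrage/tempest file):

```
[DEFAULT]
output_file = etc/tempest.conf.sample
wrap_width = 79
namespace = tempest.config
```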


[1] https://review.openstack.org/#/c/357001/
[2]
http://logs.openstack.org/01/357001/1/check/gate-vitrage-dsvm-api/2e679a2/console.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 19 August

2016-08-18 Thread Lana Brindley
Hi everyone,

Watch out, Install Guide testing season is about to begin! If you're a package 
maintainer, cross-project liaison, or just a helpful testing regular, look out 
for an email from me real soon now! In the meantime, keep hitting those bugs, 
and letting us know if you spot any problems in the guide.

== Progress towards Newton ==

47 days to go!

Bugs closed so far: 371

Newton deliverables: 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release.

== Speciality Team Reports ==

'''HA Guide: Andrew Beekhof'''
Some reviews, nothing interesting

'''Install Tutorials: Lana Brindley'''
Core Guide testing to kick off Real Soon Now (tm). Next meeting: 30 August 
0600UTC.

'''Networking Guide: Edgar Magana'''
No report this week.

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
No report this week.

'''Ops Guide: Shilla Saebi, Darren Chan'''
Reorganized monitoring content in the Ops Guide
Migrating ops, legal, and security content in the Arch Guide
Working on use cases chapter and design chapter in the Arch Guide

'''API Guide: Anne Gentle'''
No report this week.

'''Config/CLI Ref: Tomoyuki Kato'''
A few cleanups are underway. Updated some client references.

'''Training labs: Pranav Salunke, Roger Luethi'''
No report this week.

'''Training Guides: Matjaz Pancur'''
No report this week.

'''Hypervisor Tuning Guide: Blair Bethwaite'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Rodrigo Caballero'''
No report this week.

== Site Stats ==

Just under 90,000 sessions on the docs site this week.

== Doc team meeting ==

The US meeting failed to make quorum this week. We're seeing this a lot 
recently; is there still a need for a regular docs meeting?

Next meetings:
APAC: Wednesday 24 August, 00:30 UTC
US: Wednesday 31 August, 19:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#19_August_2016
-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up, Doc? 19 August (Correction)

2016-08-18 Thread Lana Brindley
On 19/08/16 14:59, Lana Brindley wrote:



> 
> == Doc team meeting ==
> 
> The US meeting failed to make quorum this week. We're seeing this a lot 
> recently; is there still a need for a regular docs meeting?
> 

A correction: US meeting *did* go ahead this week, minutes are here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-08-17

Please excuse me while I try to work out how to use eavesdrop.



> Next meetings:
> APAC: Wednesday 24 August, 00:30 UTC
> US: Wednesday 31 August, 19:00 UTC
> 
> Please go ahead and add any agenda items to the meeting page here: 
> https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting
> 
> --
> 
> Keep on doc'ing!
> 
> Lana
> 
> https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#19_August_2016
> 

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium

2016-08-18 Thread Rui Zang

Could you give me the output of `ovs-vsctl show` on node-3?

On 8/19/2016 5:20 AM, Nikolas Hermanns wrote:

Hey,

Thanks for the answer. It might be that I did not fully understand the 
networking concept here.
The OVS on host node-3 is also controlled by OpenDaylight, and OpenDaylight 
sets up the external network as well. But it is still a flat network without 
segmentation. As far as I understand it, this is the port which connects 
node-3 with the external network, but networking-odl declines to bind this 
port from the beginning onwards.

That is my understanding, but I think I am not fully correct.

BR Nikolas


-Original Message-
From: Rui Zang [mailto:rui.z...@foxmail.com]
Sent: Thursday, August 18, 2016 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Vishal Thapar; Nikolas Hermanns; Michal Skalski; neutron-
d...@lists.opendaylight.org
Subject: Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka
together with ODL-Beryllium

Hi Nikolas,

First of all, neutron-...@lists.opendaylight.org (copied) might be a better
place to ask networking-odl questions.

It seems that the external network you described is not managed by
OpenDaylight, so the port binding failed.

You probably want to configure multiple mechanism drivers: say, if
physnet1 is connected via OVS br-xxx on node-3.domain.tld, you could run
the OVS agent on that host and configure bridge_mappings correctly. The
openvswitch mechanism driver would then succeed in binding the port.
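
As an illustration of that suggestion: the mapping goes in the OVS agent's
configuration on that host; br-ex here is an assumed bridge name, not one
taken from this deployment.

    [ovs]
    bridge_mappings = physnet1:br-ex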

Thanks,
Zang, Rui

On 8/17/2016 7:38 PM, Nikolas Hermanns wrote:

Hey Networking-ODL folks,

I just set up a Mirantis 9.0 release together with OpenDaylight Beryllium.

Using networking-odl v2, I constantly see this error:

2016-08-17 11:28:07.927 4040 ERROR neutron.plugins.ml2.managers [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'physnet1', 'id': u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Network topology element has failed binding port:
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology Traceback (most recent call last):
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology   File "/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 117, in bind_port
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology     port_context, vif_type, self._vif_details)
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology   File "/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", line 172, in bind_port
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology     raise ValueError('Unable to find any valid segment in given context.')
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology ValueError: Unable to find any valid segment in given context.
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
2016-08-17 11:28:07.938 4040 ERROR networking_odl.ml2.network_topology [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Unable to bind port element for given host and valid VIF types:
2016-08-17 11:28:07.939 4040 ERROR neutron.plugins.ml2.managers [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'physnet1', 'id': u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]

Looking at the code, I saw that you can only bind ports which have a valid
segmentation:

/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py(151)bind_port()

def bind_port(self, port_context, vif_type, vif_details):

    port_context_id = port_context.current['id']
    network_context_id = port_context.network.current['id']
    # Bind port to the first valid segment
    for segment in port_context.segments_to_bind:
        if self._is_valid_segment(segment): <---
            # Guest best VIF type for given host
            vif_details = self._get_vif_details(
                vif_details=vif_details, port_context_id=port_context_id,
                vif_type=vif_type)
            LOG.debug(
                'Bind port with valid segment:\n'
                '\tport: %(port)r\n'
                '\tnetwork: %(network)r\n'
                '\tsegment: %(segment)r\n'
                '\tVIF type: %(vif_type)r\n'
                '\tVIF details: %(vif_details)r',
                {'port': port_context_id,
                 'network': network_context_id,
                 'segment': segment, 
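
The quoted snippet is cut off above. For context, a reconstructed sketch of
the _is_valid_segment() check marked with the arrow (not verbatim
networking-odl code; the point is that 'flat' is absent from the accepted
network types, so a flat segment can never bind):

    # Reconstructed illustration of the segment check -- not the actual
    # networking-odl source.
    ALLOWED_NETWORK_TYPES = ('local', 'vlan', 'gre', 'vxlan')

    def _is_valid_segment(segment):
        # A segment like {'network_type': u'flat', ...} returns False here,
        # so bind_port() never finds a valid segment and raises ValueError.
        return segment.get('network_type') in ALLOWED_NETWORK_TYPES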

[openstack-dev] Re: [cinder] concerns on driver deprecation policy

2016-08-18 Thread Husheng (TommyLike, R IT Equipment Dept)
On Thu, Aug 18, 2016 at 15:36:00PM +, Sean McGinnis  
wrote:
> > Hi all,
> > Sorry for my absence from this week's IRC meeting and for putting this 
> > topic on the driver deprecation policy forward again. Actually I support 
> > the driver support tag policy completely; it's a reasonable policy for 
> > both sides. Below are my two concerns:

> > 1. With the release of the driver deprecation policy, we may leave a 
> > negative impression on our customers/operators, because we just send them 
> > a warning message while shipping unstable or unusable driver code. If I 
> > were the customer, I would probably not change my vendor over a few 
> > warning messages, so this is what may happen to the unlucky ones: they 
> > insist on their choice by setting enable_unsupported_driver = True and 
> > get stuck. This action makes them bear the risk themselves; maybe we can 
> > set up a guide for this situation rather than just announcing the 
> > possible result.

> I'm not actually clear what you mean here for setting up "a guide of this 
> situation". Can you clarify that?

Sorry for my poor English; let me make it clearer. "A guide of this 
situation" means setting up a guideline to help customers get out of the 
trouble caused by their insistence on an unstable driver. But now I think 
maybe this is redundant after your clear explanation below:

>
>Marking the driver as deprecated would then allow for some leeway (for 
>customers to angrily yell at their storage vendor to get their act together) 
>so that hopefully they would fix their CI issues and make sure they are 
>maintaining their driver. If they are able to turn things around we can then 
>remove that deprecation tag and unsupported flag and everything's back to 
>normal.
>

And on the second point, I'm not referring to the background but to the 
current idea itself. I have a few concerns about enhancing operability, and 
I will comment on them in the review [1].

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/355608/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Running Tempest tests for a customized cloud

2016-08-18 Thread Matthew Treinish
On Wed, Aug 17, 2016 at 12:27:29AM +, Elancheran Subramanian wrote:
> Hello Punal,
> We do support both V2 and V3; that’s just an example I stated, BTW. We do 
> have our own integration tests which pretty much cover all our integration 
> points with OpenStack. But we would like to leverage Tempest while doing 
> our upstream merges for OpenStack components in CI.
> 
> I believe the tests support an include list; how can I exclude tests? Any 
> pointer would be a great help.
> 

It depends on the test runner you're using. The tempest run command supports
several methods of excluding tests:

http://docs.openstack.org/developer/tempest/run.html#test-selection

If you use ostestr it offers the same options that tempest run offers:

http://docs.openstack.org/developer/os-testr/ostestr.html#test-selection

If you're using plain testr, it can take a regex filter that will run any
tests matching the regex:

http://testrepository.readthedocs.io/en/latest/MANUAL.html#running-tests

You can also use a negative lookahead in the regex to exclude something. This
is how tempest's tox jobs skip slow tests in the normal gate run jobs:

http://git.openstack.org/cgit/openstack/tempest/tree/tox.ini#n80
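
For illustration, a hedged sketch of that kind of negative-lookahead filter
(the exact expression lives in the tox.ini linked above and may differ):

    tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'

The lookahead rejects any test whose ID is tagged slow, while the trailing
group selects the api and scenario suites.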

Other test runners have similar selection mechanisms.

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Running Tempest tests for a customized cloud

2016-08-18 Thread Matthew Treinish
On Tue, Aug 16, 2016 at 10:40:07PM +, Elancheran Subramanian wrote:
> Hello There,
> I’m currently playing with using Tempest as our integration tests for our 
> internal and external clouds, and am facing some issues with APIs which are 
> not supported in our cloud. For example, listing domains isn’t supported 
> for any user; because of this, the V3 Identity tests are failing. So I 
> would like to know the best practice: fix those tests and apply the fix as 
> a patch, or just exclude those tests?
> 

It really depends on the configuration of your cloud. It could be a bug in
tempest or it could be a tempest configuration issue. You also could have
configured your cloud in a way that is invalid and breaks API end user
expectations from tempest's POV. It's hard to say without knowing the
specifics of your deployment.

I'd start with filing a bug with more details. You can file a bug here:

https://bugs.launchpad.net/tempest

If it's a valid tempest bug then submitting a patch to fix the bug in tempest is
the best path forward.

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Ocata design summit session ideas

2016-08-18 Thread Matt Riedemann
Early planning is starting for the Ocata summit so I've started an 
etherpad to dump ideas for sessions:


https://etherpad.openstack.org/p/ocata-nova-summit-ideas

The summit in Barcelona is 4 days, with ops and cross-project taking up 
Tuesday and most of Wednesday. The Friday meetups will be in the 
afternoon, which leaves Thursday and Friday morning for fishbowl 
sessions. I think we'll have 13 fishbowl sessions compared to the 18 we 
had in Austin. This is probably fine given we maybe had too many 
sessions in Austin. It also probably means we'll do just one 
unconference instead of two.


Ocata is also going to be a shorter release (sounds like maybe 4-4.5 
months instead of 6). So we're going to have a shorter runway to get 
things done.


Anyway, this is all very early, but I wanted to start this rolling 
because it will be on top of us before we know it.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-18 Thread Steven Dake (stdake)
Kolla Core Review Team:

I am nominating Eduardo for the core reviewer team.  His reviews are fantastic, 
as I'm sure most of you have seen after looking over the review queue.  His 30 
day stats place him at #3 by review count [1] and 60 day stats [2] at #4 by 
review count.  He is also first to review a significant amount of the time - 
which is impressive for someone new to Kolla.  He participates in IRC and he 
has done some nice code contribution as well [3] including the big chunk of 
work on enabling Senlin in Kolla, the dockerfile customizations work, as well 
as a few documentation fixes.  Eduardo is not affiliated with any particular 
company.  As a result he is not full time on Kolla like many of our other core 
reviewers.  The fact that he is part time and still doing fantastically well at 
reviewing is a great sign of things to come :)

Consider this nomination as my +1 vote.

Voting is open for 7 days until August 24th.  Joining the core review team 
requires a majority of the core review team to approve within a 1 week period 
with no veto (-1) votes.  If a veto or unanimous decision is reached prior to 
August 24th, voting will close early.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla/30
[2] http://stackalytics.com/report/contribution/kolla/60
[3] 
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-18 Thread Sagi Shnaidman
Hi,

we have a problem again with not enough memory in the HA jobs; all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/
I've created a patch that will increase it [1], but we need to increase it
right now on rh1.
I can't do it now, because unfortunately I won't be able to watch whether
it works and no problems appear.
TripleO CI cloud admins, please increase the memory for the baremetal flavor
on rh1 tomorrow (to 6144?).

Thanks

[1] https://review.openstack.org/#/c/357532/
-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-18 Thread Michał Jastrzębski
+1

On 18 August 2016 at 18:09, Steven Dake (stdake)  wrote:
> Kolla Core Review Team:
>
> I am nominating Eduardo for the core reviewer team.  His reviews are
> fantastic, as I'm sure most of you have seen after looking over the review
> queue.  His 30 day stats place him at #3 by review count [1] and 60 day
> stats [2] at #4 by review count.  He is also first to review a significant
> amount of the time – which is impressive for someone new to Kolla.  He
> participates in IRC and he has done some nice code contribution as well [3]
> including the big chunk of work on enabling Senlin in Kolla, the dockerfile
> customizations work, as well as a few documentation fixes.  Eduardo is not
> affiliated with any particular company.  As a result he is not full time on
> Kolla like many of our other core reviewers.  The fact that he is part time
> and still doing fantastically well at reviewing is a great sign of things to
> come :)
>
> Consider this nomination as my +1 vote.
>
> Voting is open for 7 days until August 24th.  Joining the core review team
> requires a majority of the core review team to approve within a 1 week
> period with no veto (-1) votes.  If a veto or unanimous decision is reached
> prior to August 24th, voting will close early.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/30
> [2] http://stackalytics.com/report/contribution/kolla/60
> [3]
> https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging-deb] - status of packaging directly within the OpenStack infra ?

2016-08-18 Thread Paul Belanger
On Thu, Aug 18, 2016 at 02:44:47PM +0200, Saverio Proto wrote:
> Hello,
> 
> I just subscribed this list, usually I am on the operators list.
> 
> I have been reading the archives of this list:
> http://lists.openstack.org/pipermail/openstack-dev/2016-June/097947.html
> 
> I am more than happy to hear that packages will be built in the
> Openstack infra. Are we going to be able to have packages built
> automatically for every gerrit review?
> 
Yes, that is possible. However, it is not set up currently. Today zigo is
just driving package builds manually. I expect we'll talk more about this in
Barcelona.

> As far as I understand, every operator has their own procedure to build
> packages for Debian/Ubuntu when an emergency patch is needed in a
> production system.
> I never managed to find somewhat official documentation on building
> packages.
> 
> Here at SWITCH for example we use this:
> https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm/tree/xenial
> This procedure used to work for our Kilo packages, but now it looks like
> something broke upstream.
> 
One of the goals I've been working towards is providing generic
infrastructure for packagers to use whichever tooling they are used to,
rather than forcing a tool onto the packager. Today zigo has a build process
in place that he is familiar with; however, I'd likely use different build
tools for openstack-infra packages. Which, to me, is perfectly fine.

> Next week in NYC there will be the Openstack Ops Midcycle
> https://etherpad.openstack.org/p/NYC-ops-meetup
> 
> There is a session about Ubuntu packages, looks like also Corey Bryant
> will be there.
> 
> Is anyone coming to give an update about the packaging directly within the
> OpenStack infra ?
> 
I wish I could; I'd be more than happy to give an update on the work on the
openstack-infra side.

> Thank you
> 
> Saverio
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matt Riedemann

On 8/18/2016 3:44 PM, Michael Still wrote:

On Fri, Aug 19, 2016 at 1:00 AM, Matt Riedemann wrote:

It's that time of year again to talk about killing this job, at
least from the integrated gate (move it to experimental for people
that care about postgresql, or make it gating on a smaller subset of
projects like oslo.db).

The postgresql job used to have three interesting things about it:

1. It ran keystone with eventlet (which is no longer a thing).
2. It runs the n-api-meta service rather than using config drive.
3. It uses postgresql for the database.

So #1 is gone, and for #3, according to the April 2016 user survey
(page 40) [1], 4% of reporting deployments are using it in production.

I don't think we're running n-api-meta in any other integrated gate
jobs, but I'm pretty sure there is at least one neutron job out
there that's running with it that way. We could also consider making
the nova-net dsvm full gate job run n-api-meta, or vice-versa with
the neutron dsvm full gate job.


We do now have functional testing of the metadata server though, so
perhaps that counts as coverage here?


We also have to consider that with HP public cloud gone as a
node provider we've got fewer test nodes to run with, so we have to
make tough decisions about which jobs we're going to run in the
integrated gate.

I'm bringing this up again because Nova has a few more jobs it would
like to make voting on its repo (neutron LB and live migration, at
least in the check queue), but there are concerns about adding yet
more jobs that each change has to get through before it's merged,
which means if anything goes wrong in any of those we can have a 24
hour turnaround on getting an approved change back through the gate.

[1]
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf



Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



My comment about coverage was confusing. The metadata API is served 
through n-api by default, the only difference with the PG job was it was 
running the metadata API separately in the n-api-meta service. So it's 
not like we aren't running it elsewhere, but we were forcing config 
drive on everything in the gate by default.


Tempest has tests for both metadata and config drive either way, so I 
think we're covered.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Design Summit Space Needs

2016-08-18 Thread Sean McGinnis
Hey team,

I need to let folks know what we would like for rooms in Barcelona. This
is always the tricky part of guesstimating what we will need by then.
I'll bring this up in next week's meeting to discuss, but I wanted to
get it out there so everyone could start thinking about it now.

In Austin Cinder had four fishbowls, five working rooms, and the day
long Friday contributor meetup.

Things are going to be a little more compressed in Barcelona. This
should probably be OK given where things are, plus Ocata will be a
shorter than normal cycle.

There are also more projects competing for space.

Everyone is restricted to just the last half day afternoon for the
contributor meetup. If you haven't already booked your travel, please
try to stick around until Saturday so we have good attendance.

Last time we spent a lot of time on "educational" sessions to make sure
everyone knew the status of some of the ongoing "works in progress"
like OVO and HA. I think we'll need less time for that this time around. 

Just to throw something out there to go off of, I'm thinking 3 fishbowls
and 4 working rooms.

Feel free to respond with thoughts one way or another. I'll try to pull
together everything so we can discuss it at higher bandwidth next Wednesday
in the meeting.

Thanks!
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Ton Ngo

We have had numerous discussions on this topic, including a presentation and
a design session in Tokyo, but we have not really arrived at a consensus yet.
Part of the problem is that auto-scaling at the container level is still
being developed, so it is still a moving target.
However, a few points did emerge from the discussion (not necessarily
consensus):
- It's preferable to have a single point of decision on auto-scaling for
  both the container and infrastructure level.
- One approach is to make this decision at the container orchestration
  level, so the infrastructure level would just provide the service to
  handle requests to scale the infrastructure. This would require
  coordinating support with upstream like Kubernetes. This approach also
  means that we don't want a major component in Magnum to drive
  auto-scaling.
- It's good to have a policy-driven mechanism for auto-scaling to handle
  complex scenarios. For this, Senlin is a candidate; upstream is another
  potential choice.
We may want to revisit this topic as a design session at the next summit.
Ton Ngo,



From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   08/18/2016 12:26 PM
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?





> -Original Message-
> From: hie...@vn.fujitsu.com [mailto:hie...@vn.fujitsu.com]
> Sent: August-18-16 3:57 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Magnum] Next auto-scaling feature design?
>
> Hi Magnum folks,
>
> I have some interest in our auto-scaling features and am currently
> testing with some container monitoring solutions such as Heapster,
> Telegraf and Prometheus. I have seen the PoC session done in
> cooperation with Senlin in Austin and have some questions regarding
> this design:
> - We have decided to move all container management from Magnum to Zun,
> so is there only one level of scaling (node) instead of both node and
> container?
> - The PoC design shows that Magnum (the Magnum Scaler) needs to depend
> on Heat/Ceilometer for gathering metrics and does the scaling work
> based on auto-scaling policies, but is Heat/Ceilometer the best choice
> for Magnum auto-scaling?
>
> Currently, Magnum only sends CPU and memory metrics to Ceilometer, and
> Heat can grab these to decide the right scaling method. IMO, this
> approach has some problems; please take a look and give feedback:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle
> complex scaling policies. For example:
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%? The policies conflict.
> There are some WIP patch sets for Heat conditional logic in [1]. But
> IMO, the conditional logic of Heat also cannot resolve conflicting
> scaling policies. For example:
> If CPU > 80% and Mem > 70% then scale out
> If CPU < 30% or Mem < 50% then scale in
> -> What if CPU = 90% and Mem = 30%?
> Thus, I think that we need to implement a Magnum scaler for validating
> policy conflicts.
> - Ceilometer may have trouble if we deploy thousands of COEs.
>
> I think we need a new design for the auto-scaling feature, not for
> Magnum only but also for Zun (because container-level scaling may be
> forked to Zun too). Here are some ideas:
> 1. Add a new field enable_monitor to the cluster template (ex-baymodel)
> and show the monitoring URL when cluster (bay) creation completes. For
> example, we can use Prometheus as the monitoring container for each
> cluster. (Heapster is the best choice for k8s, but not good enough for
> Swarm or Mesos.)

[Hongbin Lu] Personally, I think this is a good idea.

> 2. Create a Magnum scaler manager (maybe a new service):
> - Monitor clusters that have enable_monitor set and send metrics to
> Ceilometer if needed.
> - Manage user-defined scaling policies: not only CPU and memory but
> also other metrics like network bandwidth and CCU.
> - Validate user-defined scaling policies and trigger Heat for scaling
> actions (it can trigger nova-scheduler for more scaling options).
> - This needs a highly scalable architecture; as a first step we can
> implement a simple validator method, but in the future there are other
> approaches such as using fuzzy logic or AI to make an appropriate
> decision.

[Hongbin Lu] I think this is a valid requirement, but I wonder why you want
it in Magnum. However, if you have a valid reason to do that, you can
create a custom bay driver. You can add logic to the custom driver to
retrieve metrics from the monitoring URL and send them to Ceilometer.
Users can pass a scaling policy via "labels" when they create the bay. The
custom driver is responsible for validating the policy and triggering the
action based on that. Does that satisfy your requirement?

>
> Some use cases for operators:
> - I want to create a k8s cluster, and if CCU or network bandwidth is
> high, scale out X nodes in other regions.

Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium

2016-08-18 Thread Nikolas Hermanns
Hey,

Thanks for the answer. It might be that I did not fully understand the 
networking concept here. 
The OVS on host node-3 is also controlled by OpenDaylight, and OpenDaylight 
sets up the external network as well. But it is still a flat network without 
segmentation. As far as I understand it, this is the port which connects 
node-3 with the external network, but networking-odl declines to bind this 
port from the beginning onwards.

That is my understanding, but I think I am not fully correct.

BR Nikolas

> -Original Message-
> From: Rui Zang [mailto:rui.z...@foxmail.com]
> Sent: Thursday, August 18, 2016 9:23 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Vishal Thapar; Nikolas Hermanns; Michal Skalski; neutron-
> d...@lists.opendaylight.org
> Subject: Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka
> together with ODL-Beryllium
> 
> Hi Nikolas,
> 
> First of all, neutron-...@lists.opendaylight.org (copied) might be a better
> place to ask networking-odl questions.
> 
> It seems that the external network you described is not managed by
> OpenDaylight, so the port binding failed.
> 
> You probably want to configure multiple mechanism drivers: say, if
> physnet1 is connected via OVS br-xxx on node-3.domain.tld, you could run
> the OVS agent on that host and configure bridge_mappings correctly. The
> openvswitch mechanism driver would then succeed in binding the port.
> 
> Thanks,
> Zang, Rui
> 
> On 8/17/2016 7:38 PM, Nikolas Hermanns wrote:
> > Hey Networking-ODL folks,
> >
> > I just set up a Mirantis 9.0 release together with OpenDaylight Beryllium.
> > Using networking-odl v2, I constantly see this error:
> > 2016-08-17 11:28:07.927 4040 ERROR neutron.plugins.ml2.managers
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind
> > port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld
> > for vnic_type normal using segments [{'segmentation_id': None,
> > 'physical_network': u'physnet1', 'id':
> > u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology [req-7e834676-81b4-479b-ad45-
> fa39f0fabed3 - - - - -] Network topology element has failed binding port:
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology Traceback (most recent call last):
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology   File "/usr/local/lib/python2.7/dist-
> packages/networking_odl/ml2/network_topology.py", line 117, in bind_port
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology port_context, vif_type,
> self._vif_details)
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology   File "/usr/local/lib/python2.7/dist-
> packages/networking_odl/ml2/ovsdb_topology.py", line 172, in bind_port
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology raise ValueError('Unable to find
> any valid segment in given context.')
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology ValueError: Unable to find any valid
> segment in given context.
> > 2016-08-17 11:28:07.937 4040 ERROR
> networking_odl.ml2.network_topology
> > 2016-08-17 11:28:07.938 4040 ERROR
> networking_odl.ml2.network_topology [req-7e834676-81b4-479b-ad45-
> fa39f0fabed3 - - - - -] Unable to bind port element for given host and valid 
> VIF
> types:
> > 2016-08-17 11:28:07.939 4040 ERROR neutron.plugins.ml2.managers
> > [req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind
> > port faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld
> > for vnic_type normal using segments [{'segmentation_id': None,
> > 'physical_network': u'physnet1', 'id':
> > u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': u'flat'}]
> >
> > Looking at the code, I saw that you can only bind ports which have a valid
> > segmentation:
> > /usr/local/lib/python2.7/dist-
> packages/networking_odl/ml2/ovsdb_topology.py(151)bind_port()
> > def bind_port(self, port_context, vif_type, vif_details):
> >
> > port_context_id = port_context.current['id']
> > network_context_id = port_context.network.current['id']
> > # Bind port to the first valid segment
> > for segment in port_context.segments_to_bind:
> > if self._is_valid_segment(segment): <---
> > # Guest best VIF type for given host
> > vif_details = self._get_vif_details(
> > vif_details=vif_details, 
> > port_context_id=port_context_id,
> > vif_type=vif_type)
> > LOG.debug(
> > 'Bind port with valid segment:\n'
> > '\tport: %(port)r\n'
> > '\tnetwork: %(network)r\n'
> > '\tsegment: %(segment)r\n'
> > '\tVIF type: %(vif_type)r\n'
> > '\tVIF details: %(vif_details)r',
> >

Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Sean Dague
On 08/18/2016 02:22 PM, Sean Dague wrote:
> On 08/18/2016 11:00 AM, Matt Riedemann wrote:
>> It's that time of year again to talk about killing this job, at least
>> from the integrated gate (move it to experimental for people that care
>> about postgresql, or make it gating on a smaller subset of projects like
>> oslo.db).
>>
>> The postgresql job used to have three interesting things about it:
>>
>> 1. It ran keystone with eventlet (which is no longer a thing).
>> 2. It runs the n-api-meta service rather than using config drive.
>> 3. It uses postgresql for the database.
>>
>> So #1 is gone, and for #3, according to the April 2016 user survey (page
>> 40) [1], 4% of reporting deployments are using it in production.
>>
>> I don't think we're running n-api-meta in any other integrated gate
>> jobs, but I'm pretty sure there is at least one neutron job out there
>> that's running with it that way. We could also consider making the
>> nova-net dsvm full gate job run n-api-meta, or vice-versa with the
>> neutron dsvm full gate job.
>>
>> We also have to consider that with HP public cloud gone as a node
>> provider we've got fewer test nodes to run with, so we have to make
>> tough decisions about which jobs we're going to run in the integrated gate.
>>
>> I'm bringing this up again because Nova has a few more jobs it would
>> like to make voting on its repo (neutron LB and live migration, at
>> least in the check queue), but there are concerns about adding yet more
>> jobs that each change has to get through before it's merged, which means
>> if anything goes wrong in any of those we can have a 24 hour turnaround
>> on getting an approved change back through the gate.
>>
>> [1]
>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> 
> +1.
> 
> Postgresql in the gate hasn't provided any real value in a long time
> (tempest just really can't tickle the differences between the dbs,
> especially as projects put much better input validation in place).
> During icehouse the job was even accidentally running mysql for 6 weeks,
> and no one noticed.

Devstack Default change proposed - https://review.openstack.org/#/c/357446/
Devstack Gate default change proposed -
https://review.openstack.org/#/c/357443/
Project-config change proposed - https://review.openstack.org/#/c/357444

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hold off on pushing new patches for config option cleanup

2016-08-18 Thread Sean Dague
On 08/18/2016 04:46 PM, Michael Still wrote:
> We're still ok with merging existing ones though?

Mostly we'd like to conserve the CI time now. It's about 14.5 node-hours
to run CI on these patches (probably about 9 node-hours in the gate).
With ~800 nodes, every patch represents 1.5% of our CI resources (per
hour) to pass through. There are a ton of these patches up there, so
even just landing the gating ones consumes resources that could go towards
other more critical fixes / features.

I think the theory is these are fine to merge post freeze / milestone 3,
when the CI should have cooled down a bit and there is more head room.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hold off on pushing new patches for config option cleanup

2016-08-18 Thread Michael Still
We're still ok with merging existing ones though?

Michael

On Fri, Aug 19, 2016 at 5:18 AM, Jay Pipes  wrote:

> Roger that.
>
> On 08/18/2016 11:48 AM, Matt Riedemann wrote:
>
>> We have a lot of open changes for the centralize / cleanup config option
>> work:
>>
>> https://review.openstack.org/#/q/status:open+project:opensta
>> ck/nova+branch:master+topic:bp/centralize-config-options-newton
>>
>>
>> We said at the midcycle we'd allow those past non-priority feature
>> freeze because they are docs changes, but with the gate being backed up
>> every day let's hold off on pushing new changes for this series, at
>> least until after feature freeze on 9/2. These changes run all of the
>> test jobs so they do take a tool on the CI system, which is hurting
>> other functional things from landing before feature freeze.
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Michael Still
On Fri, Aug 19, 2016 at 1:00 AM, Matt Riedemann 
wrote:

> It's that time of year again to talk about killing this job, at least from
> the integrated gate (move it to experimental for people that care about
> postgresql, or make it gating on a smaller subset of projects like oslo.db).
>
> The postgresql job used to have three interesting things about it:
>
> 1. It ran keystone with eventlet (which is no longer a thing).
> 2. It runs the n-api-meta service rather than using config drive.
> 3. It uses postgresql for the database.
>
> So #1 is gone, and for #3, according to the April 2016 user survey (page
> 40) [1], 4% of reporting deployments are using it in production.
>
> I don't think we're running n-api-meta in any other integrated gate jobs,
> but I'm pretty sure there is at least one neutron job out there that's
> running with it that way. We could also consider making the nova-net dsvm
> full gate job run n-api-meta, or vice-versa with the neutron dsvm full gate
> job.
>

We do now have functional testing of the metadata server though, so perhaps
that counts as coverage here?


> We also have to consider that with HP public cloud gone as a node
> provider we've got fewer test nodes to run with, so we have to make tough
> decisions about which jobs we're going to run in the integrated gate.
>
> I'm bringing this up again because Nova has a few more jobs it would like
> to make voting on its repo (neutron LB and live migration, at least in the
> check queue), but there are concerns about adding yet more jobs that each
> change has to get through before it's merged, which means if anything goes
> wrong in any of those we can have a 24 hour turnaround on getting an
> approved change back through the gate.
>
> [1] https://www.openstack.org/assets/survey/April-2016-User-Surv
> ey-Report.pdf


Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] Any guidelines for naming heat resource type names?

2016-08-18 Thread Praveen Yalagandula
Tatiana,
Thanks for filing the bug and putting in a patch!
Cheers,
Praveen

On Wed, Aug 17, 2016 at 1:45 AM Tatiana Ovtchinnikova <
t.v.ovtchinnik...@gmail.com> wrote:

> Filed a bug: https://bugs.launchpad.net/horizon/+bug/1614000
>
>
> 2016-08-17 11:19 GMT+03:00 Tatiana Ovtchinnikova <
> t.v.ovtchinnik...@gmail.com>:
>
>> It is definitely a bug in Horizon. The additional columns
>> "Implementation", "Component" and "Resource" are representative for a
>> limited resource type group only. Since Heat allows to specify a URL as a
>> resource type, we should not use these columns at all. "Type" column and
>> filter will do just the same trick.
>>
>> Tatiana.
>>
>>
>> 2016-08-17 1:22 GMT+03:00 Rob Cresswell :
>>
>>> This sounds like a bug on the Horizon side. There is/was a patch
>>> regarding a similar issue with LBaaS v2 resources too. It's likely just an
>>> incorrect assumption in the logic processing these names.
>>>
>>> Rob
>>>
>>> On 16 Aug 2016 11:03 p.m., "Zane Bitter"  wrote:
>>>
 On 16/08/16 17:43, Praveen Yalagandula wrote:
 > Hi all,
 > We have developed some heat resources for our custom API server. We
 > followed the instructions in the development guide
 > at http://docs.openstack.org/developer/heat/pluginguide.html and got
 > everything working. However, the Horizon "Resource Types" panel is
 > returning a 500 error with "TemplateSyntaxError: u'resource'" message.
 >
 > Upon further debugging, we found that Horizon is expecting all Heat
 > resource type names to be of the form
 > <Implementation>::<Component>::<Resource>. However, we didn't see this
 > requirement in the heat development documents. Some of our resource
 > types have just two words (e.g., "Avi::Pool"). Heat itself didn't care
 > about these names at all.

 Given that Heat has a REST API specifically for validating templates,
 it's surprising that Horizon would implement its own, apparently
 incorrect, validation.

 > Question: Is there a general consensus to enforce the
 > <Implementation>::<Component>::<Resource> format for type names?

 No. We don't really care what you call them (although using OS:: or
 AWS:: as a prefix for your own custom types would be very unwise). In
 fact, Heat allows you to specify a URL of a template file as a resource
 type, and it sounds like that might run afoul of the same restriction.

 > If so, can we please update the heat plugin guide to reflect this?
 > If not, we can file it is as a bug in Horizon.

 +1 for bug in Horizon.

 - ZB


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
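
For readers unfamiliar with the plugin mechanism under discussion, a minimal
sketch of a custom resource registration (an illustrative stub, not Avi's
actual plugin; the class body is elided to a no-op):

    from heat.engine import resource


    class Pool(resource.Resource):
        """Stand-in for a custom pool resource."""

        def handle_create(self):
            pass


    def resource_mapping():
        # Two-segment names like this are legal as far as Heat is
        # concerned; it was Horizon that assumed three segments.
        return {'Avi::Pool': Pool}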
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Sean Dague
On 08/18/2016 03:31 PM, Matthew Thode wrote:
> On 08/18/2016 01:50 PM, Matt Riedemann wrote:
>> On 8/18/2016 1:18 PM, Matthew Thode wrote:
>>> Perhaps a better option would be to get oslo.db to run cross-project
>>> checks like we do in requirements.  That way the right team is covering
>>> the usage of postgres and we still have coverage while still lowering
>>> gate load for most projects.
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> I don't see the value in this unless there are projects that have
>> pg-specific code in them. The reason we have cross-project unit test
>> jobs for reqs changes is requirements changes in upper-constraints can
>> break and wedge the gate for a project, or multiple projects. E.g.
>> removing something in a backward incompatible way, or the project with
>> the unit test is mocking something out poorly (like we've seen lately
>> with nova and python-neutronclient releases).
>>
> 
> That makes sense, just improving the oslo.db test coverage for postgres
> (if that's even necessary) would be good.  The only other thing I'd like
> to see (and it may already be done) is to have pg upgrade test coverage,
> aka, I don't want to hit that keystone bug again :P  But that's a
> different conversation.

That's entirely doable inside the project. We do that in nova in our
unit tests. The important thing there is to not just run the schema
upgrades, but do them with some representative data in the tables.
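
A hedged sketch of that pattern (not nova's actual test code): seed
representative rows, run the schema change, then check the rows survived.

    # Illustrative only; real walk-the-migrations tests are more elaborate,
    # and the gate used mysql/postgresql rather than sqlite.
    import sqlalchemy as sa

    engine = sa.create_engine('sqlite://')
    with engine.begin() as conn:
        # Old schema plus representative data.
        conn.execute(sa.text('CREATE TABLE instances (id INTEGER, name TEXT)'))
        conn.execute(sa.text("INSERT INTO instances VALUES (1, 'vm-1')"))

        # The "upgrade": with real rows present, a bad default or type
        # change fails here instead of silently passing on empty tables.
        conn.execute(sa.text(
            "ALTER TABLE instances ADD COLUMN state TEXT DEFAULT 'active'"))

        # Existing rows picked up the default.
        assert conn.execute(
            sa.text('SELECT state FROM instances')).fetchone()[0] == 'active'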

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance][Meeting time] Possible meeting time change: feedback appreciated

2016-08-18 Thread Dina Belova
Hey, OpenStackers!

recently I've gotten lots of comments about the current time our Performance
Team meetings are held. Right now we're having them at 16:00 UTC on Tuesdays
(9:00 PST) in the #openstack-performance IRC channel, and this time slot is
not that comfortable for some of the US folks due to internal daily
meetings.

The question is: can we move our weekly meeting to 17:30 UTC (10:30 PST)?
It's a bit late for folks from Moscow (20:30), so I'd like to collect more
feedback.

Please leave your comments.

Cheers,
Dina

-- 
*Dina Belova*
*Senior Software Engineer*
Mirantis, Inc.
525 Almanor Avenue, 4th Floor
Sunnyvale, CA 94085

*Phone: 650-772-8418Email: dbel...@mirantis.com *
www.mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matthew Thode
On 08/18/2016 01:50 PM, Matt Riedemann wrote:
> On 8/18/2016 1:18 PM, Matthew Thode wrote:
>> Perhaps a better option would be to get oslo.db to run cross-project
>> checks like we do in requirements.  That way the right team is covering
>> the usage of postgres and we still have coverage while still lowering
>> gate load for most projects.
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> I don't see the value in this unless there are projects that have
> pg-specific code in them. The reason we have cross-project unit test
> jobs for reqs changes is requirements changes in upper-constraints can
> break and wedge the gate for a project, or multiple projects. E.g.
> removing something in a backward incompatible way, or the project with
> the unit test is mocking something out poorly (like we've seen lately
> with nova and python-neutronclient releases).
> 

That makes sense, just improving the oslo.db test coverage for postgres
(if that's even necessary) would be good.  The only other thing I'd like
to see (and it may already be done) is to have pg upgrade test coverage,
aka, I don't want to hit that keystone bug again :P  But that's a
different conversation.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Hongbin Lu


> -Original Message-
> From: hie...@vn.fujitsu.com [mailto:hie...@vn.fujitsu.com]
> Sent: August-18-16 3:57 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Magnum] Next auto-scaling feature design?
> 
> Hi Magnum folks,
> 
> I have some interest in our auto-scaling features and am currently
> testing with some container monitoring solutions such as Heapster,
> Telegraf and Prometheus. I have seen the PoC session done in
> cooperation with Senlin in Austin and have some questions regarding
> this design:
> - We have decided to move all container management from Magnum to Zun,
> so is there only one level of scaling (node) instead of both node and
> container?
> - The PoC design shows that Magnum (the Magnum Scaler) needs to depend
> on Heat/Ceilometer for gathering metrics and does the scaling work
> based on auto-scaling policies, but is Heat/Ceilometer the best choice
> for Magnum auto-scaling?
> 
> Currently, Magnum only sends CPU and memory metrics to Ceilometer, and
> Heat can grab these to decide the right scaling method. IMO, this
> approach has some problems; please take a look and give feedback:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle
> complex scaling policies. For example:
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%? The policies conflict.
> There are some WIP patch sets for Heat conditional logic in [1]. But
> IMO, the conditional logic of Heat also cannot resolve conflicting
> scaling policies. For example:
> If CPU > 80% and Mem > 70% then scale out
> If CPU < 30% or Mem < 50% then scale in
> -> What if CPU = 90% and Mem = 30%?
> Thus, I think that we need to implement a Magnum scaler for validating
> policy conflicts.
> - Ceilometer may have trouble if we deploy thousands of COEs.
> 
> I think we need a new design for the auto-scaling feature, not for
> Magnum only but also for Zun (because container-level scaling may be
> forked to Zun too). Here are some ideas:
> 1. Add a new field enable_monitor to the cluster template (ex-baymodel)
> and show the monitoring URL when cluster (bay) creation completes. For
> example, we can use Prometheus as the monitoring container for each
> cluster. (Heapster is the best choice for k8s, but not good enough for
> Swarm or Mesos.)

[Hongbin Lu] Personally, I think this is a good idea.

> 2. Create a Magnum scaler manager (maybe a new service):
> - Monitor clusters that have enable_monitor set and send metrics to
> Ceilometer if needed.
> - Manage user-defined scaling policies: not only CPU and memory but
> also other metrics like network bandwidth and CCU.
> - Validate user-defined scaling policies and trigger Heat for scaling
> actions (it can trigger nova-scheduler for more scaling options).
> - This needs a highly scalable architecture; as a first step we can
> implement a simple validator method, but in the future there are other
> approaches such as using fuzzy logic or AI to make an appropriate
> decision.

[Hongbin Lu] I think this is a valid requirement, but I wonder why you want it 
in Magnum. However, if you have a valid reason to do that, you can create a 
custom bay driver. You can add logic to the custom driver to retrieve metrics 
from the monitoring URL and send them to Ceilometer. Users can pass a scaling 
policy via "labels" when they create the bay. The custom driver is responsible 
for validating the policy and triggering the action based on it. Does that 
satisfy your requirement?
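
To make the policy-conflict problem above concrete, here is a minimal Python
sketch of the kind of validation a Magnum scaler could perform; the policy
format and the brute-force check are illustrative assumptions, not Magnum code.

```python
# Minimal sketch of scaling-policy conflict detection. A policy here is
# assumed to be {'metric', 'op', 'threshold', 'action'} (illustrative).
import itertools

def matches(policy, metrics):
    value = metrics[policy['metric']]
    return value > policy['threshold'] if policy['op'] == '>' \
        else value < policy['threshold']

def conflicts(p1, p2):
    """Two policies conflict if some metric values can simultaneously
    trigger one scale-out and one scale-in rule."""
    if p1['action'] == p2['action']:
        return False
    # Sample the metric space instead of solving symbolically.
    for cpu, mem in itertools.product(range(0, 101, 5), repeat=2):
        metrics = {'cpu': cpu, 'mem': mem}
        if matches(p1, metrics) and matches(p2, metrics):
            return True
    return False

out = {'metric': 'cpu', 'op': '>', 'threshold': 80, 'action': 'scale_out'}
in_ = {'metric': 'mem', 'op': '<', 'threshold': 40, 'action': 'scale_in'}
print(conflicts(out, in_))  # True: e.g. cpu=90, mem=30 triggers both
```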

> 
> Some use cases for operators:
> - I want to create a k8s cluster, and if CCU or network bandwidth is
> high, please scale out X nodes in other regions.
> - I want to create a swarm cluster, and if CPU or memory usage is too high,
> please scale out X nodes to keep total CPU and memory usage at about 50%.
> 
> What do you think about these above ideas/problems?
> 
> [1]. https://blueprints.launchpad.net/heat/+spec/support-conditions-
> function
> 
> Thanks,
> Hieu LE.
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hold off on pushing new patches for config option cleanup

2016-08-18 Thread Jay Pipes

Roger that.

On 08/18/2016 11:48 AM, Matt Riedemann wrote:

We have a lot of open changes for the centralize / cleanup config option
work:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options-newton


We said at the midcycle we'd allow those past non-priority feature
freeze because they are docs changes, but with the gate being backed up
every day let's hold off on pushing new changes for this series, at
least until after feature freeze on 9/2. These changes run all of the
test jobs so they do take a tool on the CI system, which is hurting
other functional things from landing before feature freeze.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matt Riedemann

On 8/18/2016 1:18 PM, Matthew Thode wrote:

Perhaps a better option would be to get oslo.db to run cross-project
checks like we do in requirements.  That way the right team is covering
the usage of postgres and we still have coverage while lowering
gate load for most projects.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't see the value in this unless there are projects that have 
pg-specific code in them. The reason we have cross-project unit test 
jobs for reqs changes is requirements changes in upper-constraints can 
break and wedge the gate for a project, or multiple project. E.g. 
removing something in a backward incompatible way, or the project with 
the unit test is mocking something out poorly (like we've seen lately 
with nova and python-neutronclient releases).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Hold off on pushing new patches for config option cleanup

2016-08-18 Thread Matt Riedemann
We have a lot of open changes for the centralize / cleanup config option 
work:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options-newton

We said at the midcycle we'd allow those past non-priority feature 
freeze because they are docs changes, but with the gate being backed up 
every day let's hold off on pushing new changes for this series, at 
least until after feature freeze on 9/2. These changes run all of the 
test jobs so they do take a toll on the CI system, which is keeping 
other functional changes from landing before feature freeze.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with Ceilometer Kafka Publisher.

2016-08-18 Thread gordon chung
it's fine from what i can tell, although i don't see the udp publishing you 
mentioned... and you are using a common pollster so i'm not sure what the vsg 
meter is. based on what you pasted, it's probably the kafka publisher.

On 18/08/16 01:37 AM, Raghunath D wrote:
Hi Gordon,

Did you get a chance to look at pipeline.yaml pasted to 
http://paste.openstack.org/show/558688/
Could you please provide some pointer how to resolve this issue.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-Raghunath D  wrote: -
To: openstack-dev@lists.openstack.org
From: Raghunath D 
Date: 08/17/2016 09:30AM
Subject: Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with 
Ceilometer Kafka Publisher.

Hi Gordon,

As suggested I pasted pipeline.yaml content to paste.openstack.org

http://paste.openstack.org/show/558688/

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website:  http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-gordon chung  wrote: -
To: "openstack-dev@lists.openstack.org"
From: gordon chung 
Date: 08/16/2016 07:51PM
Subject: Re: [openstack-dev] FW: [Ceilometer]:Duplicate messages with 
Ceilometer Kafka Publisher.


what does your pipeline.yaml look like? maybe paste it to paste.openstack.org. 
i imagine it's correct if your udp publishing works as expected.

On 16/08/16 04:04 AM, Raghunath D wrote:
Hi Simon,

I have two openstack setup's one with kilo and other with mitaka.

Please find details of kafka version's below:
Kilo kafka client library Version:1.3.1
Mitaka kafka client library Version:1.3.1
Kafka server version on both kilo and mitaka: kafka_2.11-0.9.0.0.tgz

One observation is on kilo openstack setup I didn't see duplicate message 
issue,while on mitaka
setup I am experiencing this issue.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto:  
raghunat...@tcs.com
Website:  http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-
From: Simon Pasquier [mailto:spasqu...@mirantis.com]
Sent: Thursday, August 11, 2016 2:34 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Ceilometer]:Duplicate messages with Ceilometer 
Kafka Publisher.

Hi
Which version of Kafka do you use?
BR
Simon

On Thu, Aug 11, 2016 at 10:13 AM, Raghunath D 
<raghunat...@tcs.com> 
wrote:
Hi ,

  We are injecting events into our custom plugin in ceilometer.
  The ceilometer pipeline.yaml is configured to publish these events over 
kafka and udp; we consume these samples using kafka and udp clients.

  KAFKA publisher:
  ---
  When the events are sent continuously, we can see duplicate messages 
received in the kafka client.
  From the log it seems the ceilometer kafka publisher failed to send 
messages, but these messages are still received by the kafka server. So when 
kafka resends these failed messages, we see duplicates in the kafka client.
  Please find the attached log for reference.
  Is this a known issue?
  Is there any workaround for this issue?

  UDP publisher:
  No duplicate-message issue is seen here; it is working as expected.
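
Since the publisher here effectively gives at-least-once delivery (the broker
received the message even though the send was reported as failed), one common
workaround is consumer-side deduplication keyed on the sample's message_id.
A minimal sketch, assuming the kafka-python client and JSON-encoded sample
payloads:

```python
# Consumer-side dedup sketch (assumes kafka-python, and that each
# payload is a JSON-encoded ceilometer sample carrying a message_id).
import json
from kafka import KafkaConsumer

def process(sample):
    # Hypothetical downstream handler.
    print(sample.get('counter_name'), sample.get('counter_volume'))

consumer = KafkaConsumer('ceilometer', bootstrap_servers='localhost:9092')
seen = set()  # bound this (e.g. with an LRU) in a real deployment

for record in consumer:
    sample = json.loads(record.value)
    if sample.get('message_id') in seen:
        continue  # duplicate caused by a publisher retry; drop it
    seen.add(sample.get('message_id'))
    process(sample)
```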



With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto:  
raghunat...@tcs.com
Website:  http://www.tcs.com

Experience certainty.IT Services
   Business Solutions
   Consulting



Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-08-18 13:18:05 -0500:
> On 08/18/2016 10:00 AM, Matt Riedemann wrote:
> > It's that time of year again to talk about killing this job, at least
> > from the integrated gate (move it to experimental for people that care
> > about postgresql, or make it gating on a smaller subset of projects like
> > oslo.db).
> > 
> > The postgresql job used to have three interesting things about it:
> > 
> > 1. It ran keystone with eventlet (which is no longer a thing).
> > 2. It runs the n-api-meta service rather than using config drive.
> > 3. It uses postgresql for the database.
> > 
> > So #1 is gone, and for #3, according to the April 2016 user survey (page
> > 40) [1], 4% of reporting deployments are using it in production.
> > 
> > I don't think we're running n-api-meta in any other integrated gate
> > jobs, but I'm pretty sure there is at least one neutron job out there
> > that's running with it that way. We could also consider making the
> > nova-net dsvm full gate job run n-api-meta, or vice-versa with the
> > neutron dsvm full gate job.
> > 
> > We also have to consider that with HP public cloud being gone as a node
> > provider and we've got fewer test nodes to run with, we have to make
> > tough decisions about which jobs we're going to run in the integrated gate.
> > 
> > I'm bringing this up again because Nova has a few more jobs it would
> > like to make voting on its repo (neutron LB and live migration, at
> > least in the check queue) but there are concerns about adding yet more
> > jobs that each change has to get through before it's merged, which means
> > if anything goes wrong in any of those we can have a 24 hour turnaround
> > on getting an approved change back through the gate.
> > 
> > [1]
> > https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> > 
> 
> Perhaps a better option would be to get oslo.db to run cross-project
> checks like we do in requirements.  That way the right team is covering
> the usage of postgres and we still have coverage while lowering
> gate load for most projects.

I would support functional test jobs, but if a project that uses
oslo.db isn't going to also have the postgresql tests in place then
something checked in there could possibly break the ability to land
patches in oslo.db, so I don't think a one-directional "cross" project
test job is a good idea.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telemetry] Barcelona summit space needs

2016-08-18 Thread gordon chung
i'm cool with smaller space requirements. as the summit is basically all 
marketing driven now, i don't see any increase in devs being sent to Barcelona.

do we even want the contributors meetup? based on the last few summits, a good 
chunk of people just choose to fly out on the last day, so basically no one is 
around (or they are all over at Nova discussing something?)

On 18/08/16 08:48 AM, Julien Danjou wrote:

Hey team,

Like other teams, we need to decide how many fishbowl, workroom, etc we
need.

Last time in Austin we had:
- 2 fishbowls
- 7 workroom
- Half a day of a contributors meetup (which IIRC got more or less
  cancelled?)

Many of the sessions were empty and short, so I'd suggest limiting
ourselves to something like:
- 2 fishbowls
- 3 workrooms
- half a day of contributors meetup

WDYT?





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matt Riedemann

On 8/18/2016 12:14 PM, Matthew Treinish wrote:

On Thu, Aug 18, 2016 at 11:33:59AM -0500, Matthew Thode wrote:

On 08/18/2016 10:00 AM, Matt Riedemann wrote:

It's that time of year again to talk about killing this job, at least
from the integrated gate (move it to experimental for people that care
about postgresql, or make it gating on a smaller subset of projects like
oslo.db).

The postgresql job used to have three interesting things about it:

1. It ran keystone with eventlet (which is no longer a thing).
2. It runs the n-api-meta service rather than using config drive.
3. It uses postgresql for the database.

So #1 is gone, and for #3, according to the April 2016 user survey (page
40) [1], 4% of reporting deployments are using it in production.

I don't think we're running n-api-meta in any other integrated gate
jobs, but I'm pretty sure there is at least one neutron job out there
that's running with it that way. We could also consider making the
nova-net dsvm full gate job run n-api-meta, or vice-versa with the
neutron dsvm full gate job.

We also have to consider that with HP public cloud being gone as a node
provider and we've got fewer test nodes to run with, we have to make
tough decisions about which jobs we're going to run in the integrated gate.

I'm bringing this up again because Nova has a few more jobs it would
like to make voting on its repo (neutron LB and live migration, at
least in the check queue) but there are concerns about adding yet more
jobs that each change has to get through before it's merged, which means
if anything goes wrong in any of those we can have a 24 hour turnaround
on getting an approved change back through the gate.

[1]
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf




I don't know about nova, but at least in keystone when I was testing
upgrades I found an error that had to be fixed before release of Mitaka.
 Guess I'm part of the 4% :(


That's not what we're talking about here. Your issue most likely stemmed from
keystone's lack of tests that do DB migrations with real data. The proposal here
is not about stopping all testing on postgres, just removing the postgres
dsvm tempest jobs from the integrated gate. Those jobs have very limited
additional value for the reasons Matt outlined. They also clearly did not catch
your upgrade issue, and most (if not all) of the other postgres issues are
caught with more targeted testing of the DB layer done in the project repos.

-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Good point. Another good point that came up in IRC was that the pg job 
made more sense in the integrated gate for testing postgresql as a DB 
backend before we had oslo.db in all of the projects. Because back in 
the day we were all just getting the DB API code from oslo-incubator and 
it had some PG-specific conditionals in it, but that's all abstracted in 
oslo.db now which everyone should be using, or at least moving to since 
oslo-incubator is EOL.


So definitely gating with a PG backend for oslo.db is a good thing to 
still do, it just makes less sense for *everything* else that's running 
with the integrated gate jobs.


Another thing that came up is that even if we start running n-api-meta 
in other jobs (or all jobs), Tempest will still test forcing an instance 
to boot with config drive here:


https://github.com/openstack/tempest/blob/6b1df9fdf43b298d029e032fe8a737548218c1bf/tempest/scenario/test_server_basic_ops.py#L134

That's config-driven but it's the default in Tempest so it's what we'd 
be using in the gate.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Sean Dague
On 08/18/2016 11:00 AM, Matt Riedemann wrote:
> It's that time of year again to talk about killing this job, at least
> from the integrated gate (move it to experimental for people that care
> about postgresql, or make it gating on a smaller subset of projects like
> oslo.db).
> 
> The postgresql job used to have three interesting things about it:
> 
> 1. It ran keystone with eventlet (which is no longer a thing).
> 2. It runs the n-api-meta service rather than using config drive.
> 3. It uses postgresql for the database.
> 
> So #1 is gone, and for #3, according to the April 2016 user survey (page
> 40) [1], 4% of reporting deployments are using it in production.
> 
> I don't think we're running n-api-meta in any other integrated gate
> jobs, but I'm pretty sure there is at least one neutron job out there
> that's running with it that way. We could also consider making the
> nova-net dsvm full gate job run n-api-meta, or vice-versa with the
> neutron dsvm full gate job.
> 
> We also have to consider that with HP public cloud being gone as a node
> provider and we've got fewer test nodes to run with, we have to make
> tough decisions about which jobs we're going to run in the integrated gate.
> 
> I'm bringing this up again because Nova has a few more jobs it would
> like to make voting on its repo (neutron LB and live migration, at
> least in the check queue) but there are concerns about adding yet more
> jobs that each change has to get through before it's merged, which means
> if anything goes wrong in any of those we can have a 24 hour turnaround
> on getting an approved change back through the gate.
> 
> [1]
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf

+1.

Postgresql in the gate hasn't provided any real value in a long time
(tempest just really can't tickle the differences between the dbs,
especially as projects put much better input validation in place).
During icehouse the job was even accidentally running mysql for 6 weeks,
and no one noticed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][nova] python-novaclient 5.1.0 release (newton)

2016-08-18 Thread no-reply
We are stoked to announce the release of:

python-novaclient 5.1.0: Client library for OpenStack Compute API

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-novaclient

With package available at:

https://pypi.python.org/pypi/python-novaclient

Please report issues through launchpad:

https://bugs.launchpad.net/python-novaclient

For more details, please see below.

5.1.0
^^^^^


New Features


* Added microversion v2.33 that adds pagination support for
  hypervisors with the help of new optional parameters 'limit' and
  'marker' which were added to hypervisor-list command.

* Added microversion v2.35 that adds pagination support for keypairs
  with the help of new optional parameters 'limit' and 'marker' which
  were added to keypair-list command.
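
A short sketch of driving the new pagination from the Python API; the
credentials, endpoint and page size are illustrative:

```python
# Paging hypervisors with microversion 2.33 (illustrative credentials).
from novaclient import client

nova = client.Client('2.33', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')
marker = None
while True:
    page = nova.hypervisors.list(marker=marker, limit=100)
    if not page:
        break
    for hv in page:
        print(hv.hypervisor_hostname)
    marker = page[-1].id  # resume from the last item of this page
```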


Upgrade Notes
*************

* Support for microversion 2.34 added.

* The 2.36 microversion deprecated the image proxy API. As such, CLI
  calls now directly call the image service to get image details, for
  example, as a convenience to boot a server with an image name rather
  than the image id. To do this the following is assumed:

  1. There is an **image** entry in the service catalog.

  2. The image v2 API is available.

* The 2.36 microversion deprecated the network proxy APIs in Nova.
  Because of this we now go directly to neutron for name to net-id
  lookups. For nova-net deployments the old proxies will continue to
  be used.

  To do this the following is assumed:

  1. There is a **network** entry in the service catalog.

  2. The network v2 API is available.
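
The name-to-net-id lookup now done against neutron can be approximated as
follows; this is an illustrative sketch using python-neutronclient, with
made-up credentials:

```python
# Roughly the name -> net-id resolution done post-2.36 (illustrative).
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://controller:5000/v2.0')
nets = neutron.list_networks(name='private')['networks']
net_id = nets[0]['id'] if nets else None
print(net_id)
```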

Changes in python-novaclient 5.0.0..5.1.0
-

95ad95e Use glanceclient for functional tests
0f6f369 Skip nova-network-only tests if using Neutron
56ee584 Make wait_for_server_os_boot wait longer
05bfe1f Handle successful response in console functional tests
f5b19b3 Use neutron for network name -> id resolution
f839cf1 Look up image names directly in glance
1adabce Move other-requirements.txt to bindep.txt
04613ef Make novaclient functional tests use pretty tox
1d2a20d Updated from global requirements
232711c Added smaller flavors for novaclient functional tests to use
4d971af Split nic parsing out of _boot method
35057e1 Fix boot --nic error message for v2.32 boundary
baeaf60 Updated from global requirements
d166968 Microversion 2.35 adds keypairs pagination support
fae24ee Modify flatten method to display an empty dict
5694a9a remove start_version arg for keypairs v2.10 shell
e78cc20 Added support for microversion 2.34
6bbcedb Add support for microversion 2.33
454350f Fix python35 job failures
adbfdd0 Refactor test_servers APIVersion setup
48da89e Updated from global requirements
2bdccb1 Updated from global requirements
6b11a1c Remove discover from test-requirements
85ef6f4 Change all test URLs to use example.com
6edb751 Fix deprecation message for --volume-service-name


Diffstat (except docs and test files)
-

bindep.txt |  24 ++
novaclient/__init__.py |   2 +-
novaclient/base.py |  28 +-
novaclient/shell.py|   3 +-
.../functional/v2/legacy/test_readonly_nova.py |   2 +
.../functional/v2/legacy/test_virtual_interface.py |   2 +
novaclient/utils.py|   9 +-
novaclient/v2/client.py|  21 ++
novaclient/v2/hypervisors.py   |  31 +-
novaclient/v2/images.py|  43 +++
novaclient/v2/keypairs.py  |  25 +-
novaclient/v2/networks.py  |  33 ++
novaclient/v2/shell.py | 230 +
other-requirements.txt |  23 --
.../notes/microversion-v2_33-10d12ea3b25839e8.yaml |   5 +
.../notes/microversion-v2_34-a9c5601811152964.yaml |   3 +
.../notes/microversion-v2_35-537619a43278fbb5.yaml |   5 +
.../notes/no-glance-proxy-5c13001a4b13e8ce.yaml|  10 +
.../notes/no-neutron-proxy-18fd54febe939a6b.yaml   |  12 +
requirements.txt   |   4 +-
test-requirements.txt  |   6 +-
tox.ini|   4 +-
35 files changed, 926 insertions(+), 349 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c2b23f2..e9fd28f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6 # Apache-2.0
-keystoneauth1>=2.7.0 # Apache-2.0
+keystoneauth1>=2.10.0 # Apache-2.0
@@ -9 +9 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b63d28f..34c9e20 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +7,0 @@ 

Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matthew Thode
On 08/18/2016 10:00 AM, Matt Riedemann wrote:
> It's that time of year again to talk about killing this job, at least
> from the integrated gate (move it to experimental for people that care
> about postgresql, or make it gating on a smaller subset of projects like
> oslo.db).
> 
> The postgresql job used to have three interesting things about it:
> 
> 1. It ran keystone with eventlet (which is no longer a thing).
> 2. It runs the n-api-meta service rather than using config drive.
> 3. It uses postgresql for the database.
> 
> So #1 is gone, and for #3, according to the April 2016 user survey (page
> 40) [1], 4% of reporting deployments are using it in production.
> 
> I don't think we're running n-api-meta in any other integrated gate
> jobs, but I'm pretty sure there is at least one neutron job out there
> that's running with it that way. We could also consider making the
> nova-net dsvm full gate job run n-api-meta, or vice-versa with the
> neutron dsvm full gate job.
> 
> We also have to consider that with HP public cloud being gone as a node
> provider and we've got fewer test nodes to run with, we have to make
> tough decisions about which jobs we're going to run in the integrated gate.
> 
> I'm bringing this up again because Nova has a few more jobs it would
> like to make voting on its repo (neutron LB and live migration, at
> least in the check queue) but there are concerns about adding yet more
> jobs that each change has to get through before it's merged, which means
> if anything goes wrong in any of those we can have a 24 hour turnaround
> on getting an approved change back through the gate.
> 
> [1]
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> 

Perhaps a better option would be to get oslo.db to run cross-project
checks like we do in requirements.  That way the right team is covering
the usage of postgres and we still have coverage while lowering
gate load for most projects.

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.i18n 3.9.0 release (newton)

2016-08-18 Thread no-reply
We are grateful to announce the release of:

oslo.i18n 3.9.0: Oslo i18n library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.i18n

With package available at:

https://pypi.python.org/pypi/oslo.i18n

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

For more details, please see below.

Changes in oslo.i18n 3.8.0..3.9.0
-

757a654 Updated from global requirements
69cf173 Fix parameters of assertEqual are misplaced


Diffstat (except docs and test files)
-

test-requirements.txt   |  2 +-
3 files changed, 17 insertions(+), 17 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 80bdbbb..c372f96 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ coverage>=3.6 # Apache-2.0
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.8.0 release (newton)

2016-08-18 Thread no-reply
We are frolicsome to announce the release of:

oslo.messaging 5.8.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 5.7.0..5.8.0
--

a55c974 Clean outdated docstring and comment
a011cb2 Add docstring for get_notification_transport
ee8fff0 Updated from global requirements
20a07e7 [zmq] Implement retries for unacknowledged CASTs


Diffstat (except docs and test files)
-

oslo_messaging/_drivers/amqp.py|   7 +-
.../publishers/dealer/zmq_dealer_publisher_base.py |  16 +-
.../dealer/zmq_dealer_publisher_direct.py  |  17 +-
.../dealer/zmq_dealer_publisher_proxy.py   |   7 +-
.../client/publishers/zmq_publisher_base.py|   6 +
.../_drivers/zmq_driver/client/zmq_ack_manager.py  | 111 +
.../_drivers/zmq_driver/client/zmq_client.py   |  29 +++-
.../_drivers/zmq_driver/client/zmq_client_base.py  |   7 +
.../_drivers/zmq_driver/client/zmq_receivers.py|  70 ++--
.../_drivers/zmq_driver/client/zmq_response.py |  50 --
.../zmq_driver/client/zmq_routing_table.py |   4 +-
.../_drivers/zmq_driver/client/zmq_senders.py  |  67 ++--
.../_drivers/zmq_driver/matchmaker/base.py |  32 ++--
.../_drivers/zmq_driver/poller/green_poller.py |   5 +
.../_drivers/zmq_driver/poller/threading_poller.py |   4 +
.../server/consumers/zmq_dealer_consumer.py|  74 ++---
.../server/consumers/zmq_router_consumer.py|  20 ++-
.../server/consumers/zmq_sub_consumer.py   |   3 +-
.../zmq_driver/server/zmq_incoming_message.py  |  27 +--
.../_drivers/zmq_driver/server/zmq_ttl_cache.py|  79 +
oslo_messaging/_drivers/zmq_driver/zmq_async.py|   9 +
oslo_messaging/_drivers/zmq_driver/zmq_names.py|   1 -
oslo_messaging/_drivers/zmq_driver/zmq_options.py  |  31 
oslo_messaging/_drivers/zmq_driver/zmq_poller.py   |   7 +
oslo_messaging/notify/notifier.py  |  25 +++
requirements.txt   |   2 +-
32 files changed, 890 insertions(+), 174 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index f58bf6b..95e0b29 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
+oslo.context>=2.6.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.middleware 3.18.0 release (newton)

2016-08-18 Thread no-reply
We are glowing to announce the release of:

oslo.middleware 3.18.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.17.0..3.18.0
-

0e98f3c Updated from global requirements
d541df1 Fixed typo in SSL
736270c Fix parameters of assertEqual are misplaced


Diffstat (except docs and test files)
-

oslo_middleware/ssl.py   |   2 +-
requirements.txt |   2 +-
7 files changed, 131 insertions(+), 130 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 0c6af5d..a51ac77 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context>=2.6.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.config 3.16.0 release (newton)

2016-08-18 Thread no-reply
We are overjoyed to announce the release of:

oslo.config 3.16.0: Oslo Configuration API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

3.16.0
^^^^^^

New Features

* Integer and Float now support *min*, *max* and *choices*. Choices
  must respect *min* and *max* (if provided).

* Added Port type as an Integer in the closed interval [0, 65535].
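
A small sketch of the new option types in use (assumes oslo.config >= 3.16.0;
the option names are illustrative):

```python
from oslo_config import cfg, types

opts = [
    # Integer with a closed range; out-of-range values are rejected
    # at parse time.
    cfg.IntOpt('workers', min=1, max=32, default=4,
               help='Number of worker processes.'),
    # Port is an Integer constrained to [0, 65535].
    cfg.Opt('bind_port', type=types.Port(), default=8080,
            help='TCP port to listen on.'),
]

cfg.CONF.register_opts(opts)
```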

Changes in oslo.config 3.15.0..3.16.0
-

8f1c750 Fix: default value of prog should remove extension
14ec2c6 Add Port type to allow configuration of a list of tcp/ip ports
ae8e56b Add set_override method test with ListOpt


Diffstat (except docs and test files)
-

oslo_config/cfg.py |  40 ++---
oslo_config/types.py   | 101 +++--
.../notes/add-port_type-8704295c6a56265d.yaml  |   5 +
5 files changed, 368 insertions(+), 72 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.db 4.12.0 release (newton)

2016-08-18 Thread no-reply
We are glad to announce the release of:

oslo.db 4.12.0: Oslo Database library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

4.12.0
^^^^^^

Bug Fixes

* Decorator "oslo_db.api.wrap_db_retry" now defaults to 10 retries.
  Previously the number of attempts was 0, and users had to explicitly
  pass "max_retry_interval" value greater than 0 to actually enable
  retries on errors.
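
A sketch of the decorator in question; the decorated function is illustrative:

```python
from oslo_db import api as oslo_db_api

# With oslo.db 4.12.0 this retries up to 10 times by default on
# deadlock; older releases effectively made 0 retry attempts unless
# retry arguments were passed explicitly.
@oslo_db_api.wrap_db_retry(retry_on_deadlock=True)
def update_quota(session, project_id, delta):
    pass  # illustrative DB work
```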

Changes in oslo.db 4.11.0..4.12.0
-

2862b18 Updated from global requirements
89532b3 Link enginefacade to test database provisioning
af27831 Display full reason for backend not available
89bc44d Updated from global requirements
93a0467 Deprecate argument sqlite_db in method set_defaults
73b435f Add test helpers to enginefacade
8e2e97c Add logging_name to enginefacade config
c1735c5 Fix parameters of assertEqual are misplaced
4e4ec2d release notes: mention changes in wrap_db_retry()


Diffstat (except docs and test files)
-

oslo_db/options.py |   4 +
oslo_db/sqlalchemy/enginefacade.py |  90 +++-
oslo_db/sqlalchemy/provision.py| 156 -
oslo_db/sqlalchemy/test_base.py|  21 ++-
.../notes/wrap_db_retry-34c7ff2d82afa3f5.yaml  |   6 +
requirements.txt   |   2 +-
19 files changed, 655 insertions(+), 167 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5bfff26..74c885d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context!=2.6.0,>=2.4.0 # Apache-2.0
+oslo.context>=2.6.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.cache 1.13.0 release (newton)

2016-08-18 Thread no-reply
We are mirthful to announce the release of:

oslo.cache 1.13.0: Cache storage for Openstack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.12.0..1.13.0


6e9f722 Updated from global requirements


Diffstat (except docs and test files)
-

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2f4ebb9..49915bf 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ six>=1.9.0 # MIT
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-18 Thread Matthew Thode
On 08/17/2016 03:52 PM, Nick Papadonis wrote:
> Comments
> 
> 
> Glance worked for me in Mitaka.  I had to specify 'chunked transfers'
> and increase the size limit to 5GB.  I had to pull some of the WSGI
> source from glance and alter it slightly to call from Apache.
> 
> I saw that Nova claims mod_wsgi is 'experimental'.  Interested in whether
> it's really experimental or folks use it in production.
> 
> Nick 

I haven't found any docs on getting mod_wsgi working for glance; do you
happen to have a link?

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.concurrency 3.14.0 release (newton)

2016-08-18 Thread no-reply
We are excited to announce the release of:

oslo.concurrency 3.14.0: Oslo Concurrency library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 3.13.0..3.14.0
--

ebd3921 Updated from global requirements
3c46e8f Fix external lock tests on Windows


Diffstat (except docs and test files)
-

requirements.txt  |  2 +-
2 files changed, 56 insertions(+), 43 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 81d8537..14727e5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ iso8601>=0.1.11 # MIT
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.context 2.9.0 release (newton)

2016-08-18 Thread no-reply
We are high-spirited to announce the release of:

oslo.context 2.9.0: Oslo Context library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.context

With package available at:

https://pypi.python.org/pypi/oslo.context

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

For more details, please see below.

Changes in oslo.context 2.8.0..2.9.0


1bbe016 Updated from global requirements
10fd6fd Fix parameters of assertEqual are misplaced
3a118fa Manually specify from_dict parameters
9e6c924 Emit deprecation warnings when positional args passed
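
A short sketch of the positional-argument change; the context values are
illustrative:

```python
from oslo_context import context

# With oslo.context 2.9.0, positional RequestContext arguments emit a
# DeprecationWarning, so pass everything by keyword.
ctx = context.RequestContext(user='demo', tenant='demo-project',
                             is_admin=False)
print(ctx.to_dict()['user'])
```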


Diffstat (except docs and test files)
-

oslo_context/context.py| 30 -
requirements.txt   |  2 ++
test-requirements.txt  |  1 +
4 files changed, 76 insertions(+), 11 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 95d0fe8..99569a8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,0 +6,2 @@ pbr>=1.6 # Apache-2.0
+
+positional>=1.0.1 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 79fca04..830e499 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4,0 +5 @@
+fixtures>=3.0.0 # Apache-2.0/BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] taskflow 2.6.0 release (newton)

2016-08-18 Thread no-reply
We are satisfied to announce the release of:

taskflow 2.6.0: Taskflow structured state management library.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 2.5.0..2.6.0


3530178 Start to add a location for contributed useful tasks/flows/more


Diffstat (except docs and test files)
-

taskflow/contrib/__init__.py | 0
1 file changed, 0 insertions(+), 0 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.log 3.15.0 release (newton)

2016-08-18 Thread no-reply
We are eager to announce the release of:

oslo.log 3.15.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.14.0..3.15.0
--

caf5aa2 Fixes unit tests on Windows


Diffstat (except docs and test files)
-

1 file changed, 7 insertions(+), 7 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.versionedobjects 1.16.0 release (newton)

2016-08-18 Thread no-reply
We are delighted to announce the release of:

oslo.versionedobjects 1.16.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.15.0..1.16.0
---

ba1bf1d Updated from global requirements
0146734 Fix to_json_schema() call


Diffstat (except docs and test files)
-

oslo_versionedobjects/base.py   | 4 ++--
requirements.txt| 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index ca11536..8e986e4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context>=2.6.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslotest 2.9.0 release (newton)

2016-08-18 Thread no-reply
We are joyful to announce the release of:

oslotest 2.9.0: Oslo test framework

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.8.0..2.9.0


619ee74 Updated from global requirements


Diffstat (except docs and test files)
-

requirements.txt  | 2 +-
test-requirements.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9c881de..53b93d9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13 +13 @@ mox3>=0.7.0 # Apache-2.0
-os-client-config>=1.13.1 # Apache-2.0
+os-client-config!=1.19.0,>=1.13.1 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 8282002..b00d5a2 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] futurist 0.18.0 release (newton)

2016-08-18 Thread no-reply
We are grateful to announce the release of:

futurist 0.18.0: Useful additions to futures, from the future.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/futurist

With package available at:

https://pypi.python.org/pypi/futurist

Please report issues through launchpad:

http://bugs.launchpad.net/futurist

For more details, please see below.

Changes in futurist 0.17.0..0.18.0
--

4650a2b Eliminate unneccessary patching in GreenFuture


Diffstat (except docs and test files)
-

futurist/_futures.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] tooz 1.43.0 release (newton)

2016-08-18 Thread no-reply
We are joyful to announce the release of:

tooz 1.43.0: Coordination library for distributed systems.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.42.0..1.43.0
--

18e31db Makedirs only throws oserror, so only catch that
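
The fix above narrows the exception handling around os.makedirs; the
resulting pattern looks roughly like this (an illustrative sketch, not the
tooz source):

```python
import errno
import os

def ensure_dir(path):
    try:
        os.makedirs(path)
    except OSError as e:  # makedirs only raises OSError
        if e.errno != errno.EEXIST:
            raise
```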


Diffstat (except docs and test files)
-

tooz/utils.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][glance] glance_store 0.17.0 release (newton)

2016-08-18 Thread no-reply
We are amped to announce the release of:

glance_store 0.17.0: OpenStack Image Service Store Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/glance_store

With package available at:

https://pypi.python.org/pypi/glance_store

Please report issues through launchpad:

http://bugs.launchpad.net/glance-store

For more details, please see below.

0.17.0
^^^^^^

Improved configuration options for glance_store. Please refer to the
"other" section for more information.

Some deprecated exceptions have been removed. See upgrade section for
more details.


Upgrade Notes
*************

* The following list of exceptions has been deprecated since the 0.10.0
  release -- "Conflict", "ForbiddenPublicImage",
  "ProtectedImageDelete", "BadDriverConfiguration", "InvalidRedirect",
  "WorkerCreationFailure", "SchemaLoadError", "InvalidObject",
  "UnsupportedHeaderFeature", "ImageDataNotFound",
  "InvalidParameterValue", "InvalidImageStatusTransition". This
  release removes these exceptions, so any remaining consumption of
  them must be removed.


Other Notes
***********

* The glance_store configuration options have been improved with
  detailed help texts, defaults for sample configuration files,
  explicit choices of values for operators to choose from, and a
  strict range defined with "min" and "max" boundaries. It is to be
  noted that the configuration options that take integer values now
  have a strict range defined with "min" and/or "max" boundaries where
  appropriate. This renders the configuration options incapable of
  taking certain values that may have been accepted before but were
  actually invalid. For example, configuration options specifying
  counts, where a negative value was undefined, would have still
  accepted the supplied negative value. Such options will no longer
  accept negative values. However, options where a negative value was
  previously defined (for example, -1 to mean unlimited) will remain
  unaffected by this change. Values that do not comply with the
  appropriate restrictions will prevent the service from starting. The
  logs will contain a message indicating the problematic configuration
  option and the reason why the supplied value has been rejected.

Changes in glance_store 0.16.0..0.17.0
--

5677b6f Add release notes for 0.17.0
14b9e1f Release note for glance_store configuration opts.
044da5c Improving help text for Swift store opts.
cbe1e5d Improving help text for Swift store util opts.
007aa98 Improve help text of cinder driver opts
0ec9da6 Fix help text of swift_store_config_file
9b24554 Improving help text for backend store opts.
5ff6709 Remove "Services which consume this" section
fe58b4b Improve the help text for Swift driver opts
ec5ec71 Updated from global requirements
3050338 Improving help text for Sheepdog opts
521b6e7 Use constraints for all tox environments
135bede Improve help text of http driver opts
e9b3efd Improve help text of filesystem store opts
b17d093 Improve help text of rbd driver opts
a326259 Improving help text for Glance store Swift  opts.
a7ba290 Remove deprecated exceptions
2a60cdf Improve the help text for vmware datastore driver opts
51f86db Updated from global requirements


Diffstat (except docs and test files)
-

glance_store/_drivers/cinder.py| 240 +++--
glance_store/_drivers/filesystem.py| 116 ++--
glance_store/_drivers/http.py  |  67 -
glance_store/_drivers/rbd.py   | 105 ++--
glance_store/_drivers/sheepdog.py  |  78 +-
glance_store/_drivers/swift/store.py   | 298 -
glance_store/_drivers/swift/utils.py   |  67 -
glance_store/_drivers/vmware_datastore.py  | 198 +++---
glance_store/backend.py| 108 ++--
glance_store/exceptions.py |  78 --
...ved-configuration-options-3635b56aba3072c9.yaml |  29 ++
.../notes/releasenote-0.17.0-efee3f557ea2096a.yaml |  14 +
requirements.txt   |   4 +-
tox.ini|   9 -
14 files changed, 1118 insertions(+), 293 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e4f1b3f..9482d6f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4 +4 @@
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
@@ -17 +17 @@ jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
+python-keystoneclient!=2.1.0,>=2.0.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [new][oslo] stevedore 1.17.1 release (newton)

2016-08-18 Thread no-reply
We are overjoyed to announce the release of:

stevedore 1.17.1: Manage dynamic plugins for Python applications

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/stevedore

With package available at:

https://pypi.python.org/pypi/stevedore

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

For more details, please see below.

Changes in stevedore 1.17.0..1.17.1
---

62398ed do not emit warnings for missing hooks


Diffstat (except docs and test files)
-

stevedore/hook.py  | 17 -
stevedore/named.py |  9 +++--
2 files changed, 23 insertions(+), 3 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-18 Thread Nick Papadonis
I followed the instructions from IBM and Swift does appear to work
correctly under mod_wsgi.  I have yet to do extensive multi-node testing.
I'm a bit surprised that those ~50 SLOC code snippets to start the service
have yet to be integrated into the Swift repo.

In my single-node environment, Glance, Cinder and Heat under mod_wsgi also
appear to work correctly, such that a VM can boot on a filesystem and Heat
orchestration works.
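
For readers wondering what such a startup snippet looks like, here is a
minimal sketch of a mod_wsgi entry point for a paste-deploy-based service;
the path and service name are illustrative assumptions, not the snippets
referred to above:

```python
# Minimal mod_wsgi entry point sketch for a paste-deploy-based service
# (illustrative paths/names). Apache loads this file via WSGIScriptAlias
# and calls the module-level 'application' object.
from paste.deploy import loadapp

CONF_FILE = '/etc/glance/glance-api-paste.ini'  # illustrative path
application = loadapp('config:%s' % CONF_FILE)
```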

Nick

On Thu, Aug 18, 2016 at 12:39 AM, John Dickinson  wrote:

> I don't know of people running Swift in production with mod_wsgi. The
> original doc you referenced and the related work done upstream was done
> several years ago, IIRC by IBM. Personally, I've never deployed Swift that
> way.
>
> However, I too am really interested in the general answers to your
> question, especially from the ops mailing list. If there's something broken
> in docs or code that is preventing people from solving their problems with
> Swift, I want to hear about it and fix it.
>
> --John
>
>
>
>
> On 17 Aug 2016, at 13:22, Nick Papadonis wrote:
>
> > Hi Folks,
> >
> > I was hacking in this area on Mitaka and enabled Glance, Cinder, Heat,
> > Swift, Ironic, Horizon and Keystone under Apache mod_wsgi instead of the
> > Eventlet server. Cinder, Keystone, Heat and Ironic provide Python source
> > in Github to easily enable this.  It appears that Glance and Swift (despite
> > the existence of
> > https://github.com/openstack/swift/blob/2bf5eb775fe3ad6d3a2afddfc7572318e85d10be/doc/source/apache_deployment_guide.rst)
> > provide no such Python source to call from the Apache conf file.
> >
> > That said, is anyone using Glance, Swift, Neutron or Nova (marked
> > experimental) in production environments with mod_wsgi?  I had to put
> > together code to launch a subset of these which does not appear integrated
> > in Github.  Appreciate your insight.
> >
> > Thanks,
> > Nick
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] versioning the api-ref?

2016-08-18 Thread Bashmakov, Alexander
Concrete example of an api-ref difference between Mitaka and Newton:
https://review.openstack.org/#/c/356693/1/api-ref/source/v2/images-parameters.yaml

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, August 18, 2016 10:20 AM
To: Nikhil Komawar ; OpenStack Development Mailing List 
(not for usage questions) 
Subject: Re: [openstack-dev] [all] versioning the api-ref?

On 08/18/2016 11:57 AM, Nikhil Komawar wrote:
> I guess the intent was to indicate the need to signal the micro (or, in
> Glance's case, minor) version bump when required.
> 
> The API isn't drastically different; there are new and old elements, as 
> shown in the Nova api-ref linked.

Right, so the point is that it should all be describable in a single document. 
It's like the fact that when you go to python API docs you get things like - 
https://docs.python.org/2/library/wsgiref.html

"New in version 2.5."

Perhaps if there is a concrete example of the expected differences between what 
would be in the mitaka tree vs. the newton tree, we can figure out an 
appropriate way to express that in api-ref.

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-6, 22-26 Aug

2016-08-18 Thread Doug Hellmann
Focus
-

We're approaching the feature freeze deadline, so teams should be wrapping
up feature development ahead of the final milestone, 29 Aug -
2 Sept.

General Notes
-

The upcoming third milestone marks the start of several freezes in
our release cycle to let us shift our focus to bug fixes and generally
hardening the release.

The general feature freeze allows teams to wrap up Newton and start
thinking about Ocata planning.

We freeze releases of all libraries and changes to requirements
between the third milestone and the final release to give downstream
packagers time to vet the libraries. Only emergency bug fix updates
are allowed during that period, not releases for FFEs.

We start a soft string freeze at the milestone to give translators
time to catch up with the work that has already been done this
cycle. A hard string freeze will follow two weeks later at R-3.

Release Actions
---

The last day for releases for non-client libraries will be 25 Aug.
File your release request in time to have the release done on the
25th.

Review the members of your $project-release group in gerrit, based
on the instructions Thierry sent on 15 Aug. You may not be able to
merge patches during the release candidate period if the group
membership is not set correctly.

Important Dates
---

Library release freeze date Aug 25.

Newton 3 milestone, Sept 1.

Newton release schedule: http://releases.openstack.org/newton/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2016-08-18 Thread michael mccune

Greetings OpenStack community,

Today's open discussion centered around the concept of capabilities and 
the need to ensure that, however we end up doing them in APIs, there is 
consistency amongst the existing OpenStack services. In fact, it would 
be great if there were consistency between how OpenStack chooses to do 
it and how it is done on the rest of the web. More research is required.


Brian Rosmaita visited to discuss similar issues around discovery of 
detailed functionality in the glance import API[6]. It's unclear if 
these details are the same as capabilities. What's clear is that we'll 
need to be tracking this carefully to avoid creating confusion.


The meeting logs are in the usual place. [5]

# New guidelines

* A guideline for links
  https://review.openstack.org/354266

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.


* Add the beginning of a set of guidelines for URIs
  https://review.openstack.org/#/c/322194/
* Clarify backslash usage for 'in' operator
  https://review.openstack.org/#/c/353396/

# Guidelines currently under review

These are guidelines that the working group are debating and working on 
for consistency and language. We encourage any interested parties to 
join in the conversation.


* Clear the case if the version string isn't parsable
  https://review.openstack.org/#/c/346846/

# API Impact reviews currently open

Reviews marked as APIImpact [1] are meant to help inform the working 
group about changes which would benefit from wider inspection by group 
members and liaisons. While the working group will attempt to address 
these reviews whenever possible, it is highly recommended that 
interested parties attend the API-WG meetings [2] to promote 
communication surrounding their reviews.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [3].


Thanks for reading and see you next week!

[1] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z

[2] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
[3] http://specs.openstack.org/openstack/api-wg/
[4]: https://bugs.launchpad.net/openstack-api-wg
[5]: http://eavesdrop.openstack.org/meetings/api_wg/
[6]: 
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html#discovery


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] versioning the api-ref?

2016-08-18 Thread Sean Dague
On 08/18/2016 11:57 AM, Nikhil Komawar wrote:
> I guess the intent was to indicate the need for a micro (or, in the case
> of Glance, minor) version bump when required.
> 
> The API isn't drastically different, there are new and old elements as
> shown in the Nova api ref linked.

Right, so the point is that it should all be describable in a single
document. It's like the fact that when you go to python API docs you get
things like - https://docs.python.org/2/library/wsgiref.html

"New in version 2.5."

Perhaps if there is a concrete example of the expected differences
between what would be in the mitaka tree vs. the newton tree, we can figure
out an appropriate way to express that in api-ref.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matthew Treinish
On Thu, Aug 18, 2016 at 11:33:59AM -0500, Matthew Thode wrote:
> On 08/18/2016 10:00 AM, Matt Riedemann wrote:
> > It's that time of year again to talk about killing this job, at least
> > from the integrated gate (move it to experimental for people that care
> > about postgresql, or make it gating on a smaller subset of projects like
> > oslo.db).
> > 
> > The postgresql job used to have three interesting things about it:
> > 
> > 1. It ran keystone with eventlet (which is no longer a thing).
> > 2. It runs the n-api-meta service rather than using config drive.
> > 3. It uses postgresql for the database.
> > 
> > So #1 is gone, and for #3, according to the April 2016 user survey (page
> > 40) [1], 4% of reporting deployments are using it in production.
> > 
> > I don't think we're running n-api-meta in any other integrated gate
> > jobs, but I'm pretty sure there is at least one neutron job out there
> > that's running with it that way. We could also consider making the
> > nova-net dsvm full gate job run n-api-meta, or vice-versa with the
> > neutron dsvm full gate job.
> > 
> > We also have to consider that with HP public cloud being gone as a node
> > provider and we've got fewer test nodes to run with, we have to make
> > tough decisions about which jobs we're going to run in the integrated gate.
> > 
> > I'm bringing this up again because Nova has a few more jobs it would
> > like to make voting on its repo (neutron LB and live migration, at
> > least in the check queue) but there are concerns about adding yet more
> > jobs that each change has to get through before it's merged, which means
> > if anything goes wrong in any of those we can have a 24 hour turnaround
> > on getting an approved change back through the gate.
> > 
> > [1]
> > https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> > 
> 
> 
> I don't know about nova, but at least in keystone when I was testing
> upgrades I found an error that had to be fixed before the release of Mitaka.
> Guess I'm part of the 4% :(

That's not what we're talking about here. Your issue most likely stemmed from
keystone's lack of tests that do DB migrations with real data. The proposal here
is not talking about stopping all testing on postgres, just removing the
postgres dsvm tempest jobs from the integrated gate. Those jobs have very
limited additional value for the reasons Matt outlined. They also clearly did
not catch your upgrade issue, and most (if not all) of the other postgres
issues are caught with more targeted testing of the db layer done in the
project repos.
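
For illustration, the kind of targeted DB-layer test meant here might look
roughly like the following minimal sketch. The connection URL, credentials,
and the project-local migration module are assumptions for illustration, not
code from any project:

```
# Illustrative sketch only: walk a project's migrations against a real
# PostgreSQL database, catching issues an integrated dsvm job would miss.
import sqlalchemy

from myproject.db import migration  # hypothetical project-local module


def test_walk_migrations_postgres():
    engine = sqlalchemy.create_engine(
        'postgresql://openstack_citest:openstack_citest@localhost/citest')
    # Apply every migration in order against real data and schema state.
    migration.upgrade(engine, version='head')
```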

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork core

2016-08-18 Thread Mohammad Banikazemi

+1
Vikas has been working on various aspects of Kuryr with dedication for some
time. So yes it's about time :)

Best,

Mohammad




From:   Antoni Segura Puimedon 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   08/16/2016 05:55 PM
Subject:[openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork
core



Hi Kuryrs,

I would like to propose Vikas Choudhary for the core team for the
kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
at a very good rhythm in the past cycle and I believe he will help a lot to
move kuryr forward.

I would also like to propose him for the core team for the kuryr-kubernetes
subproject since he has experience in the day to day work with kubernetes
and can help with the review and refactoring of the prototype upstreaming.

Regards,

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matthew Thode
On 08/18/2016 10:00 AM, Matt Riedemann wrote:
> It's that time of year again to talk about killing this job, at least
> from the integrated gate (move it to experimental for people that care
> about postgresql, or make it gating on a smaller subset of projects like
> oslo.db).
> 
> The postgresql job used to have three interesting things about it:
> 
> 1. It ran keystone with eventlet (which is no longer a thing).
> 2. It runs the n-api-meta service rather than using config drive.
> 3. It uses postgresql for the database.
> 
> So #1 is gone, and for #3, according to the April 2016 user survey (page
> 40) [1], 4% of reporting deployments are using it in production.
> 
> I don't think we're running n-api-meta in any other integrated gate
> jobs, but I'm pretty sure there is at least one neutron job out there
> that's running with it that way. We could also consider making the
> nova-net dsvm full gate job run n-api-meta, or vice-versa with the
> neutron dsvm full gate job.
> 
> We also have to consider that with HP public cloud being gone as a node
> provider and we've got fewer test nodes to run with, we have to make
> tough decisions about which jobs we're going to run in the integrated gate.
> 
> I'm bringing this up again because Nova has a few more jobs it would
> like to make voting on its repo (neutron LB and live migration, at
> least in the check queue) but there are concerns about adding yet more
> jobs that each change has to get through before it's merged, which means
> if anything goes wrong in any of those we can have a 24 hour turnaround
> on getting an approved change back through the gate.
> 
> [1]
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> 


I don't know about nova, but at least in keystone when I was testing
upgrades I found an error that had to be fixed before the release of Mitaka.
Guess I'm part of the 4% :(

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Adding RHEL 7 STIG to openstack-ansible-security

2016-08-18 Thread Major Hayden

On 08/18/2016 11:26 AM, Hooper, Mark (Nokia - US) wrote:
> This makes perfect sense and will fit well into the work my team is already 
> doing on RHEL7 STIG hardening and will allow us to easily upstream our work.

Thanks for the input, Mark! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][security] Adding RHEL 7 STIG to openstack-ansible-security

2016-08-18 Thread Hooper, Mark (Nokia - US)
Major,
This makes perfect sense and will fit well into the work my team is already 
doing on RHEL7 STIG hardening and will allow us to easily upstream our work.


Regards
MH

-Original Message-
From: Major Hayden [mailto:ma...@mhtx.net] 
Sent: Friday, August 12, 2016 10:39 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-ansible][security] Adding RHEL 7 STIG 
to openstack-ansible-security


On 08/04/2016 12:45 PM, Major Hayden wrote:
> The existing openstack-ansible-security role uses security configurations 
> from the Security Technical Implementation Guide (STIG) and the new Red Hat 
> Enterprise Linux 7 STIG is due out soon.  The role is currently based on the 
> RHEL 6 STIG, and although this works quite well for Ubuntu 14.04, the RHEL 7 
> STIG has plenty of improvements that work better with Ubuntu 16.04, CentOS 7 
> and RHEL 7.

I've gone ahead and proposed a spec for these changes here:

  https://review.openstack.org/#/c/354389/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] versioning the api-ref?

2016-08-18 Thread Nikhil Komawar
I guess the intent was to indicate the need for a micro (or, in the case
of Glance, minor) version bump when required.

The API isn't drastically different, there are new and old elements as
shown in the Nova api ref linked.

On 8/12/16 5:49 AM, Sean Dague wrote:
> On 08/11/2016 06:02 PM, Brian Rosmaita wrote:
>> I have a question about the api-ref. Right now, for example, the new
>> images v1/v2 api-refs are accurate for Mitaka.  But DocImpact bugs are
>> being generated as we speak for changes in master that won't be
>> available to consumers until Newton is released (unless they build from
>> source). If those bug fixes get merged, then the api-ref will no longer
>> be accurate for Mitaka API consumers (since it's published upon update).
> I'm confused about this statement.
>
> Are you saying that the Glance v2 API in Mitaka and Newton are different
> in some user visible ways? But both are called the v2 API? How does an
> end user know which to use?
>
> The assumption with the api-ref work is that the API document should be
> timeless (branchless), and hence why building from master is always
> appropriate. That information works for all time.
>
> We do support microversion markup in the document, you can see some of
> that in action here in the Nova API Ref -
> http://developer.openstack.org/api-ref/compute/?expanded=list-servers-detail
>
>
>   -Sean
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Let's clean up API reference

2016-08-18 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi, 

https://review.openstack.org/#/c/327112/ targets the parameter verification for 
servers-actions.inc.

Br, 
Gerg0

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Thursday, August 18, 2016 2:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Let's clean up API reference

On 08/18/2016 08:16 AM, Akihiro Motoki wrote:
> Hi Neutron team,
> 
> As you may know, the OpenStack API references have been moved into
> individual project repositories, but it contains a lot of wrong
> information now :-(
> 
> Let's clean up the API reference.
> It's now time to start the cleanup and finish it by Newton-1.
> 
> I prepared the etherpad page to share the guideline of the cleanup and
> useful information.
> This page shares my experience of 'router' resource cleanup.
> 
> https://etherpad.openstack.org/p/neutron-api-ref-sprint
> 
> I hope everyone works on at least one resource :)
> The etherpad page has the progress tracking section (bottom of the page)
> Make sure to add your name when you start to work.
> 
> Feel free to ask me if you have a question.
> 
> Thanks,
> Akihiro

Fwiw, I built a burndown dashboard for this with Nova -
http://burndown.dague.org/ (source -
https://github.com/sdague/rst-burndown). It should be reasonably
adaptable to other projects if you have a host to run it on.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage][vitrage-dashboard] cropped entity information

2016-08-18 Thread Yujun Zhang
Reported at https://bugs.launchpad.net/vitrage-dashboard/+bug/1614585
--
Yujun

On Thu, Aug 18, 2016 at 9:44 PM Afek, Ifat (Nokia - IL) 
wrote:

> Hi Yujun,
>
> This is something that we need to handle. You can open a bug about it.
>
> Best Regards,
> Ifat.
>
> From: Yujun Zhang
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Wednesday, 17 August 2016 at 07:08
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: [openstack-dev] [vitrage][vitrage-dashboard] cropped entity
> information
>
> In the topology view, the content of entity information is sometimes
> cropped when there are too many rows.
>
> Is there a way to peek the cropped content?
>
> [image: xbm5sl.jpg]
> --
> Yujun
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]concerns on driver deprecation policy

2016-08-18 Thread Sean McGinnis
On Thu, Aug 18, 2016 at 03:44:10AM +, Husheng (TommyLike, R IT 
Equipment Dept) wrote:
> Hi all,
> Sorry for my absence from this week's IRC meeting; I'd like to put forward 
> this topic on the driver deprecation policy again. I support the driver 
> support tag policy completely, it's a reasonable policy for both sides, and 
> below are my two concerns:
> 
> 1. With the release of the driver deprecation policy, we may leave a negative 
> impression on our customers/operators, because we just send them a warning 
> message while still shipping the unstable or unusable driver code. If I were 
> the customer, I would probably not change my vendor over a few warning 
> messages, so this is what may happen to the unlucky ones: they insist on 
> their choice by setting enable_unsupported_driver = True and get stuck. This 
> action leaves them to take the risk (and the result) on their own; maybe we 
> can set up a guide of this situation rather than just announcing the 
> possible result.

I'm not actually clear what you mean here by setting up "a guide of
this situation". Can you clarify that?

> 
> 2. This is a big change for vendor teams to notice and keep up with. What 
> are the detailed rules of this policy, and who or what will decide a 
> vendor's support attribute? What is the deadline for vendors before we 
> create the decisive patch? And when a vendor meets the standards again, 
> what is the procedure for the customer to make their system work again? 
> Do we need a document on all of this in detail?

Actually, this isn't a big change for vendor teams at all. It is a
smaller change for them compared to the current policy of removing
the driver completely.
> 
> Thanks
> TommyLike.Hu
> 

For those that are just seeing this without the background context of
the other ML thread and the IRC discussions in Cinder, here's a little
summary.

It was raised in this thread [1] that Cinder's policy of removing
drivers that lacked vendor involvement and CI was against our current
tag of follows-standard-deprecation.

While a few options were discussed (I won't rehash them, but reading
through that thread should cover most of it), the current idea under
consideration is here: [2]

Rather than removing drivers, we would now mark them as deprecated and
add a flag to the driver that would indicate it is unsupported. On
initialization we would then be able to check for that flag and log very
clearly that it isn't considered a supported driver.

The one additional piece under consideration is whether or not to also
add a config file setting required to allow that driver to actually be
used. So an operator would need to set enable_unsupported_driver=True to
very explicitly acknowledge that they still want to be able to use the
driver.

Marking the driver as deprecated would then allow for some leeway (for
customers to angrily yell at their storage vendor to get their act
together) so that hopefully they would fix their CI issues and make sure
they are maintaining their driver. If they are able to turn things
around we can then remove that deprecation tag and unsupported flag and
everything's back to normal.

If however they don't turn things around and continue to ignore/abandon
their driver, we could then remove the driver for the next release and
not need to continue to keep around something that we have no idea if
still works. It allows us to follow the follows-standard-deprecation tag
intent better and gives users a release to make any necessary changes.

Feel free to comment on the review if you have any thoughts/concerns on
this.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html
[2] https://review.openstack.org/#/c/355608/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Meeting for Aug - 18 is canceled

2016-08-18 Thread Andrew Woodward
The agenda [1] for today's meeting has no items, so I'm calling off the
meeting this week.

Please review your action items from last meeting and update the ML if you
haven't done so already:

romcheg to annouunce fuel2 promotion and old fuel CLI removal on ML
akscram to send ML about moving cluster_upgrade extension
gkibardin to follow up on ML about mcollective config issues
xarses will update spec regarding release naming and summarize on ML

[1] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cross-Project] [Cinder][Neutron][Cue]

2016-08-18 Thread Adam Young
These changes are necessary so policy files can include the check 
"is_admin_project:True", which allows us to scope what is meant by "Admin".



Use from_environ to load context

Use to_policy_values for enforcing policy

Use context from_environ to load contexts

Use from_dict to load context params


https://review.openstack.org/340206 

https://review.openstack.org/340205 

https://review.openstack.org/340195 

https://review.openstack.org/340194 


Let's please get them merged.  This will improve RBAC for all of the 
services.
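
For illustration, here is a rough sketch of the pattern these patches enable.
from_environ() and to_policy_values() are the real oslo.context methods; the
rule name 'context_is_admin' and the target are assumptions for the example:

```
# Sketch only: build a context from the WSGI environ and enforce policy
# with the standard credential set, which includes is_admin_project.
from oslo_config import cfg
from oslo_context import context
from oslo_policy import policy


def is_admin(environ, target):
    ctxt = context.RequestContext.from_environ(environ)
    enforcer = policy.Enforcer(cfg.CONF)
    # A policy rule such as "is_admin_project:True" can now match, because
    # to_policy_values() exposes is_admin_project among the credentials.
    return enforcer.enforce('context_is_admin', target,
                            ctxt.to_policy_values())
```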



It looks like there are a handful for Cue as well:

Use oslo.context's from_environ to create context

Use standard to_policy_values for policy enforcement

Explicitly load context attributes in from_dict


https://review.openstack.org/#/c/345694/

https://review.openstack.org/#/c/345695/

https://review.openstack.org/#/c/345693/


Try to go light on the testing requirements in review feedback, as Jamie 
is working to make this happen across a lot of projects.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ThirdParty][CI] DSS Cinder CI <budik...@list.ru> spam

2016-08-18 Thread Sean McGinnis
On Thu, Aug 18, 2016 at 07:57:25AM -0700, Elizabeth K. Joseph wrote:
> On Thu, Aug 18, 2016 at 7:30 AM, Elizabeth K. Joseph
>  wrote:
> > On Thu, Aug 18, 2016 at 7:14 AM, Jesse Pretorius
> >  wrote:
> >> Hi everyone,
> >>
> >> Today we started getting CI feedback for all our (OpenStack-Ansible) 
> >> patches
> >> from DSS Cinder CI  which always immediately fail. I
> >> expect this may be because they’re in the process of getting set up and are
> >> supposed to be targeting cinder repositories.
> >>
> >> We would appreciate it if the administrators of the account would properly
> >> scope their external CI tests. At the moment it’s feeling quite a bit like
> >> spam and if it continues without being resolved we’ll have to ask for the
> >> account to be disabled.
> >
> > This was also reported by the Murano folks earlier today, see:
> > https://review.openstack.org/#/c/355369/
> >
> > Just give me a nudge if you'd like us to disable the account.
> 
> I've received another comment about this, so I went ahead and disabled it.
> 
> They can request that it be re-enabled when it's resolved.

Thanks, they were failing against all Cinder patches as well.
I'm not even sure who this is, so hopefully this gets their attention
and they realize they should disable posting until they have it running
and stable.

> 
> -- 
> Elizabeth Krumbach Joseph || Lyz || pleia2
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] P and Q naming: all hail Pike and Queens

2016-08-18 Thread Anita Kuno

On 16-08-18 10:59 AM, Monty Taylor wrote:

Hey everybody,

As you all know, voting is only one step in the selection process. We
also have to have vetting done on the choices to identify risk
associated with the selected names.

This time around, the top choice by vote for both P and Q was too
problematic, so we moved down to the next most popular choice on the
list. Luckily, the second choice in each case was not problematic.

So with that, I'd like to officially introduce your release names for
the P and Q releases: Pike and Queens.

Enjoy.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


A fish and a university, could be worse.

Thanks Monty,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Let's drop the postgresql gate job

2016-08-18 Thread Matt Riedemann
It's that time of year again to talk about killing this job, at least 
from the integrated gate (move it to experimental for people that care 
about postgresql, or make it gating on a smaller subset of projects like 
oslo.db).


The postgresql job used to have three interesting things about it:

1. It ran keystone with eventlet (which is no longer a thing).
2. It runs the n-api-meta service rather than using config drive.
3. It uses postgresql for the database.

So #1 is gone, and for #3, according to the April 2016 user survey (page 
40) [1], 4% of reporting deployments are using it in production.


I don't think we're running n-api-meta in any other integrated gate 
jobs, but I'm pretty sure there is at least one neutron job out there 
that's running with it that way. We could also consider making the 
nova-net dsvm full gate job run n-api-meta, or vice-versa with the 
neutron dsvm full gate job.


We also have to consider that with HP public cloud being gone as a node 
provider and we've got fewer test nodes to run with, we have to make 
tough decisions about which jobs we're going to run in the integrated gate.


I'm bringing this up again because Nova has a few more jobs it would 
like to make voting on its repo (neutron LB and live migration, at 
least in the check queue) but there are concerns about adding yet more 
jobs that each change has to get through before it's merged, which means 
if anything goes wrong in any of those we can have a 24 hour turnaround 
on getting an approved change back through the gate.


[1] 
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] P and Q naming: all hail Pike and Queens

2016-08-18 Thread Monty Taylor
Hey everybody,

As you all know, voting is only one step in the selection process. We
also have to have vetting done on the choices to identify risk
associated with the selected names.

This time around, the top choice by vote for both P and Q was too
problematic, so we moved down to the next most popular choice on the
list. Luckily, the second choice in each case was not problematic.

So with that, I'd like to officially introduce your release names for
the P and Q releases: Pike and Queens.

Enjoy.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ThirdParty][CI] DSS Cinder CI <budik...@list.ru> spam

2016-08-18 Thread Elizabeth K. Joseph
On Thu, Aug 18, 2016 at 7:30 AM, Elizabeth K. Joseph
 wrote:
> On Thu, Aug 18, 2016 at 7:14 AM, Jesse Pretorius
>  wrote:
>> Hi everyone,
>>
>> Today we started getting CI feedback for all our (OpenStack-Ansible) patches
>> from DSS Cinder CI  which always immediately fail. I
>> expect this may be because they’re in the process of getting set up and are
>> supposed to be targeting cinder repositories.
>>
>> We would appreciate it if the administrators of the account would properly
>> scope their external CI tests. At the moment it’s feeling quite a bit like
>> spam and if it continues without being resolved we’ll have to ask for the
>> account to be disabled.
>
> This was also reported by the Murano folks earlier today, see:
> https://review.openstack.org/#/c/355369/
>
> Just give me a nudge if you'd like us to disable the account.

I've received another comment about this, so I went ahead and disabled it.

They can request that it be re-enabled when it's resolved.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ThirdParty][CI] DSS Cinder CI <budik...@list.ru> spam

2016-08-18 Thread Elizabeth K. Joseph
On Thu, Aug 18, 2016 at 7:14 AM, Jesse Pretorius
 wrote:
> Hi everyone,
>
> Today we started getting CI feedback for all our (OpenStack-Ansible) patches
> from DSS Cinder CI  which always immediately fail. I
> expect this may be because they’re in the process of getting set up and are
> supposed to be targeting cinder repositories.
>
> We would appreciate it if the administrators of the account would properly
> scope their external CI tests. At the moment it’s feeling quite a bit like
> spam and if it continues without being resolved we’ll have to ask for the
> account to be disabled.

This was also reported by the Murano folks earlier today, see:
https://review.openstack.org/#/c/355369/

Just give me a nudge if you'd like us to disable the account.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] PTL summer fun vacation week - 8/22

2016-08-18 Thread Matt Riedemann
This is just a heads up that I'm out on vacation next week. I'll have my 
laptop with me and will probably check email sporadically, but hopefully 
not all that much.


Dan Smith has graciously agreed to back me up next week so if there are 
questions about anything related to the release please get with him.


My priorities before I leave:

1. Getting a python-novaclient 5.1.0 release done today.

2. Getting the 2.36 support into novaclient:

https://review.openstack.org/#/c/347514/

And release that as 6.0.0.

3. Get the 2.37 patch for novaclient ready:

https://review.openstack.org/#/c/353018/

--

If you need anything from me before I leave, please get with me in IRC 
today or tomorrow.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ThirdParty][CI] DSS Cinder CI <budik...@list.ru> spam

2016-08-18 Thread Jesse Pretorius
Hi everyone,

Today we started getting CI feedback for all our (OpenStack-Ansible) patches 
from DSS Cinder CI  which always immediately fail. I expect 
this may be because they’re in the process of getting set up and are supposed to 
be targeting cinder repositories.

We would appreciate it if the administrators of the account would properly 
scope their external CI tests. At the moment it’s feeling quite a bit like spam 
and if it continues without being resolved we’ll have to ask for the account to 
be disabled.

Thanks,

Jesse
IRC: odyssey4me


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] State of upgrade CLI commands

2016-08-18 Thread Brad P. Crochet
On Thu, Aug 18, 2016 at 4:25 AM, mathieu bultel  wrote:
> On 08/18/2016 09:29 AM, Marios Andreou wrote:
>> On 17/08/16 15:46, Jiří Stránský wrote:
>>> On 16.8.2016 21:08, Brad P. Crochet wrote:
 Hello TripleO-ians,

 I've started to look again at the introduced, but unused/undocumented
 upgrade commands. It seems to me that given the current state of the
 upgrade process (at least from Liberty -> Mitaka), these commands make
 a lot less sense.

 I see one of two directions to take on this. Of course I would love to
 hear other options.

 1) Revert these commands immediately, and forget they ever existed.
 They don't exactly work, and as I said, were never officially
 documented, so I don't think a revert is out of the question.

 or

 2) Do a major overhaul, and rethink the interface entirely. For
 instance, the L->M upgrade introduced a couple of new steps (the AODH
 migration and the Keystone migration). These would have either had to
 have completely new commands added, or have some type of override to
 the existing upgrade command to handle them.

 Personally, I would go for step 1. The 'overcloud deploy' command can
 accomplish all of the upgrade steps that involve Heat. In order for
 the new upgrade commands to work properly, there's a lot that needs to
 be refactored out of the deploy command itself so that it can be
 shared with deploy and upgrade, like passing of passwords and the
 like. I just don't see a need for discrete commands when we have an
 existing command that will do it for us. And with the addition of an
 answer file, it makes it even easier.

 Thoughts?

>>> +1 for approach no. 1. Currently `overcloud deploy` meets the upgrade
>>> needs and it gave us some flexibility to e.g. do migrations like AODH
>>> and Keystone WSGI. I don't think we should have a special command for
>>> upgrades at this point.
>>>
>>> The situation may change as we go towards upgrades of composable
>>> services, and perhaps wrap upgrades in Mistral if/when applicable, but
>>> then the potential upgrade command(s) would probably be different from
>>> the current ones anyway, so +1 for removing them.
>> +1 from me too, especially because this ^^^ (the workflow we currently
>> have and use will change quite drastically I expect)
>>
>> thanks, sorry I didn't spot this earlier,
>> marios
>
> +1 from me too, even if I think it's not ideal for the end-user
> experience, and the CLI would be a better way to address this point.
>>> Jirka
>>>

I have proposed the following reverts:

python-tripleoclient:

https://review.openstack.org/#/c/357192/
https://review.openstack.org/#/c/357194/
https://review.openstack.org/#/c/357195/

tripleo-common:

https://review.openstack.org/#/c/357219/
https://review.openstack.org/#/c/357220/
https://review.openstack.org/#/c/357221/

-- 
Brad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] valid source and target for add_causal_relationship

2016-08-18 Thread Yujun Zhang
Fix proposed in https://review.openstack.org/#/c/356974/

To enable validation of definition content, I have also refactored the
validator, as in https://review.openstack.org/#/c/356947/

--
Yujun

On Wed, Aug 17, 2016 at 10:29 PM Har-Tal, Liat (Nokia - IL) <
liat.har-...@nokia.com> wrote:

> Yes, it should be limited to ‘ALARM’
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Wednesday, August 17, 2016 9:00 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [vitrage] valid source and target for
> add_causal_relationship
>
>
>
> The issue comes from my carelessness in creating a causal relationship
> between two relationships instead of entities [1].
>
>
>
> But it seems not to be detected by the template validator.
>
>
>
> I wonder what could be a valid `source` and `target` for causal
> relationship? Should it be limited to `ALARM`?
>
>
>
> [1]
> https://github.com/openzero-zte/vitrage-demo/commit/ce82f6f03e1b7168499233de431323e3cba43f9d#diff-aef3ec3ecbcccbad905bf3c57bb47e95R49
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage][vitrage-dashboard] cropped entity information

2016-08-18 Thread Afek, Ifat (Nokia - IL)
Hi Yujun,

This is something that we need to handle. You can open a bug about it.

Best Regards,
Ifat.

From: Yujun Zhang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, 17 August 2016 at 07:08
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [vitrage][vitrage-dashboard] cropped entity information

In the topology view, the content of entity information is sometimes cropped 
when there are too many rows.

Is there a way to peek the cropped content?

[xbm5sl.jpg]
--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release][ptl] tentative ocata schedule up for review

2016-08-18 Thread Doug Hellmann
The release team has prepared a proposed schedule for the Ocata cycle.
Please look over https://review.openstack.org/357214 and let us know if
you spot any issues.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] Need help to understand the exception of NeutronSyntheticFieldMultipleForeignKeys

2016-08-18 Thread Korzeniewski, Artur
Hi,
So in the first place, you do not need multiple foreign keys in the Flavor db 
use case. 
You can declare only flavor_id and service_profile_id in the 
FlavorServiceProfileBinding object, since the relationships ('flavor' and 
'service_profile') are not used anywhere, only the ids.

Secondly, nobody right now is working on improving the situation. I have two 
ideas how to fix it:
The example:
{
    'network_id': 'id',
    'agent_id': 'id'
}

1) Add the name of the object to the value (the 'id' part), e.g.:
{
    'network_id': 'Network.id',
    'agent_id': 'Agent.id'
}
It would be kind of complicated to make use of this in [1], where the foreign 
keys are accessed.

2) Use a deeper structure, like:
{
    'Network': {'network_id': 'id'},
    'Agent': {'agent_id': 'id'}
}
It looks better, because you just add the proper foreign key under the related 
object name. Then you can proceed without any issues in [1]: just grab the 
related object's value as a dictionary with a single foreign key. You can check 
[2] to see how it would look for the second option.
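
To illustrate (a toy sketch, not Neutron code), option 2 lets the loader pick
the right foreign keys purely by the related object's name:

```
# Toy illustration of option 2: one single-foreign-key mapping per object.
foreign_keys = {
    'Network': {'network_id': 'id'},
    'Agent': {'agent_id': 'id'},
}


def foreign_key_filter(related_obj_name, db_obj):
    # Select the mapping for the object being loaded, then map each local
    # column to the value of the field it references.
    keys = foreign_keys[related_obj_name]
    return {local: getattr(db_obj, remote) for local, remote in keys.items()}
```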

Regards,
Artur

[1] 
https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L433
[2] https://review.openstack.org/#/c/357207
-Original Message-
From: taget [mailto:qiaoliy...@gmail.com] 
Sent: Thursday, August 18, 2016 5:08 AM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Qiao, Liyong ; Bhatia, Manjeet S 
; Korzeniewski, Artur 
Subject: [neutron][ovo] Need help to understand the exception of 
NeutronSyntheticFieldMultipleForeignKeys

Hi neutron OVO hackers,

Recently I have been working on the neutron OVO blueprint, and found some 
blocking (slow progress) issues when trying to add a new object to Neutron.

When I try to add the Flavor-related objects in [1], I need to add two foreign 
keys to FlavorServiceProfileBinding:

flavor_id: Flavor.id
service_profile_id: ServiceProfile.id

For the ServiceProfile and Flavor objects, FlavorServiceProfileBinding is a 
synthetic field; we refer to FlavorServiceProfileBinding in [2], but in the 
current object base implementation, we only allow a synthetic field to have 
one foreign key [3].

Can anyone help to clarify this, or give some guidance on how to overcome the 
issue in [1]? Is there anyone already working on fixing it?


P.S. There are other use cases for multiple foreign keys [4]
[1]https://review.openstack.org/#/c/306685/6/neutron/db/flavor/models.py@45
[2]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/db/flavors_db.py#L86
[3]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L429-L430
[4]https://review.openstack.org/#/c/307964/20/neutron/objects/router.py@33

-- 
Best Regards,
Eli Qiao (乔立勇), Intel OTC.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Telemetry] Barcelona summit space needs

2016-08-18 Thread Julien Danjou
Hey team,

Like other teams, we need to decide how many fishbowl, workroom, etc.
sessions we need.

Last time in Austin we had:
- 2 fishbowls
- 7 workroom
- Half a day of a contributors meetup (which IIRC got more or less
  cancelled?)

Many of the sessions were empty and short, so I'd suggest limiting
ourselves to something like:
- 2 fishbowls
- 3 workrooms
- half a day of contributors meetup

WDYT?

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging-deb] - status of packaging directly within the OpenStack infra ?

2016-08-18 Thread Saverio Proto
Hello,

I just subscribed this list, usually I am on the operators list.

I have been reading the archives of this list:
http://lists.openstack.org/pipermail/openstack-dev/2016-June/097947.html

I am more than happy to hear that packages will be built in the
Openstack infra. Are we going to be able to have packages built
automatically for every gerrit review?

As far as I understand, every operator has their own procedure to build
packages for Debian/Ubuntu when an emergency patch is needed in a
production system.
I never managed to find somewhat official documentation on how to
build packages.

Here at SWITCH for example we use this:
https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm/tree/xenial
This procedure used to work for our Kilo packages, but now it looks like
something broke upstream.

Next week in NYC there will be the Openstack Ops Midcycle
https://etherpad.openstack.org/p/NYC-ops-meetup

There is a session about Ubuntu packages, and it looks like Corey Bryant
will be there.

Is anyone coming to give an update about the packaging directly within the
OpenStack infra?

Thank you

Saverio

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Let's clean up API reference

2016-08-18 Thread Sean Dague
On 08/18/2016 08:16 AM, Akihiro Motoki wrote:
> Hi Neutron team,
> 
> As you may know, the OpenStack API references have been moved into
> individual project repositories, but it contains a lot of wrong
> information now :-(
> 
> Let's clean up the API reference.
> It's now time to start the cleanup and finish it by Newton-1.
> 
> I prepared the etherpad page to share the guideline of the cleanup and
> useful information.
> This page shares my experience of 'router' resource cleanup.
> 
> https://etherpad.openstack.org/p/neutron-api-ref-sprint
> 
> I hope everyone works on at least one resource :)
> The etherpad page has the progress tracking section (bottom of the page)
> Make sure to add your name when you start to work.
> 
> Feel free to ask me if you have a question.
> 
> Thanks,
> Akihiro

Fwiw, I built a burndown dashboard for this with Nova -
http://burndown.dague.org/ (source -
https://github.com/sdague/rst-burndown). It should be reasonably
adaptable to other projects if you have a host to run it on.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Let's clean up API reference

2016-08-18 Thread Akihiro Motoki
Hi Neutron team,

As you may know, the OpenStack API references have been moved into
individual project repositories, but it contains a lot of wrong
information now :-(

Let's clean up the API reference.
It's now time to start the cleanup and finish it by Newton-1.

I prepared the etherpad page to share the guideline of the cleanup and
useful information.
This page shares my experience of 'router' resource cleanup.

https://etherpad.openstack.org/p/neutron-api-ref-sprint

I hope everyone works on at least one resource :)
The etherpad page has the progress tracking section (bottom of the page)
Make sure to add your name when you start to work.

Feel free to ask me if you have a question.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [neutron] [api] [doc] API reference for neutron stadium projects (re: API status report)

2016-08-18 Thread Anne Gentle
On Thu, Aug 18, 2016 at 3:33 AM, Akihiro Motoki  wrote:

> 2016-08-13 7:36 GMT+09:00 Anne Gentle :
> >
> >
> > On Fri, Aug 12, 2016 at 3:29 AM, Akihiro Motoki 
> wrote:
> >>
> >> this mail focuses on neutron-specific topics. I dropped cinder and
> ironic
> >> tags.
> >>
> >> 2016-08-11 23:52 GMT+09:00 Anne Gentle :
> >> >
> >> >
> >> > On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle
> >> > 
> >> > wrote:
> >> >>
> >> >> Hi all,
> >> >> I wanted to report on status and answer any questions you all have
> >> >> about
> >> >> the API reference and guide publishing process.
> >> >>
> >> >> The expectation is that we provide all OpenStack API information on
> >> >> developer.openstack.org. In order to meet that goal, it's simplest
> for
> >> >> now
> >> >> to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
> >> >> extension tooling so that users see available OpenStack APIs in a
> >> >> sidebar
> >> >> navigation drop-down list.
> >> >>
> >> >> --Migration--
> >> >> The current status for migration is that all WADL content is migrated
> >> >> except for trove. There is a patch in progress and I'm in contact
> with
> >> >> the
> >> >> team to assist in any way. https://review.openstack.org/#/c/316381/
> >> >>
> >> >> --Theme, extension, release requirements--
> >> >> The current status for the theme, navigation, and Sphinx extension
> >> >> tooling
> >> >> is contained in the latest post from Graham proposing a solution for
> >> >> the
> >> >> release number switchover and offers to help teams as needed:
> >> >>
> >> >> http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/101112.html I
> >> >> hope to meet the requirements deadline to get those changes landed.
> >> >> Requirements freeze is Aug 29.
> >> >>
> >> >> --Project coverage--
> >> >> The current status for project coverage is that these projects are
> now
> >> >> using the RST+YAML in-tree workflow and tools and publishing to
> >> >> http://developer.openstack.org/api-ref/ so they will be
> >> >> included in the upcoming API navigation sidebar intended to span all
> >> >> OpenStack APIs:
> >> >>
> >> >> designate http://developer.openstack.org/api-ref/dns/
> >> >> glance http://developer.openstack.org/api-ref/image/
> >> >> heat http://developer.openstack.org/api-ref/orchestration/
> >> >> ironic http://developer.openstack.org/api-ref/baremetal/
> >> >> keystone http://developer.openstack.org/api-ref/identity/
> >> >> manila http://developer.openstack.org/api-ref/shared-file-systems/
> >> >> neutron-lib http://developer.openstack.org/api-ref/networking/
> >> >> nova http://developer.openstack.org/api-ref/compute/
> >> >> sahara http://developer.openstack.org/api-ref/data-processing/
> >> >> senlin http://developer.openstack.org/api-ref/clustering/
> >> >> swift http://developer.openstack.org/api-ref/object-storage/
> >> >> zaqar http://developer.openstack.org/api-ref/messaging/
> >> >>
> >> >> These projects are using the in-tree workflow and common tools, but
> do
> >> >> not
> >> >> have a publish job in project-config in the
> jenkins/jobs/projects.yaml
> >> >> file.
> >> >>
> >> >> ceilometer
> >> >
> >> >
> >> > Sorry, in reviewing further today I found another project that does
> not
> >> > have
> >> > a publish job but has in-tree source files:
> >> >
> >> > cinder
> >> >
> >> > Team cinder: can you let me know where you are in your publishing
> >> > comfort
> >> > level? Please add an api-ref-jobs: line with a target of block-storage
> >> > to
> >> > jenkins/jobs/projects.yaml in the project-config repo to ensure
> >> > publishing
> >> > is correct.
> >> >
> >> > Another issue is the name of the target directory for the final URL.
> >> > Team
> >> > ironic can I change your api-ref-jobs: line to bare-metal instead of
> >> > baremetal? It'll be better for search engines and for alignment with
> the
> >> > other projects URLs: https://review.openstack.org/354135
> >> >
> >> > I've also uncovered a problem where a neutron project's API does not
> >> > have an
> >> > official service name, and am working on a solution but need help from
> >> > the
> >> > neutron team: https://review.openstack.org/#/c/351407
> >>
> >> I followed the discussion in https://review.openstack.org/#/c/351407
> >> and my understanding of the conclusion is to add API reference source
> >> of neutron stadium projects
> >> to neutron-lib and publish them under
> >> http://developer.openstack.org/api-ref/networking/ .
> >> It sounds reasonable to me.
> >>
> >> We can have a dedicated page for each stadium project, like
> >> networking-sfc, under api-ref/networking/service-function-chaining.
> >> Right now all APIs are placed under the v2/ directory, but that is not
> >> good from either a user or a maintenance perspective.
> >>
> >>
> >> So, the next thing we need to clarify is what names and directory
> >> structure are 

[openstack-dev] [puppet] Barcelona Design Summit space needs

2016-08-18 Thread Emilien Macchi
Team,

Thierry sent an email to all PTLs about space needs for next Summit.

Here's what we can have:

* Fishbowl sessions (from Wednesday 4pm to Friday noon)
Our traditional largish rooms organized in fishbowl style, with
advertised session content on the summit schedule for increased external
participation. Ideal for when wider feedback is essential.

* Workroom sessions (from Wednesday 4pm to Friday noon)
Smaller rooms organized in boardroom style, with topic buried in the
session description, in an effort to limit attendance and not overcrowd
the room. Ideal to get work done and prioritize work in small teams.

* Contributors meetup (Friday afternoon)
Half-day session on Friday afternoon to get into the Ocata action while
decisions and plans are still hot, or to finish discussions started
during the week, whatever works for you.

Note:
- Ops summit on Tuesday morning until 4pm
- Cross-project workshops from Tuesday 4pm to Wednesday 4pm

As a reminder, here's what we asked for Austin:
Fishbowl slots (Wed-Thu): 2
Workroom slots (Tue-Thu): 3
Contributors meetup (Fri): 1

Those who were there can also remember that we didn't need all those rooms.
I suggest this time we ask for 2 Workroom slots (max 3) and that's it.
I'm not sure we actually need Fishbowl and Contributor meetup slots,
but feel free to propose if I'm wrong.

I created an etherpad for topic ideas, feel free to start thinking about it:
https://etherpad.openstack.org/p/ocata-puppet

Thanks for reading so far,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] reference a type of alarm in template

2016-08-18 Thread Afek, Ifat (Nokia - IL)
Hi Yujun,

You understood correctly. ‘name’ is checked if it exists, and is ignored 
otherwise.

Best Regards,
Ifat.

From: Yujun Zhang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, 17 August 2016 at 18:01
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [vitrage] reference a type of alarm in template

Thanks a lot Liat.

So if `name` is omitted, all alarms from that source will match the entity. 
Otherwise the `name` is checked. The property mapped to `name` is defined in
the datasource driver and transformer. Is that so?

P.S. I know that the ultimate answers can be dug out from the code and
documentation. It just sometimes consumes too much time.

Thanks again for the clarifications from the team. Your answers really helped
me a lot in understanding the vitrage design.
--
Yujun

On Wed, Aug 17, 2016 at 10:03 PM Har-Tal, Liat (Nokia - IL) wrote:
Hi Yujun,

There is no limit on the number of alarms from the same data source that you
can define in the template.

See the following examples:

Example 1: two different alarm definitions from the same data source

- entity:
    category: ALARM
    type: nagios
    name: CPU load
    template_id: nagios_alarm_1
- entity:
    category: ALARM
    type: nagios
    name: Memory use
    template_id: nagios_alarm_2

Example 2: definition of a general alarm from a specific data source

- entity:
    category: ALARM
    type: nagios
    template_id: nagios_alarm
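
The same pattern should also answer the aodh question quoted below (a sketch
only; the `name` values here are hypothetical and depend on what the aodh
datasource exposes):

- entity:
    category: ALARM
    type: aodh
    name: cpu_high_alarm
    template_id: aodh_alarm_1
- entity:
    category: ALARM
    type: aodh
    name: mem_low_alarm
    template_id: aodh_alarm_2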

Best regards,
Liat


From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Wednesday, August 17, 2016 3:51 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] reference a type of alarm in template

Hi, Liat,

Thanks for clarification. But my question still remains.

Under this definition (type <=> datasource), we can only refer to **one** alarm
from each datasource in the template. Is this a limitation?

How should I refer to two different aodh alarms in the template?
--
Yujun

On Tue, Aug 16, 2016 at 10:24 PM Har-Tal, Liat (Nokia - IL) wrote:
Hi Yujun,

The template example you are looking at is invalid.
I added a new valid example (see the following link: 
https://review.openstack.org/#/c/355840/)

As Elisha wrote, the ‘type’ field means the alarm type itself, or in simple
words where it was generated (Vitrage/Nagios/Zabbix/AODH).
The ‘name’ field is not mandatory; it describes the actual problem that the
alarm was raised about.

In the example you can see two alarm types:


1. Zabbix alarm - No use of “name” field:
     category: ALARM
     type: zabbix
     rawtext: Processor load is too high on {HOST.NAME}
     template_id: zabbix_alarm

2. Vitrage alarm:
     category: ALARM
     type: vitrage
     name: CPU performance degradation
     template_id: instance_alarm

One more point: in order to define an entity in the template, the only
mandatory fields are:

- template_id
- category

All the other fields are optional; they are there so that you can define the
entity more accurately.
Each alarm data source has its own set of fields you can use – we will add
documentation for them in the future.

Best regards,
Liat Har-Tal



From: Yujun Zhang 
[mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 16, 2016 5:18 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] reference a type of alarm in template

Hi, Elisha

There is no `name` in the template [1], and `type` is not one of 'nagios',
'aodh' and 'vitrage' in the examples [2].

- entity:
    category: ALARM
    type: Free disk space is less than 20% on volume /
    template_id: host_alarm


[1] 
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
[2] 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst#examples


On Tue, Aug 16, 2016 at 2:21 AM Rosensweig, Elisha (Nokia - IL) wrote:
Hi,

The "type" means where it was generated - aodh, vitrage, nagios...

I think you are looking for "name", a field that describes the actual problem.
We should add that to our documentation to clarify.

Sent from Nine

From: Yujun Zhang
Sent: Aug 15, 2016 16:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [vitrage] 

[openstack-dev] [Kuryr] - Design summit rooms allocation

2016-08-18 Thread Gal Sagie
Hello all,

We need to decide how many rooms to request for the design summit.
I have started the below etherpad so we can all brainstorm the sessions we
would like
to conduct (and who is going to chair them):

https://etherpad.openstack.org/p/kuryr-barcelona

(For the Austin summit we had 1 fishbowl room, 5 working sessions and
Friday half day)

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] How to start all OpenStack services after restarting system?

2016-08-18 Thread Amrith Kumar
The attached script is something I’ve used for a while and it works pretty
well. It does a little more than just the screen reconnection; it also sets up
the swift/cinder volumes that you’ll need if you want to run, say, Trove.

-amrith


From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: Thursday, August 18, 2016 5:52 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [devstack] How to start all OpenStack services 
after restarting system?

Hi Zhi,

Use "screen -c stack-screenrc" this will take you to the services screen.

On Thu, Aug 18, 2016 at 3:03 PM, zhi wrote:
hi, all.

Currently, there is no "rejoin-stack.sh" script in devstack.

 Rerunning "./stack.sh" after restarting the system will clear and
recreate all resources.

 So, how can I quickly start all OpenStack services after restarting the
system?


Thanks
Zhi Chang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks & Regards,
Abhishek
Cloudbyte Inc.
#!/bin/bash
# restart-devstack

# remount the data volumes being used by swift and cinder 
sudo losetup -f /opt/stack/data/swift/drives/images/swift.img
sudo losetup -f /opt/stack/data/stack-volumes-default-backing-file
sudo losetup -f /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
 
# mount the xfs filesystem backing the swift drive (swift.img is typically
# attached to /dev/loop0 by the first losetup call above)
sudo mount -t xfs -o rw,noatime,nodiratime,nobarrier,logbufs=8 /dev/loop0 \
    /opt/stack/data/swift/drives/sdb1
 
# re-launch the screen session
screen -c /opt/stack/devstack/stack-screenrc -m -q -d
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] How to start all OpenStack services after restarting system?

2016-08-18 Thread Abhishek Shrivastava
Hi Zhi,

Use "screen -c stack-screenrc" this will take you to the services screen.

On Thu, Aug 18, 2016 at 3:03 PM, zhi  wrote:

> hi, all.
>
> Currently, there is no "rejoin-stack.sh" script in devstack.
>
>  It will clear all resources and create all resources if I rerun
> "./stack.sh" after restarting system.
>
>  So,  how to start all OpenStack services after restarting system
> quickly?
>
>
> Thanks
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] How to start all OpenStack services after restarting system?

2016-08-18 Thread zhi
hi, all.

Currently, there is no "rejoin-stack.sh" script in devstack.

 Rerunning "./stack.sh" after restarting the system will clear and
recreate all resources.

 So, how can I quickly start all OpenStack services after restarting the
system?


Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
> > Currently, I saw that Magnum only send CPU and Memory metric to Ceilometer, 
> > and Heat can grab these to decide the right scaling method. IMO, this 
> > approach have some problems, please take a look and give feedbacks:
> > - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> > complex scaling policies. For example: 
> > If CPU > 80% then scale out
> > If Mem < 40% then scale in
> > -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> > There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> > conditional logic of Heat also cannot resolve the conflict of scaling 
> > policies. For example:
> > If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% 
> > then scale in
> > -> What if CPU = 90% and Mem = 30%.
>
> What would you like Heat to do in this scenario ? Is it that you would like 
> to have a user defined logic option as well as basic conditionals ?

Thank you, Tim, for the feedback.

Yes, I'd like Heat to validate the user-defined policies along with the heat
template-validate mechanism.

>
> I would expect the same problem to occur in pure Heat scenarios also so a 
> user defined scaling policy would probably be of interest there too and avoid 
> code duplication.
>
> Tim

Currently, there are some blueprints like [1] related to auto-scaling policies,
but I cannot see any interest in this problem there. I hope someone can show
me. The Magnum scaler could be a centralized spot for both Magnum and Zun to
scale COE nodes via Heat, or containers via the COE scaling API (k8s already
has an auto-scaling engine).

[1]. https://blueprints.launchpad.net/heat/+spec/as-lib

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, August 18, 2016 3:19 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?


> On 18 Aug 2016, at 09:56, hie...@vn.fujitsu.com wrote:
> 
> Hi Magnum folks,
> 
> I have some interests in our auto scaling features and currently testing with 
> some container monitoring solutions such as heapster, telegraf and 
> prometheus. I have seen the PoC session corporate with Senlin in Austin and 
> have some questions regarding of this design:
> - We have decided to move all container management from Magnum to Zun, so is 
> there only one level of scaling (node) instead of both node and container?
> - The PoC design show that Magnum (Magnum Scaler) need to depend on 
> Heat/Ceilometer for gathering metrics and do the scaling work based on auto 
> scaling policies, but is Heat/Ceilometer is the best choice for Magnum auto 
> scaling? 
> 
> Currently, I saw that Magnum only send CPU and Memory metric to Ceilometer, 
> and Heat can grab these to decide the right scaling method. IMO, this 
> approach have some problems, please take a look and give feedbacks:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> complex scaling policies. For example: 
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> conditional logic of Heat also cannot resolve the conflict of scaling 
> policies. For example:
> If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% 
> then scale in
> -> What if CPU = 90% and Mem = 30%.

What would you like Heat to do in this scenario ? Is it that you would like to 
have a user defined logic option as well as basic conditionals ?

I would expect the same problem to occur in pure Heat scenarios also so a user 
defined scaling policy would probably be of interest there too and avoid code 
duplication.

Tim

> Thus, I think that we need to implement magnum scaler for validating the 
> policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE. 
> 
> I think we need a new design for auto scaling feature, not for Magnum only 
> but also Zun (because the scaling level of container maybe forked to Zun 
> too). Here are some ideas:
> 1. Add new field enable_monitor to cluster template (ex baymodel) and show 
> the monitoring URL when creating cluster (bay) complete. For example, we can 
> use Prometheus as monitoring container for each cluster. (Heapster is the 
> best choice for k8s, but not good enough for swarm or mesos).
> 2. Create Magnum scaler manager (maybe a new service):
> - Monitoring enabled monitor cluster and send metric to ceilometer if need.
> - Manage user-defined scaling policy: not only cpu and memory but also other 
> metrics like network bw, CCU.
> - Validate user-defined scaling policy and trigger heat for scaling 
> actions. (can trigger nova-scheduler for more scaling options)
> - Need highly scalable architecture, first step we can implement simple 
> validator method but in the future, there are some other approach such as 
> using fuzzy logic or 

Re: [openstack-dev] [OpenStack-docs] [neutron] [api] [doc] API reference for neutron stadium projects (re: API status report)

2016-08-18 Thread Akihiro Motoki
2016-08-13 7:36 GMT+09:00 Anne Gentle :
>
>
> On Fri, Aug 12, 2016 at 3:29 AM, Akihiro Motoki  wrote:
>>
>> this mail focuses on neutron-specific topics. I dropped cinder and ironic
>> tags.
>>
>> 2016-08-11 23:52 GMT+09:00 Anne Gentle :
>> >
>> >
>> > On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle
>> > 
>> > wrote:
>> >>
>> >> Hi all,
>> >> I wanted to report on status and answer any questions you all have
>> >> about
>> >> the API reference and guide publishing process.
>> >>
>> >> The expectation is that we provide all OpenStack API information on
>> >> developer.openstack.org. In order to meet that goal, it's simplest for
>> >> now
>> >> to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
>> >> extension tooling so that users see available OpenStack APIs in a
>> >> sidebar
>> >> navigation drop-down list.
>> >>
>> >> --Migration--
>> >> The current status for migration is that all WADL content is migrated
>> >> except for trove. There is a patch in progress and I'm in contact with
>> >> the
>> >> team to assist in any way. https://review.openstack.org/#/c/316381/
>> >>
>> >> --Theme, extension, release requirements--
>> >> The current status for the theme, navigation, and Sphinx extension
>> >> tooling
>> >> is contained in the latest post from Graham proposing a solution for
>> >> the
>> >> release number switchover and offers to help teams as needed:
>> >>
>> >> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html
>> >>  I
>> >> hope to meet the requirements deadline to get those changes landed.
>> >> Requirements freeze is Aug 29.
>> >>
>> >> --Project coverage--
>> >> The current status for project coverage is that these projects are now
>> >> using the RST+YAML in-tree workflow and tools and publishing to
>> >> http://developer.openstack.org/api-ref/ so they will be
>> >> included in the upcoming API navigation sidebar intended to span all
>> >> OpenStack APIs:
>> >>
>> >> designate http://developer.openstack.org/api-ref/dns/
>> >> glance http://developer.openstack.org/api-ref/image/
>> >> heat http://developer.openstack.org/api-ref/orchestration/
>> >> ironic http://developer.openstack.org/api-ref/baremetal/
>> >> keystone http://developer.openstack.org/api-ref/identity/
>> >> manila http://developer.openstack.org/api-ref/shared-file-systems/
>> >> neutron-lib http://developer.openstack.org/api-ref/networking/
>> >> nova http://developer.openstack.org/api-ref/compute/
>> >> sahara http://developer.openstack.org/api-ref/data-processing/
>> >> senlin http://developer.openstack.org/api-ref/clustering/
>> >> swift http://developer.openstack.org/api-ref/object-storage/
>> >> zaqar http://developer.openstack.org/api-ref/messaging/
>> >>
>> >> These projects are using the in-tree workflow and common tools, but do
>> >> not
>> >> have a publish job in project-config in the jenkins/jobs/projects.yaml
>> >> file.
>> >>
>> >> ceilometer
>> >
>> >
>> > Sorry, in reviewing further today I found another project that does not
>> > have
>> > a publish job but has in-tree source files:
>> >
>> > cinder
>> >
>> > Team cinder: can you let me know where you are in your publishing
>> > comfort
>> > level? Please add an api-ref-jobs: line with a target of block-storage
>> > to
>> > jenkins/jobs/projects.yaml in the project-config repo to ensure
>> > publishing
>> > is correct.
>> >
>> > Another issue is the name of the target directory for the final URL.
>> > Team
>> > ironic can I change your api-ref-jobs: line to bare-metal instead of
>> > baremetal? It'll be better for search engines and for alignment with the
>> > other projects URLs: https://review.openstack.org/354135
>> >
>> > I've also uncovered a problem where a neutron project's API does not
>> > have an
>> > official service name, and am working on a solution but need help from
>> > the
>> > neutron team: https://review.openstack.org/#/c/351407
>>
>> I followed the discussion in https://review.openstack.org/#/c/351407
>> and my understanding of the conclusion is to add API reference source
>> of neutron stadium projects
>> to neutron-lib and publish them under
>> http://developer.openstack.org/api-ref/networking/ .
>> It sounds reasonable to me.
>>
>> We can have a dedicated page for each stadium project, e.g.
>> api-ref/networking/service-function-chaining for networking-sfc.
>> Right now all APIs are placed under the v2/ directory, but that is not
>> good from either a user or a maintenance perspective.
>>
>>
>> So, the next thing we need to clarify is what names and directory
>> structure are appropriate
>> from the documentation perspective.
>> My proposal is to prepare a dedicated directory per networking project
>> repository.
>> The directory name should be a function name rather than a project
>> name. For example,
>> - neutron => ???
>> - neutron-lbaas => load-balancer
>> - neutron-vpnaas => vpn
>> - 

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Tim Bell

> On 18 Aug 2016, at 09:56, hie...@vn.fujitsu.com wrote:
> 
> Hi Magnum folks,
> 
> I have some interests in our auto scaling features and currently testing with 
> some container monitoring solutions such as heapster, telegraf and 
> prometheus. I have seen the PoC session corporate with Senlin in Austin and 
> have some questions regarding of this design:
> - We have decided to move all container management from Magnum to Zun, so is 
> there only one level of scaling (node) instead of both node and container?
> - The PoC design show that Magnum (Magnum Scaler) need to depend on 
> Heat/Ceilometer for gathering metrics and do the scaling work based on auto 
> scaling policies, but is Heat/Ceilometer is the best choice for Magnum auto 
> scaling? 
> 
> Currently, I saw that Magnum only send CPU and Memory metric to Ceilometer, 
> and Heat can grab these to decide the right scaling method. IMO, this 
> approach have some problems, please take a look and give feedbacks:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> complex scaling policies. For example: 
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> conditional logic of Heat also cannot resolve the conflict of scaling 
> policies. For example:
> If CPU > 80% and Mem >70% then scale out
> If CPU < 30% or Mem < 50% then scale in
> -> What if CPU = 90% and Mem = 30%.

What would you like Heat to do in this scenario? Is it that you would like to
have a user-defined logic option as well as basic conditionals?

I would expect the same problem to occur in pure Heat scenarios too, so a
user-defined scaling policy would probably be of interest there as well and
would avoid code duplication.

Tim

> Thus, I think that we need to implement magnum scaler for validating the 
> policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE. 
> 
> I think we need a new design for auto scaling feature, not for Magnum only 
> but also Zun (because the scaling level of container maybe forked to Zun 
> too). Here are some ideas:
> 1. Add new field enable_monitor to cluster template (ex baymodel) and show 
> the monitoring URL when creating cluster (bay) complete. For example, we can 
> use Prometheus as monitoring container for each cluster. (Heapster is the 
> best choice for k8s, but not good enough for swarm or mesos).
> 2. Create Magnum scaler manager (maybe a new service):
> - Monitoring enabled monitor cluster and send metric to ceilometer if need.
> - Manage user-defined scaling policy: not only cpu and memory but also other 
> metrics like network bw, CCU.
> - Validate user-defined scaling policy and trigger heat for scaling actions. 
> (can trigger nova-scheduler for more scaling options)
> - Need highly scalable architecture, first step we can implement simple 
> validator method but in the future, there are some other approach such as 
> using fuzzy logic or AI to make an appropriate decision.
> 
> Some use case for operators:
> - I want to create a k8s cluster, and if CCU or network bandwidth is high 
> please scale-out X nodes in other regions.
> - I want to create swarm cluster, and if CPU or memory is too high, please 
> scale-out X nodes to make sure total CPU and memory is about 50%.
> 
> What do you think about these above ideas/problems?
> 
> [1]. https://blueprints.launchpad.net/heat/+spec/support-conditions-function
> 
> Thanks,
> Hieu LE.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] State of upgrade CLI commands

2016-08-18 Thread mathieu bultel
On 08/18/2016 09:29 AM, Marios Andreou wrote:
> On 17/08/16 15:46, Jiří Stránský wrote:
>> On 16.8.2016 21:08, Brad P. Crochet wrote:
>>> Hello TripleO-ians,
>>>
>>> I've started to look again at the introduced, but unused/undocumented
>>> upgrade commands. It seems to me that given the current state of the
>>> upgrade process (at least from Liberty -> Mitaka), these commands make
>>> a lot less sense.
>>>
>>> I see one of two directions to take on this. Of course I would love to
>>> hear other options.
>>>
>>> 1) Revert these commands immediately, and forget they ever existed.
>>> They don't exactly work, and as I said, were never officially
>>> documented, so I don't think a revert is out of the question.
>>>
>>> or
>>>
>>> 2) Do a major overhaul, and rethink the interface entirely. For
>>> instance, the L->M upgrade introduced a couple of new steps (the AODH
>>> migration and the Keystone migration). These would have either had to
>>> have completely new commands added, or have some type of override to
>>> the existing upgrade command to handle them.
>>>
>>> Personally, I would go for step 1. The 'overcloud deploy' command can
>>> accomplish all of the upgrade steps that involve Heat. In order for
>>> the new upgrade commands to work properly, there's a lot that needs to
>>> be refactored out of the deploy command itself so that it can be
>>> shared with deploy and upgrade, like passing of passwords and the
>>> like. I just don't see a need for discrete commands when we have an
>>> existing command that will do it for us. And with the addition of an
>>> answer file, it makes it even easier.
>>>
>>> Thoughts?
>>>
>> +1 for approach no. 1. Currently `overcloud deploy` meets the upgrade
>> needs and it gave us some flexibility to e.g. do migrations like AODH
>> and Keystone WSGI. I don't think we should have a special command for
>> upgrades at this point.
>>
>> The situation may change as we go towards upgrades of composable
>> services, and perhaps wrap upgrades in Mistral if/when applicable, but
>> then the potential upgrade command(s) would probably be different from
>> the current ones anyway, so +1 for removing them.
> +1 from me too, especially because this ^^^ (the workflow we currently
> have and use will change quite drastically I expect)
>
> thanks, sorry I didn't spot this earlier,
> marios

+1 from me too, even if I think it's not ideal for the end-user
experience and a dedicated CLI would be a better way to handle this.
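
For reference, driving an upgrade through the existing command looks roughly
like this (a sketch; the environment file name follows the Mitaka-era
tripleo-heat-templates layout and may differ per release):

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml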
>> Jirka
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
Hi Magnum folks,

I have some interest in our auto-scaling features and am currently testing
some container monitoring solutions such as heapster, telegraf and prometheus.
I have seen the PoC session done in cooperation with Senlin in Austin and have
some questions regarding this design:
- We have decided to move all container management from Magnum to Zun, so is
there only one level of scaling (node) instead of both node and container?
- The PoC design shows that Magnum (the Magnum Scaler) needs to depend on
Heat/Ceilometer for gathering metrics and doing the scaling work based on
auto-scaling policies, but is Heat/Ceilometer the best choice for Magnum
auto-scaling?

Currently, I see that Magnum only sends CPU and Memory metrics to Ceilometer,
and Heat can grab these to decide the right scaling method. IMO, this approach
has some problems; please take a look and give feedback:
- The AutoScaling Policy and AutoScaling Resource of Heat cannot handle complex
scaling policies. For example:
If CPU > 80% then scale out
If Mem < 40% then scale in
-> What if CPU = 90% and Mem = 30%? The policies conflict.
There are some WIP patch sets for Heat conditional logic in [1]. But IMO, the
conditional logic of Heat also cannot resolve conflicts between scaling
policies. For example:
If CPU > 80% and Mem > 70% then scale out
If CPU < 30% or Mem < 50% then scale in
-> What if CPU = 90% and Mem = 30%?
Thus, I think we need to implement a Magnum scaler for validating the policy
conflicts (a rough sketch follows below).
- Ceilometer may have trouble if we deploy thousands of COEs.
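
As a rough sketch of that policy-conflict validation (the policy structure
below is hypothetical, not an existing Magnum or Heat API):

from collections import namedtuple

Rule = namedtuple('Rule', ['metric', 'op', 'threshold', 'action'])

def matches(rule, sample):
    value = sample[rule.metric]
    return value > rule.threshold if rule.op == '>' else value < rule.threshold

def triggered_actions(rules, sample):
    # More than one distinct action for the same metric sample means the
    # policy set is ambiguous and should be rejected (or prioritized)
    # before any scaling request is handed to Heat.
    return set(rule.action for rule in rules if matches(rule, sample))

rules = [Rule('cpu', '>', 80, 'scale_out'),
         Rule('mem', '<', 40, 'scale_in')]
print(triggered_actions(rules, {'cpu': 90, 'mem': 30}))
# -> set(['scale_out', 'scale_in']): exactly the CPU = 90%, Mem = 30% conflict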

I think we need a new design for the auto-scaling feature, not only for Magnum
but also for Zun (because container-level scaling may be forked to Zun too).
Here are some ideas:
1. Add a new field enable_monitor to the cluster template (ex baymodel) and
show the monitoring URL when cluster (bay) creation completes. For example, we
can use Prometheus as the monitoring container for each cluster. (Heapster is
the best choice for k8s, but not good enough for swarm or mesos.)
2. Create a Magnum scaler manager (maybe a new service) that would:
- Monitor clusters that have monitoring enabled and send metrics to Ceilometer
if needed.
- Manage user-defined scaling policies: not only CPU and memory but also other
metrics like network bandwidth and CCU.
- Validate user-defined scaling policies and trigger Heat for scaling actions
(it could also trigger nova-scheduler for more scaling options).
- Have a highly scalable architecture: as a first step we can implement a
simple validator method, but in the future there are other approaches such as
using fuzzy logic or AI to make an appropriate decision.

Some use cases for operators:
- I want to create a k8s cluster, and if CCU or network bandwidth is high,
please scale out X nodes in other regions.
- I want to create a swarm cluster, and if CPU or memory usage is too high,
please scale out X nodes to keep total CPU and memory usage at about 50%.

What do you think about the above ideas/problems?

[1]. https://blueprints.launchpad.net/heat/+spec/support-conditions-function

Thanks,
Hieu LE.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Need help in simulating Service Function VM for forwarding packets from IngressPort to EgressPort

2016-08-18 Thread Mohan Kumar
Hi Ravi ,

I would suggest ,

[1] Use Ubuntu cloud images from
http://cloud-images.ubuntu.com/trusty/current/ , so you can log in via ssh
and enable IPv4 forwarding by setting "net.ipv4.ip_forward = 1" in
/etc/sysctl.conf (see the sketch below).

[2] In the case of the basic cirros image, please add static routes as Jim
suggested, or run a simple C socket program (built with gcc) to forward
packets.
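
A minimal sketch of [1] on the VM-SF guest (assuming an Ubuntu cloud image
with sudo access):

sudo sysctl -w net.ipv4.ip_forward=1        # enable forwarding immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf  # persist it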

Thanks.,
Mohankumar.N

On Thu, Aug 18, 2016 at 12:52 PM, 张广明  wrote:

> Ravi:
>If you only want to let the VM-SF forward traffic from p2 to p3,
>  you can enable ip_forward in VM-SF and configure two static routes in
> VM-SF.
>Eg: ip route add VM-DST dev p3
>    ip route add VM-SRC dev p2
>
> Jim
>
> 2016-08-18 8:02 GMT+08:00 Ravi Sekhar Reddy Konda <
> ravisekhar.ko...@oracle.com>:
>
>> Hi Networking SFC team
>>
>> I need some help in using the openstack Networking-SFC for service
>> chaining.
>>
>>  I am trying out Openstack networking-SFC on the devstack VM.
>>  I brought up AllInOne Devstack(master branch) on the Ubuntu VM on the
>> VirtualBox.
>>  I am trying out the following scenario
>>
>>   |--------|        |--------|        |--------|
>>   | VM-SRC |        | VM-SF  |        | VM-DST |
>>   |--------|        |--------|        |--------|
>>       p1            p2     p3             p4
>>       |             |       |             |
>>   -----------------------------------------------  Net1
>>
>>
>> I am using  "cirros-0.3.4-x86_64-uec" image for bringing up VM-SRC,
>> VM-DST and VM-SF
>>
>> I got stuck in enabling the "cirros-0.3.4-x86_64-uec" VM to forward
>> packets coming from the Ingress-Port (p2) to the Egress-Port (p3).
>> I think we can use iptables rules to enable the VM to forward packets from
>> Ingress-Port to Egress-Port, but the basic cirros-0.3.4-x86_64-uec
>> image does not contain iptables. Also, I tried to enable external
>> connectivity for the VM using a Floating-IP; I am able to ssh to "VM-SF"
>> from the devstack VM using the
>> floating-IP, but not able to connect to the external world from VM-SF, and
>> I could not even find yum-install on the cirros-0.3.4-x86_64-uec image for
>> installing packages.
>>
>> I want to know if there is any VM with a minimal footprint (like UFM) which
>> can be brought up on the VirtualBox setup to act as a Service Function that
>> enables forwarding.
>> Also, can anybody give me the steps to make the cirros-0.3.4-x86_64-uec
>> image act as a Service Function for forwarding traffic?
>>
>> Thanks in Advance
>> Ravi
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] State of upgrade CLI commands

2016-08-18 Thread Marios Andreou
On 17/08/16 15:46, Jiří Stránský wrote:
> On 16.8.2016 21:08, Brad P. Crochet wrote:
>> Hello TripleO-ians,
>>
>> I've started to look again at the introduced, but unused/undocumented
>> upgrade commands. It seems to me that given the current state of the
>> upgrade process (at least from Liberty -> Mitaka), these commands make
>> a lot less sense.
>>
>> I see one of two directions to take on this. Of course I would love to
>> hear other options.
>>
>> 1) Revert these commands immediately, and forget they ever existed.
>> They don't exactly work, and as I said, were never officially
>> documented, so I don't think a revert is out of the question.
>>
>> or
>>
>> 2) Do a major overhaul, and rethink the interface entirely. For
>> instance, the L->M upgrade introduced a couple of new steps (the AODH
>> migration and the Keystone migration). These would have either had to
>> have completely new commands added, or have some type of override to
>> the existing upgrade command to handle them.
>>
>> Personally, I would go for step 1. The 'overcloud deploy' command can
>> accomplish all of the upgrade steps that involve Heat. In order for
>> the new upgrade commands to work properly, there's a lot that needs to
>> be refactored out of the deploy command itself so that it can be
>> shared with deploy and upgrade, like passing of passwords and the
>> like. I just don't see a need for discrete commands when we have an
>> existing command that will do it for us. And with the addition of an
>> answer file, it makes it even easier.
>>
>> Thoughts?
>>
> 
> +1 for approach no. 1. Currently `overcloud deploy` meets the upgrade
> needs and it gave us some flexibility to e.g. do migrations like AODH
> and Keystone WSGI. I don't think we should have a special command for
> upgrades at this point.
> 
> The situation may change as we go towards upgrades of composable
> services, and perhaps wrap upgrades in Mistral if/when applicable, but
> then the potential upgrade command(s) would probably be different from
> the current ones anyway, so +1 for removing them.

+1 from me too, especially because this ^^^ (the workflow we currently
have and use will change quite drastically I expect)

thanks, sorry I didn't spot this earlier,
marios

> 
> Jirka
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium

2016-08-18 Thread Rui Zang

Hi Nikolas,

First of all, neutron-...@lists.opendaylight.org (copied) might be a 
better place to ask networking-odl questions.


It seems that the external network you described is not managed by
OpenDaylight, so port binding failed.


You probably want to configure multiple mechanism drivers. Say physnet1
is connected to an OVS bridge br-xxx on node-3.domain.tld; you could run
the OVS agent on that host and configure bridge_mappings accordingly. The
openvswitch mechanism driver would then succeed in binding the port (see
the sketch below).
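
A minimal sketch of that setup (assuming the bridge really is named br-xxx
and the agent runs on node-3.domain.tld):

# /etc/neutron/plugins/ml2/ml2_conf.ini on the controller
[ml2]
mechanism_drivers = opendaylight,openvswitch

# OVS agent configuration on node-3.domain.tld
[ovs]
bridge_mappings = physnet1:br-xxx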


Thanks,
Zang, Rui

On 8/17/2016 7:38 PM, Nikolas Hermanns wrote:

Hey Networking-ODL folks,

I just set up a Mirantis 9.0 release together with OpenDaylight Beryllium.
Using networking-odl v2 I constantly see the error:
2016-08-17 11:28:07.927 4040 ERROR neutron.plugins.ml2.managers 
[req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind port 
faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': 
u'flat'}]
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology 
[req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Network topology element 
has failed binding port:
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology 
Traceback (most recent call last):
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py",
 line 117, in bind_port
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology 
port_context, vif_type, self._vif_details)
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", 
line 172, in bind_port
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology 
raise ValueError('Unable to find any valid segment in given context.')
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology 
ValueError: Unable to find any valid segment in given context.
2016-08-17 11:28:07.937 4040 ERROR networking_odl.ml2.network_topology
2016-08-17 11:28:07.938 4040 ERROR networking_odl.ml2.network_topology 
[req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Unable to bind port 
element for given host and valid VIF types:
2016-08-17 11:28:07.939 4040 ERROR neutron.plugins.ml2.managers 
[req-7e834676-81b4-479b-ad45-fa39f0fabed3 - - - - -] Failed to bind port 
faeaa465-6f08-4097-b173-48636cc71539 on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'58d9518c-5664-4099-bcd1-b7818bea853b', 'network_type': 
u'flat'}]

Looking at the code, I saw that you can only bind ports which have a valid
segmentation:
/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py(151)bind_port()

def bind_port(self, port_context, vif_type, vif_details):
    port_context_id = port_context.current['id']
    network_context_id = port_context.network.current['id']
    # Bind port to the first valid segment
    for segment in port_context.segments_to_bind:
        if self._is_valid_segment(segment):  # <---
            # Guess best VIF type for given host
            vif_details = self._get_vif_details(
                vif_details=vif_details, port_context_id=port_context_id,
                vif_type=vif_type)
            LOG.debug(
                'Bind port with valid segment:\n'
                '\tport: %(port)r\n'
                '\tnetwork: %(network)r\n'
                '\tsegment: %(segment)r\n'
                '\tVIF type: %(vif_type)r\n'
                '\tVIF details: %(vif_details)r',
                {'port': port_context_id,
                 'network': network_context_id,
                 'segment': segment, 'vif_type': vif_type,
                 'vif_details': vif_details})
            port_context.set_binding(
                segment[driver_api.ID], vif_type, vif_details,
                status=n_const.PORT_STATUS_ACTIVE)
            return

    raise ValueError('Unable to find any valid segment in given context.')

A valid segmentation is defined by:
[constants.TYPE_LOCAL, constants.TYPE_GRE,
constants.TYPE_VXLAN, constants.TYPE_VLAN]

The port I am trying to bind here is a port on an external network, which is
flat since we do not have segmentation for the external network. Any idea why
this was changed so that I can no longer bind this port?

BR Nikolas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





