Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-24 Thread Vladimir Kozhukalov
The status is as follows:

1) The fuel-main [1] and fuel-library [2] patches can deploy the master node
without Docker containers
2) I have not built an experimental ISO yet (I have been testing and debugging
manually)
3) There are still some flaws (need better formatting, etc.)
4) The plan for tomorrow is to build an experimental ISO and to begin fixing
the system tests and updating the spec.

[1] https://review.openstack.org/#/c/248649
[2] https://review.openstack.org/#/c/248650

Vladimir Kozhukalov

On Mon, Nov 23, 2015 at 7:51 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Colleagues,
>
> I've started working on the change. Here are two patches (fuel-main [1]
> and fuel-library [2]). They are not ready for review yet (they still do not
> work and are under active development). The changes are not going to be huge.
> Here is a spec [3]. I will keep the status up to date in this ML thread.
>
>
> [1] https://review.openstack.org/#/c/248649
> [2] https://review.openstack.org/#/c/248650
> [3] https://review.openstack.org/#/c/248814
>
>
> Vladimir Kozhukalov
>
> On Mon, Nov 23, 2015 at 3:35 PM, Aleksandr Maretskiy <
> amarets...@mirantis.com> wrote:
>
>>
>>
>> On Mon, Nov 23, 2015 at 2:27 PM, Bogdan Dobrelya 
>> wrote:
>>
>>> On 23.11.2015 12:47, Aleksandr Maretskiy wrote:
>>> > Hi all,
>>> >
>>> > as you know, Rally runs inside Docker on the Fuel master node, so the
>>> > Docker removal (a good improvement) is a problem for Rally users.
>>> >
>>> > To solve this, I'm planning to set up a native Rally installation on the
>>> > Fuel master node (which runs CentOS 7), and then write a step-by-step
>>> > instruction for making this installation.
>>> >
>>> > So I hope the Docker removal will not cause issues for Rally users.
>>>
>>> I believe the most backwards-compatible scenario is to keep Docker
>>> installed while moving the fuel-* services from containers back to the host
>>> OS. That way nothing would prevent users from pulling and running whichever
>>> Docker containers they want on the Fuel master node. Makes sense?
>>>
>>>
>> Sounds good
>>
>>
>>
>


[openstack-dev] [Murano] 'NoMatchingFunctionException: No function "#operator_." matches supplied arguments' error when adding an application to an environment

2015-11-24 Thread Vahid S Hashemian
Hi,

I am working on the TOSCA CSAR plugin for Murano and so far I am able to 
successfully import an application definition archive of my CSAR example 
into Murano.
However, when I try to add the imported application to an environment I 
get this error from the Murano Dashboard:

DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/package_fqn.
DEBUG:muranodashboard.catalog.views:Clearing forms data for application 
io.murano.apps.generated.CsarHelloWorld.
DEBUG:muranodashboard.catalog.views:Clearing any leftover wizard step 
data.
DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/ui/ui.yaml.
DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/package_fqn.
DEBUG:muranodashboard.dynamic_ui.services:Using data {} for app 
io.murano.apps.generated.CsarHelloWorld
DEBUG:muranodashboard.dynamic_ui.services:Using in-memory forms for app 
io.murano.apps.generated.CsarHelloWorld
DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/ui/ui.yaml.
DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/package_fqn.
DEBUG:muranodashboard.dynamic_ui.services:Using data {} for app 
io.murano.apps.generated.CsarHelloWorld
DEBUG:muranodashboard.dynamic_ui.services:Using in-memory forms for app 
io.murano.apps.generated.CsarHelloWorld
INFO:muranodashboard.dynamic_ui.forms:Creating form workflowManagement
INFO:muranodashboard.dynamic_ui.forms:Creating form group0
DEBUG:muranodashboard.api:Murano::Client http://localhost:8082/>
DEBUG:muranoclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
324759651d234c4eaf08f6093dfd7000' -H 'Content-Type: application/json' -H 
'User-Agent: python-muranoclient' 
http://localhost:8082//v1/catalog/packages/914c2bfd5d504419a94a9affb7af809a
DEBUG:muranoclient.common.http:
HTTP/1.1 200 OK
Date: Wed, 25 Nov 2015 01:31:12 GMT
Connection: keep-alive
Content-Type: application/json
Content-Length: 560
X-Openstack-Request-Id: req-b6759ab2-04b4-4882-ac95-3ac06f970cb5

{"updated": "2015-11-24T23:28:52", "description": "Template for deploying 
a single server with predefined properties.", "tags": 
["TOSCA-CSAR-generated"], "class_definitions": 
["io.murano.apps.generated.CsarHelloWorld"], "is_public": false, "id": 
"914c2bfd5d504419a94a9affb7af809a", "categories": [], "name": 
"csar_hello_world", "created": "2015-11-24T23:28:52", "author": "OASIS 
TOSCA TC", "enabled": true, "supplier": {}, "fully_qualified_name": 
"io.murano.apps.generated.CsarHelloWorld", "type": "Application", 
"owner_id": "1fee909728c54a698c96f0f7853412ae"}

DEBUG:muranoclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
324759651d234c4eaf08f6093dfd7000' -H 'Content-Type: application/json' -H 
'User-Agent: python-muranoclient' 
http://localhost:8082//v1/environments/b8d83a0b6fde465ab9de013f084518d4
DEBUG:muranoclient.common.http:
HTTP/1.1 200 OK
Date: Wed, 25 Nov 2015 01:31:12 GMT
Connection: keep-alive
Content-Type: application/json
Content-Length: 245
X-Openstack-Request-Id: req-0ccb2b46-8a58-418f-b063-63067042e3f6

{"status": "ready", "updated": "2015-11-24T23:29:11", "created": 
"2015-11-24T23:29:11", "tenant_id": "1fee909728c54a698c96f0f7853412ae", 
"acquired_by": null, "version": 0, "services": [], "id": 
"b8d83a0b6fde465ab9de013f084518d4", "name": "env1"}

DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/ui/ui.yaml.
DEBUG:muranodashboard.common.cache:Using cached value from 
/tmp/muranodashboard-cache/apps/91/4c2bfd5d504419a94a9affb7af809a/package_fqn.
DEBUG:muranodashboard.dynamic_ui.services:Using data {} for app 
io.murano.apps.generated.CsarHelloWorld
DEBUG:muranodashboard.dynamic_ui.services:Using in-memory forms for app 
io.murano.apps.generated.CsarHelloWorld
DEBUG:muranoclient.common.http:curl -i -X GET -H 'X-Auth-Token: 
324759651d234c4eaf08f6093dfd7000' -H 'Content-Type: application/json' -H 
'User-Agent: python-muranoclient' 
http://localhost:8082//v1/catalog/packages/914c2bfd5d504419a94a9affb7af809a
DEBUG:muranoclient.common.http:
HTTP/1.1 200 OK
Date: Wed, 25 Nov 2015 01:31:12 GMT
Connection: keep-alive
Content-Type: application/json
Content-Length: 560
X-Openstack-Request-Id: req-ee4545dd-6dfe-4944-90bc-48a525b099d5

{"updated": "2015-11-24T23:28:52", "description": "Template for deploying 
a single server with predefined properties.", "tags": 
["TOSCA-CSAR-generated"], "class_definitions": 
["io.murano.apps.generated.CsarHelloWorld"], "is_public": false, "id": 
"914c2bfd5d504419a94a9affb7af809a", "categories": [], "name": 
"csar_hello_world", "created": "2015-11-24T23:28:52", "author": "OASIS 
TOSCA TC", "enabled": true, "supplier": {}, "fully_qualified_name": 

Re: [openstack-dev] [all] Time to increase our IRC Meetings infrastructure?

2015-11-24 Thread Tony Breeds
On Tue, Nov 24, 2015 at 11:35:34AM +0100, Thierry Carrez wrote:
> Tony Breeds wrote:
> > [...]
> > I understand that we want to keep the number of parallel meetings to a 
> > minimum,
> > but is it time to add #openstack-meeting-5?
> 
> Some slots are taken by groups that have not been holding meetings for
> quite a long time. For good hygiene, I'd like us to clean those up
> /before/ we consider adding a new meeting channel. It might well still
> be necessary to create a new channel after the cleanup, but we need to
> do that first.

There is a lot of hygiene work to do in the repo :)  I'll see what I can come up
with, but at the very least it's going to require a bunch of people to +1 a
review to drop their meeting ;P

There are 17 slots that are full, so I'll look at those first.  If a slot hasn't
been used this month I'll open a review to drop it (with the appropriate CCs).

I'll go through the other slots later as they're not blocking anything right
now.
 
> It's a bit of a painful work (especially for teams without a clear
> meeting_id) so it's been safely staying at the bottom of my TODO list
> for a while...

Yeah and mine ;P
 
> > [...]
> > Confusion
> > =
> > So this is really quite trivial.  As you can see above we don't have an
> > #openstack-meeting-2 channel the ...-alt is clearly that, but still people 
> > are
> > confused.  How would people feel about creating #openstack-meeting-2 and 
> > making
> > it redirect to #openstack-meeting-alt?
> 
> I'd rather standardize and rename -alt to -2 (i.e. redirect -alt to -2
> on IRC, and bulk-change all meetings YAML from -alt to -2).

I wanted to go the other way as it's slightly less impactful. Most people grok
that '-alt' == '-2'.  If we redirect from 2 -> alt, it's only a few people
that transparently get redirected.

If we do it the other way around everyone already in -alt needs to move.

That's arguing implementation rather than objecting, so I'll start the process
of registering #openstack-meeting-2, which requires people to:
  a) not use it; and
  b) leave the channel if they are already in it.

Registering only takes 30 seconds once people leave (and I'm awake).

To be proactive, I'll also register #openstack-meeting-5 so that when we *do*
need it the startup costs are smaller.

The first thing I'll do after registering the channels is give @infra access.

Yours Tony.




Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-24 Thread Hui Xiang
FYI, bug 1513367 has been opened to track this AppArmor problem, where booting
VMs fails with ovs-dpdk enabled.

https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367

On Tue, Nov 24, 2015 at 9:49 PM, Mooney, Sean K 
wrote:

> Out of interest
>
> Have you removed apparmor or placed all Libvirt apparmor profiles into
> complain mode?
>
>
>
> If not you will get permission denied errors.
>
>
>
> You can confirm by checking dmesg to see if you have any permission denied
> messages from apparmor.
>
> Or run aa-status and see if the Libvirt profile is in enforce/complain
> mode.
>
>
>
> The /tmp/qemu.orig file is just a file we write the original qemu command
> to for debugging. It is not needed,
>
> but all users should be able to read/write to /tmp.
>
>
>
> We wrap the qemu/kvm binary with a script that on Ubuntu can be found here
> /usr/bin/kvm
>
>
>
> If you comment out echo "qemu ${args[@]}" > /tmp/qemu.orig in this script
> it will silence that warning.
>
>
>
>
> https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/libs/ovs-dpdk#L104
>
>
>
> I may remove this from our wrapper script as we almost never use it for
> debugging anymore; however, in the past it was
>
> useful to compare the original qemu command line and the updated qemu
> command line.
>
>
>
> I don’t know if I have mentioned this before, but we also have an Ubuntu
> version of our getting started guide that should merge shortly
>
>
>
> https://review.openstack.org/#/c/243190/6/doc/source/getstarted/ubuntu.rst
>
>
>
> Regards
>
> Sean.
>
>
>
> *From:* Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
> *Sent:* Tuesday, November 24, 2015 12:42 PM
> *To:* Mooney, Sean K
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk]
>
>
>
> Hi All,
>
> I also found another error while launching an instance.
>
> libvirtError: internal error: process exited while connecting to monitor:
> /usr/bin/kvm-spice: line 42: /tmp/qemu.orig: Permission denied
>
> I don't want to change any permissions manually and face the
> dependency issues again, so kindly help.
>
> Thanks,
>
> Prathyusha
>
>
>
>
>
>
>
> On Tue, Nov 24, 2015 at 4:02 PM, Prathyusha Guduri <
> prathyushaconne...@gmail.com> wrote:
>
> Hi Sean,
>
> Thanks for your kind help.
>
> I did the following.
>
> # apt-get install ubuntu-cloud-keyring
> # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu
> "
> \
> "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
> # apt-get update && apt-get dist-upgrade
>
> and then uninstalled the libvirt and qemu that were installed manually and
> then ran stack.sh after cleaning and unstacking.
>
> Now fortunately libvirt and qemu satisfy minimum requirements.
>
> $ virsh --version
> 1.2.12
>
> $ kvm --version
> /usr/bin/kvm: line 42: /tmp/qemu.orig: Permission denied
> QEMU emulator version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.3~cloud0),
> Copyright (c) 2003-2008 Fabrice Bellard
>
> I am using an Ubuntu 14.04 system
> $ uname -a
> Linux ubuntu-Precision-Tower-5810 3.13.0-24-generic #46-Ubuntu SMP Thu Apr
> 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> After stack.sh completed successfully, I tried creating a new instance, which
> gave an ERROR again.
>
> $ nova list
>
> +--++++-+---+
> | ID   | Name   | Status | Task
> State | Power State | Networks
> |
>
> +--++++-+---+
> | 31a7e160-d04c-4216-91cf-30ce86c2b1fa | demo-instance1 | ERROR  |
> -  | NOSTATE | private=10.0.0.3,
> fd34:f4c5:412:0:f816:3eff:fea4:b9fe |
>
> $ sudo service ovs-dpdk status
> sourcing config
> ovs alive
> VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
> 2015-11-24T10:23:25Z|00126|dpdk|INFO|Socket
> /var/run/openvswitch/vhufb8052e5-d3 created for vhost-user port
> vhufb8052e5-d3
> 2015-11-24T10:23:25Z|4|dpif_netdev(pmd18)|INFO|Core 2 processing port
> 'vhufb8052e5-d3'
> 2015-11-24T10:23:25Z|2|dpif_netdev(pmd19)|INFO|Core 8 processing port
> 'dpdk0'
> 2015-11-24T10:23:25Z|00127|bridge|INFO|bridge br-int: added interface
> vhufb8052e5-d3 on port 6
> 2015-11-24T10:23:25Z|5|dpif_netdev(pmd18)|INFO|Core 2 processing port
> 'dpdk0'
> 2015-11-24T10:23:26Z|00128|connmgr|INFO|br-int<->unix: 1 flow_mods in the
> last 0 s (1 deletes)
> 2015-11-24T10:23:26Z|00129|ofp_util|INFO|normalization changed ofp_match,
> details:
> 2015-11-24T10:23:26Z|00130|ofp_util|INFO| pre:
> in_port=5,nw_proto=58,tp_src=136
> 2015-11-24T10:23:26Z|00131|ofp_util|INFO|post: in_port=5
> 

[openstack-dev] [all] Thoughts on python-future?

2015-11-24 Thread Eric Kao
Hi all, 

I've been using the python-future library for Python 3 porting and want to
see what people think of it. http://python-future.org/overview.html#features

The end result is standard Python 3 code made compatible with Python 2 through
library imports. The great thing is that the Python 3 execution path is mostly
independent of the library, so once Python 2 support is dropped, the use of
the library can be dropped too.

Anyone know why it's not used in OpenStack, perhaps alongside six? Thanks!




Re: [openstack-dev] [all] Thoughts on python-future?

2015-11-24 Thread Robert Collins
Which bit in particular do you like? Some of the stuff - like
past.translation - is likely to be deeply fragile, other bits are
fine, but not really any different to six.

On 25 November 2015 at 15:33, Eric Kao  wrote:
> Hi all,
>
> I’ve been using the python-future library for Python 3 porting and want to
> see what people think of it. http://python-future.org/overview.html#features
>
> The end result is standard Python3 code made compatible with Python2 through
> library imports. The great thing is that Python 3 execution is mostly
> independent of the library, so once Python 2 support is dropped, the use of
> the library can be dropped too.
>
> Anyone know why it’s not used in OpenStack perhaps alongside six? Thanks!
>
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Horizon][Neutron] dashboard repository for neutron subprojects

2015-11-24 Thread Akihiro Motoki
Hi,

Neutron now has various subprojects, and some of them would like to
implement Horizon support. Most of them are additional features.
I would like to start a discussion on where we should host this Horizon support.

[Background]
The Horizon team introduced a plugin mechanism, so we can add Horizon panels
from external repositories. The Horizon team recommends external repos for
additional services, for faster iteration on features.
We have various Horizon-related repositories now [1].

In the Neutron world, we already have the neutron-lbaas-dashboard and
horizon-cisco-ui repos.

[Possible options]
There are several possible options for neutron sub-projects.
My current vote is (b), with (a) as second choice; that looks like a good
balance to me. I would like to gather broader opinions.

(a) horizon in-tree repo
- [+] This is the legacy approach and there is no initial effort to set up a repo.
- [+] Easy to share code conventions.
- [-] It does not scale; the Horizon team can become a bottleneck.

(b) a single dashboard repo for all neutron sub-projects
- [+] No need for each sub-project to set up a repo
- [+] Easier to share code conventions. Can get Horizon reviewers.
- [-] Who will be the core reviewers of this repo?

(c) inside each neutron sub-project repo
- [+] Each sub-project can develop a dashboard fast.
- [-] It is doable, but the directory tree can become complicated.
- [-] Leads to too many repos, and the Horizon team/liaison cannot cover them all.

(d) a separate repo per neutron sub-project
Similar to (c)
- [+] A dedicated repo for the dashboard simplifies the directory tree.
- [-] Need to set up a separate repo.
- [-] Leads to too many repos, and the Horizon team/liaison cannot cover them all.


Note that this mail is not intended to move the current Neutron support in
Horizon out of the Horizon tree; I would like to discuss Horizon support for
additional features.

Akihiro

[1] http://docs.openstack.org/developer/horizon/plugins.html



Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Tom Fifield

On 24/11/15 19:20, Dolph Mathews wrote:

Scenarios I've been personally involved with where the
"distrustful" model either did help or would have helped:

- Employee is reprimanded by management for not positively reviewing &
approving a coworker's patch.

- A team of employees is pressured to land a feature as fast as
possible. Minimal community involvement means a faster path to "merged,"
right?

- A large group of reviewers from the author's organization repeatedly
throwing *many* careless +1s at a single patch. (These happened to not
be cores, but it's a related organizational behavior taken to an extreme.)

I can actually think of a few more specific examples, but they are
already described by one of the above.

It's not cores that I do not trust; it's the organizations they operate
within which I have learned not to trust.


I think this is a good summary of people's fears and practical experience.

Though, it seems that the cases above derive from not understanding how we
work, rather than from deliberate malice. We can fix this kind of thing with
education :)


Putting this out there - over at the Foundation, we're here to Protect 
and Empower you. So, if you've ever been reprimanded by management for 
choosing not to abuse the community process, perhaps we should arrange 
an education session with that manager (or their manager) on how 
OpenStack works.





On Monday, November 23, 2015, Morgan Fainberg wrote:

Hi everyone,

This email is being written in the context of Keystone more than any
other project but I strongly believe that other projects could
benefit from a similar evaluation of the policy.

Most projects have a policy that prevents the following scenario (it
is a social policy not enforced by code):

* Employee from Company A writes code
* Other Employee from Company A reviews code
* Third Employee from Company A reviews and approves code.

This policy has a lot of history as to why it was implemented. I am
not going to dive into the depths of this history as that is the
past and we should be looking forward. This type of policy is an
actively distrustful policy. With exception of a few potentially bad
actors (again, not going to point anyone out here), most of the
folks in the community who have been given core status on a project
are trusted to make good decisions about code and code quality. I
would hope that any/all of the Cores would also standup to their
management chain if they were asked to "just push code through" if
they didn't sincerely think it was a positive addition to the code base.

Now within Keystone, we have a fair amount of diversity of core
reviewers, but we each have our specialities and in some cases
(notably KeystoneAuth and even KeystoneClient) getting the required
diversity of reviews has significantly slowed/stagnated a number of
reviews.

What I would like us to do is to move to a trustful policy. I can
confidently say that company affiliation meant very little to me
when I was PTL and nominating someone for core. We should explore
making a change to a trustful model, and allow for cores (regardless
of company affiliation) review/approve code. I say this since we
have clear steps to correct any abuses of this policy change.

With all that said, here is the proposal I would like to set forth:

1. Code reviews still need 2x Core Reviewers (no change)
2. Code can be developed by a member of the same company as both
core reviewers (and approvers).
3. If the trust that is being given via this new policy is violated,
the code can [if needed], be reverted (we are using git here) and
the actors in question can lose core status (PTL discretion) and the
policy can be changed back to the "distrustful" model described above.

I hope that everyone weighs what it means within the community to
start moving to a trusting-of-our-peers model. I think this would be
a net win and I'm willing to bet that it will remove noticeable
roadblocks [and even make it easier to have an organization work
towards stability fixes when they have the resources dedicated to it].

Thanks for your time reading this.

Regards,
--Morgan
PTL Emeritus, Keystone









Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-24 Thread Prathyusha Guduri
Thank you so much Sean,

It helped a lot, and I now have VMs up and running :) That's so kind of you
to spend your time. It would be nice if this AppArmor issue were mentioned
somewhere in the documentation; it will help newbies like me.

@Hui - Thank you for the info and the config file :)

Thanks,
Prathyusha


On Wed, Nov 25, 2015 at 7:28 AM, Hui Xiang  wrote:

> FYI,  bug 1513367 is opened for this apparmor problem to track when
> booting vms failed with ovs-dpdk enabled.
>
> https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1513367
>
> On Tue, Nov 24, 2015 at 9:49 PM, Mooney, Sean K 
> wrote:
>
>> Out of interest
>>
>> Have you removed apparmor or placed all Libvirt apparmor profies into
>> complain mode?
>>
>>
>>
>> If not you will get permission denied errors.
>>
>>
>>
>> You can confirm by checking dmesg to see if you have any permission
>> denied messages from apparmor
>>
>> Or run aa-status and see if the the Libvirt profie is in enforce/complain
>> mode.
>>
>>
>>
>> The  /tmp/qemu.orig file is just a file we write the original qemu
>> command to for debugging. It is not needed
>>
>> But all uses should be able to read/write to /tmp.
>>
>>
>>
>> We wrap the qemu/kvm binary with a script that on Ubuntu can be found
>> here /usr/bin/kvm
>>
>>
>>
>> If you comment out echo "qemu ${args[@]}" > /tmp/qemu.orig in this script
>> it will silence that warning.
>>
>>
>>
>>
>> https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/libs/ovs-dpdk#L104
>>
>>
>>
>> I may remove this from our wrapper script as we most never use it for
>> debugging  anymore however in the past it was
>>
>> Useful to compare the original qemu command line and the update qemu
>> command line.
>>
>>
>>
>> I don’t know if I have mentioned this before but we also have a Ubuntu
>> version of our getting start guide that should merge shortly
>>
>>
>>
>> https://review.openstack.org/#/c/243190/6/doc/source/getstarted/ubuntu.rst
>>
>>
>>
>> Regards
>>
>> Sean.
>>
>>
>>
>> *From:* Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
>> *Sent:* Tuesday, November 24, 2015 12:42 PM
>> *To:* Mooney, Sean K
>> *Cc:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk]
>>
>>
>>
>> Hi All,
>>
>> I also found another error while launching an instance.
>>
>> libvirtError: internal error: process exited while connecting to monitor:
>> /usr/bin/kvm-spice: line 42: /tmp/qemu.orig: Permission denied
>>
>> I dont want to change any permissions manually and again face the
>> dependency issues. So kindly help
>>
>> Thanks,
>>
>> Prathyusha
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Nov 24, 2015 at 4:02 PM, Prathyusha Guduri <
>> prathyushaconne...@gmail.com> wrote:
>>
>> Hi Sean,
>>
>> Thanks for you kind help.
>>
>> I did the following.
>>
>> # apt-get install ubuntu-cloud-keyring
>> # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu
>> "
>> \
>> "trusty-updates/kilo main" >
>> /etc/apt/sources.list.d/cloudarchive-kilo.list
>> # apt-get update && apt-get dist-upgrade
>>
>> and then uninstalled the libvirt and qemu that were installed manually
>> and then ran stack.sh after cleaning and unstacking.
>>
>> Now fortunately libvirt and qemu satisfy minimum requirements.
>>
>> $ virsh --version
>> 1.2.12
>>
>> $ kvm --version
>> /usr/bin/kvm: line 42: /tmp/qemu.orig: Permission denied
>> QEMU emulator version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.3~cloud0),
>> Copyright (c) 2003-2008 Fabrice Bellard
>>
>> Am using an ubuntu 14.04 system
>> $ uname -a
>> Linux ubuntu-Precision-Tower-5810 3.13.0-24-generic #46-Ubuntu SMP Thu
>> Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>>
>> After stack.sh which was successful, tried creating a new instance -
>> which gave an ERROR again.
>>
>> $ nova list
>>
>> +--++++-+---+
>> | ID   | Name   | Status | Task
>> State | Power State | Networks
>> |
>>
>> +--++++-+---+
>> | 31a7e160-d04c-4216-91cf-30ce86c2b1fa | demo-instance1 | ERROR  |
>> -  | NOSTATE | private=10.0.0.3,
>> fd34:f4c5:412:0:f816:3eff:fea4:b9fe |
>>
>> $ sudo service ovs-dpdk status
>> sourcing config
>> ovs alive
>> VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
>> 2015-11-24T10:23:25Z|00126|dpdk|INFO|Socket
>> /var/run/openvswitch/vhufb8052e5-d3 created for vhost-user port
>> vhufb8052e5-d3
>> 2015-11-24T10:23:25Z|4|dpif_netdev(pmd18)|INFO|Core 2 processing port
>> 'vhufb8052e5-d3'
>> 2015-11-24T10:23:25Z|2|dpif_netdev(pmd19)|INFO|Core 8 processing port
>> 

Re: [openstack-dev] [Horizon][Neutron] dashboard repository for neutron subprojects

2015-11-24 Thread Fawad Khaliq
Hi Akihiro,

Great initiative.

On Wed, Nov 25, 2015 at 10:46 AM, Akihiro Motoki  wrote:

> Hi,
>
> Neutron has now various subprojects and some of them would like to
> implement Horizon supports. Most of them are additional features.
> I would like to start the discussion where we should have horizon support.
>
> [Background]
> Horizon team introduced a plugin mechanism and we can add horizon panels
> from external repositories. Horizon team is recommending external repos for
> additional services for faster iteration and features.
> We have various horizon related repositories now [1].
>
> In Neutron related world, we have neutron-lbaas-dashboard and
> horizon-cisco-ui repos.
>
> [Possible options]
> There are several possible options for neutron sub-projects.
> My current vote is (b), and the next is (a). It looks a good balance to me.
> I would like to gather broader opinions,
>
> (a) horizon in-tree repo
> - [+] It was a legacy approach and there is no initial effort to setup a
> repo.
> - [+] Easy to share code conventions.
> - [-] it does not scale. Horizon team can be a bottleneck.

Based on the learnings from Neutron plugins, it makes sense not to go down
this route.

>
> (b) a single dashboard repo for all neutron sub-projects
> - [+] No need to set up a repo by each sub-project
> - [+] Easier to share the code convention. Can get horizon reviewers.
> - [-] who will be a core reviewer of this repo?

Here, for sub-project-specific Horizon features, we would be adding
packaging/configuration complexity.


>
> (c) neutron sub-project repo
> - [+] Each sub-project can develop a dashboard fast.
> - [-] It is doable, but the directory tree can be complicated.
> - [-] Lead to too many repos and the horizon team/liaison cannot cover all.
>
If we work in the same mode as we do for DevStack plugins today, that is, inside
each sub-project's repo, the number of repos stays equal to the number of
sub-projects. Also, this simplifies packaging and allows each sub-project's
owners to take ownership of the code. Guidance from Horizon folks would be
appreciated to help interested people get started. The question is: do Horizon
plugins follow the same model as Neutron plugins, where all the code is part
of the Horizon umbrella? If yes, then this might not be the ideal
option.


>
> (d) a separate repo per neutron sub-project
> Similar to (c)
> - [+] A dedicate repo for dashboard simplifies the directory tree.
> - [-] Need to setup a separate repo.
> - [-] Lead to too many repos and the horizon team/liaison cannot cover all.

 Agree

>
>
> Note that this mail is not intended to move the current neutron
> support in horizon
> to outside of horizon tree. I would like to discuss Horizon support of
> additional features.
>
> Akihiro
>
> [1] http://docs.openstack.org/developer/horizon/plugins.html
>
>


Re: [openstack-dev] [all] Time to increase our IRC Meetings infrastructure?

2015-11-24 Thread Tony Breeds
On Wed, Nov 25, 2015 at 12:51:54PM +1100, Tony Breeds wrote:

> There is lots of hygiene stuff to do in the repo :)  I'll see what I can come 
> up
> with but at the very least it's going to require a bunch of people to +1 a
> review to drop their meeting ;P

Okay it wasn't too bad.  Of the periods that are "full" I found ~10 meetings
that can potentially be removed.

https://review.openstack.org/#/q/is:open+project:openstack-infra/irc-meetings+topic:feature/hygiene,n,z

If they all go then we're down to only one hour that is fully booked, which
probably doesn't justify adding a new channel.

Tony.




Re: [openstack-dev] Rally as A Service?

2015-11-24 Thread JJ Asghar

On 11/24/15 1:35 PM, Munoz, Obed N wrote:
> Is there any plan or work-in-progress for RaaS (Rally as a
> Service)? I saw it mentioned on
> https://wiki.openstack.org/wiki/Rally


I think we've talked about something like this at the Operators
Meetups. But this[1] is more or less what everyone just defaults to.

[1]: https://github.com/svashu/docker-rally

-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2



Re: [openstack-dev] Rally as A Service?

2015-11-24 Thread Munoz, Obed N

-- 
Obed N Munoz
Cloud Engineer @ ClearLinux Project
Open Source Technology Center








On 11/24/15, 4:11 PM, "JJ Asghar"  wrote:

>
>On 11/24/15 1:35 PM, Munoz, Obed N wrote:
>> Is there any plan or work-in-progress for RaaS (Rally as a
>> Service)? I saw it mentioned on
>> https://wiki.openstack.org/wiki/Rally
>
>
>I think we've talked about something like this at the Operators
>Meetups. But this[1] is more or less what everyone just defaults to.
>
>[1]: https://github.com/svashu/docker-rally

Yeah, actually, I’d prefer to use the official Rally Project’s  Dockerfile.

https://github.com/openstack/rally/blob/master/Dockerfile 


We’re creating a new Dockerfile that takes the above as a base and then adds
some automation for running the db create and the HTML report generation.

>
>- -- 
>Best Regards,
>JJ Asghar
>c: 512.619.0722 t: @jjasghar irc: j^2
>


Re: [openstack-dev] [magnum-ui][magnum] Suggestions for Features/Improvements

2015-11-24 Thread Jay Lau
Thanks Brad. I just added a new item: "Enable end users to create container
objects via magnum-ui, such as pods, RCs, services, etc." It would be great
if we could let end users create container applications via the Magnum UI.
Comments? Thanks!

On Wed, Nov 25, 2015 at 1:15 AM, Bradley Jones (bradjone) <
bradj...@cisco.com> wrote:

> We have started to compile a list of possible features/improvements for
> Magnum-UI. If you have any suggestions about what you would like to see in
> the plugin please leave them in the etherpad so we can prioritise what we
> are going to work on.
>
> https://etherpad.openstack.org/p/magnum-ui-feature-list
>
> Thanks,
> Brad Jones
>
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] Rally as A Service?

2015-11-24 Thread Boris Pavlovic
Obed,

We are refactoring Rally to make it possible to run it as a daemon.
There is a substantial amount of work that needs to be done, including this spec:
https://review.openstack.org/#/c/182245/

Best regards,
Boris Pavlovic

On Tue, Nov 24, 2015 at 2:20 PM, Munoz, Obed N 
wrote:

>
> --
> Obed N Munoz
> Cloud Engineer @ ClearLinux Project
> Open Source Technology Center
>
>
>
>
>
>
>
>
> On 11/24/15, 4:11 PM, "JJ Asghar"  wrote:
>
> >
> >On 11/24/15 1:35 PM, Munoz, Obed N wrote:
> >> Is there any plan or work-in-progress for RaaS (Rally as a
> >> Service)? I saw it mentioned on
> >> https://wiki.openstack.org/wiki/Rally
> >
> >
> >I think we've talked about something like this at the Operators
> >Meetups. But this[1] is more or less what everyone just defaults to.
> >
> >[1]: https://github.com/svashu/docker-rally
>
> Yeah, actually, I’d prefer to use the official Rally Project’s  Dockerfile.
>
> https://github.com/openstack/rally/blob/master/Dockerfile
>
>
> We’re creating a new Dockerfile that takes the above as base and then adds
> some automation for
> Running the db create and html file generation.
>
> >
> >- --
> >Best Regards,
> >JJ Asghar
> >c: 512.619.0722 t: @jjasghar irc: j^2
> >
>


[openstack-dev] [all][glance][api][infra][defcore] last call for feedback on image import (again)

2015-11-24 Thread Brian Rosmaita
Thanks to all the OpenStackers who have participated so far.  A major revision 
is now available for your reading pleasure.

https://review.openstack.org/#/c/232371/

I realize a USA holiday is coming up very soon this week ... the Glance 
community would be thankful to get some comments before then, if possible.

thanks!
brian


Re: [openstack-dev] [release] process change for closing bugs when patches merge

2015-11-24 Thread Thomas Goirand
Hi Doug!

Thanks for taking the time to gather opinions before going ahead.

On 11/23/2015 10:58 PM, Doug Hellmann wrote:
> As part of completing the release automation work and deprecating our
> use of Launchpad for release content tracking, we also want make some
> changes to the way patches associated with bugs are handled.
> 
> Right now, when a patch with Closes-Bug in the commit message merges,
> the bug status is updated to "Fix Committed" to indicate that the change
> is in git, but not a formal release, and we rely on the release tools to
> update the bug status to "Fix Released" when the release is  made. This
> is one of the most error prone areas of the release process, due to
> Launchpad service timeouts and other similar issues.

So, because Launchpad is unreliable, we're going to have bad bug
tracking. Why can't we have Launchpad fixed? Or the signaling to it
fixed so that it retries?

I've been told (during the Storyboard session in Tokyo) that debbugs is
a bad choice, because people don't know how to use email. Well, at
least, it has version tracking, and the email protocol is by its nature
asynchronous, and there are retries...

> The fact that we
> cannot reliably automate this step is the main reason we want to stop
> using Launchpad's release content tracking capabilities in the first
> place.

This is a good argument for no longer using Launchpad, not for no longer
tracking things correctly.

> Please let me know if this change would represent a significant
> regression to your project's workflow.

As long as there's a comment left in the bug, that's fine for *my*
workflow as a package maintainer.

Though I still believe that having to work around such a Launchpad
deficiency this way is very frustrating. I have no doubt that you have
little choice here, otherwise you wouldn't propose it.

On 11/24/2015 05:26 PM, Thierry Carrez wrote:
> I don't think we need to guarantee that the comment is added.

So, when I don't see the comment in a fix which I am trying to apply to
one of my packages, I should tell myself: "oh, are we in the case of a
timeout?" ... Gosh, this seems so broken and unreliable ... If we don't
guarantee that the comment will be there in case of a release, then why
bother at all to have them sent?

> So it would be a shame to restrict task tracking possibilities to two
> projects per bug just to preserve accurate change tracking in
> Launchpad, while the plan is to focus Launchpad usage solely on task
> tracking :)

I'm not sure I am following here. Are we already using phabricator for
bug tracking?

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-11-24 Thread Shraddha Pandhe
On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen 
wrote:

> On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > Hi,
> >
> > I would like to know how everyone is using maintenance mode and what is
> > expected from admins about nodes in maintenance. The reason I am bringing
> > up this topic is because, most of the ironic operations, including manual
> > cleaning are not allowed for nodes in maintenance. Thats a problem for
> us.
> >
> > The way we use it is as follows:
> >
> > We allow users to put nodes in maintenance mode (indirectly) if they find
> > something wrong with the node. They also provide a maintenance reason
> along
> > with it, which gets stored as "user_reason" under maintenance_reason. So
> > basically we tag it as user specified reason.
> >
> > To debug what happened to the node our operators use manual cleaning to
> > re-image the node. By doing this, they can find out all the issues
> related
> > to re-imaging (dhcp, ipmi, image transfer, etc). This debugging process
> > applies to all the nodes that were put in maintenance either by user, or
> by
> > system (due to power cycle failure or due to cleaning failure).
>
> Interesting; do you let the node go through cleaning between the user
> nuking the instance and doing this manual cleaning stuff?
>

Do you mean automated cleaning? If so, yes, we let that go through since
that's allowed in maintenance mode.

>
> At Rackspace, we leverage the fact that maintenance mode will not allow
> the node to proceed through the state machine. If a user reports a
> hardware issue, we maintenance the node on their behalf, and when they
> delete it, it boots the agent for cleaning and begins heartbeating.
> Heartbeats are ignored in maintenance mode, which gives us time to
> investigate the hardware, fix things, etc. When the issue is resolved,
> we remove maintenance mode, it goes through cleaning, then back in the
> pool.


What is the provision state when maintenance mode is removed? Does it
automatically go back into the available pool? How does a user report a
hardware issue?

Due to our large scale, we can't always ensure that someone will take care of
the node right away, so we have some automation to make sure that the user's
quota is freed.

1. If a user finds some problem with the node, the user calls our break-fix
extension (with a reason for the break-fix), which deletes the instance for the
user and frees the quota.
2. Internally nova deletes the instance and calls destroy on the virt driver.
This follows the normal delete flow with automated cleaning.
3. We have an automated tool called Reparo which constantly monitors the
node list for nodes in maintenance mode.
4. If it finds any nodes in maintenance, it runs one round of manual
cleaning on each of them to check whether the issue was transient.
5. If cleaning fails, we need someone to take a look at the node.
6. If cleaning succeeds, we put the node back in the available pool.

This is the only way we can scale to hundreds of thousands of nodes (a rough
sketch of the monitoring loop is below). If manual cleaning were not allowed in
maintenance mode, our operators would hate us :)
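For illustration only, a rough sketch of what such a monitoring loop can look
like (python-ironicclient names are assumed, the credentials are placeholders,
and trigger_manual_clean() is a hypothetical stand-in for our internal
re-imaging/clean helper):

    # Sketch only: find nodes in maintenance and try one round of cleaning.
    from ironicclient import client

    ironic = client.get_client('1',
                               os_auth_url='http://keystone.example.com:5000/v2.0',
                               os_username='admin',
                               os_password='secret',
                               os_tenant_name='admin')

    for node in ironic.node.list(maintenance=True):
        # One round of manual cleaning to see whether the issue was transient.
        if trigger_manual_clean(node.uuid):          # hypothetical helper
            ironic.node.set_maintenance(node.uuid, 'off')
        # If cleaning fails, leave the node in maintenance for an operator.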

If the provision state of the node is such that the node cannot be
picked up by the scheduler, we can remove maintenance mode and run manual
cleaning.


> We used to enroll nodes in maintenance mode, back when the API put them
> in the available state immediately, to avoid them being scheduled to
> until we knew they were good to go. The enroll state solved this for us.
>
> Last, we use maintenance mode on available nodes if we want to
> temporarily pull them from the pool for a manual process or some
> testing. This can also be solved by the manageable state.
>
> // jim
>
>


Re: [openstack-dev] [all] Time to increase our IRC Meetings infrastructure?

2015-11-24 Thread Ed Leafe
On Nov 24, 2015, at 4:35 AM, Thierry Carrez  wrote:

> I'd rather standardize and rename -alt to -2 (i.e. redirect -alt to -2
> on IRC, and bulk-change all meetings YAML from -alt to -2).

+1. 'alt' is confusing.

-- Ed Leafe









Re: [openstack-dev] [neutron][sfc]

2015-11-24 Thread Cathy Zhang
Hi Oguz,

I will forward you the email on the steps of using DevStack to set up SFC. 

As you mentioned, the DevStack support patch for networking-sfc is being worked 
on and is under review. 
The code that supports this feature, including the unit tests, has been 
developed and uploaded for review. We are now actively working on integration 
testing of all the code pieces as well as end-to-end testing of this feature, 
and on bug fixes. 

Thanks,
Cathy

-Original Message-
From: Oguz Yarimtepe [mailto:oguzyarimt...@gmail.com] 
Sent: Tuesday, November 24, 2015 1:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][sfc]

Hi,

Is there any working DevStack configuration for SFC testing? I just saw one 
commit that is awaiting review.




[openstack-dev] [puppet] Puppet 4 and Beaker testing

2015-11-24 Thread Cody Herriges
Today we do unit testing against all kinds of Puppet versions, but
beaker's acceptance testing is currently limited to the 3.8.x series.
This is because we install "latest" Puppet from Puppet Labs repositories
but do so using the old packaging style, which relied on dependency
packages being delivered by the Operating System vendors, things like
Ruby or OpenSSL.  With the release of Puppet 4.0, Puppet Labs decided to
take a different approach and has pushed into the open source space a
compilation package that contains all runtime dependencies, packaged as
"puppet-agent" and generally referred to as PC1.  This is the same thing
Puppet Labs had been doing for Puppet Enterprise customers for a few years.

Along with this comes new paths for common files.

* All puppet code is now expected to be in /etc/puppetlabs/code
* Puppet configs are in /etc/puppetlabs/puppet/puppet.conf
* etc.
* etc.
* etc.

To better understand what is going on, visit the following links:

* http://docs.puppetlabs.com/puppet/4.3/reference/about_agent.html
* http://docs.puppetlabs.com/puppet/4.3/reference/whered_it_go.html

The result of this will be beaker tests running against Ruby 2.x, Puppet
4.x, and the native C++11 rewrite of Facter but I need some eyes and
discussion on the best way to phase in this change to our CI.  I am
asking people to take a look at the following three commits.

https://review.openstack.org/#/q/status:open+topic:pc1,n,z

* (WIP) Prototype the usage of different base paths
  - The changes here would be pushed to puppet-modulesync-configs and
deployed to all modules
* (WIP) Add support for different module paths
  - Sets up the use of a "BASE_PATH" environment variable to handle the
differences of /etc/puppetlabs vs. /etc/puppet
* (WIP) Test puppet-openstack modules on more Puppet versions
  - Initial phase-in of new-style tests that handle beaker runs for the
old-style "foss" agent, which is going to be stuck on the 3.8.x series, and
the new-style "agent" from PC1 for 4.x

Thanks,

-- Cody







[openstack-dev] [searchlight] Weekly IRC meeting cancelled

2015-11-24 Thread Tripp, Travis S
We will not be holding our weekly IRC meeting this week due to the
US Thanksgiving holiday.  Our regular meeting will resume Thursday,
December 3rd.

As always, you can find the meeting schedule and agenda here:
http://eavesdrop.openstack.org/#Searchlight_Team_Meeting


Thanks,
Travis


[openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-24 Thread Nathan Reller
> the cinder admin and the nova admin are ALWAYS the same people

There is interest in hybrid clouds where the Nova and Cinder services
are managed by different providers. The customer would place higher
trust in Nova because you must trust the compute service, and the
customer would place less trust in Cinder. One way to achieve this
would be to have all encryption done by Nova. Cinder would simply see
encrypted data and provide a good cheap storage solution for data.

Consider a company with sensitive data. They can run the compute nodes
themselves and offload Cinder service to some third-party service.
This way they are the only ones who can manage the machines that see
the plaintext.

-Nate



[openstack-dev] [networking-ovn] Supporting multiple routes per router interface

2015-11-24 Thread Amitabha Biswas
Hi,

The https://review.openstack.org/#/c/237820/ patch added the support for 
adding router interfaces in OVN NB. This patch only provides support for a 
single route per interface. Neutron supports multiple routes per router 
interface. While attempting to model this in OVN Northbound DB, I ran into 
some problems.

The OVN Northbound DB at this point supports only a single route per 
Logical Router Port. To solve the multiple-routes-per-router-interface 
issue while working within this OVN NB model restriction, I created a 
separate lport and lrouter port for each subnet supported by a router 
interface.

For example, if a router interface supports both the 192.168.1.0/24 and 
2001:db8:cafe::/64 routes, I added 2 lports and 2 lrouter ports in the OVN 
NB.

Logical Port Table:
UUID-PA ["fa:16:3e:59:80:ad 192.168.1.1"]   "port-1" 
{router-port="UUID-RA"} ["fa:16:3e:59:80:ad"] router
UUID-PB ["fa:16:3e:59:80:ad 2001:db8:cafe::1"]  "port-2" 
{router-port="UUID-RB"} ["fa:16:3e:59:80:ad"] router

Logical Router Port Table:
UUID-RA "fa:16:3e:59:80:ad" "port-1" "192.168.1.1/24"  []
UUID-RB "fa:16:3e:59:80:ad" "port-2" "2001:db8:cafe::1/64" []

Note that both ports (in both tables) have the same MAC corresponding to 
the neutron router interface MAC.

This results in the following logical flow:
...
table=3(switch_in_l2_lkup), priority=   50, match=(eth.dst == 
fa:16:3e:59:80:ad), action=(outport = "port-2"; output;)
table=3(switch_in_l2_lkup), priority=   50, match=(eth.dst == 
fa:16:3e:59:80:ad), action=(outport = "port-1"; output;)

As we can see there are 2 conflicting/overlapping rules for the MAC in 
table 3 and the packet could be sent to the wrong output port.

To support the multiple-routes-per-interface feature, we need to propose (on 
ovs-dev or ovs-discuss) that OVN NB allow multiple routes per logical router 
port. Additionally, it seems that the MAC must be unique in the Logical Port 
Table for the logical flows to be unambiguous.
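For illustration only, the shape such an extended NB schema could take (this is
a sketch of the proposal, not the current schema): a single Logical Router Port
row carrying both networks, e.g.

Logical Router Port Table:
UUID-RA "fa:16:3e:59:80:ad" "port-1" ["192.168.1.1/24", "2001:db8:cafe::1/64"] []

With that, one lport/lrouter-port pair per router interface would be enough, and
the duplicate eth.dst rules in table 3 would go away.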

Thanks
Amitabha




Re: [openstack-dev] [Openstack-i18n] DocImpact Changes

2015-11-24 Thread Lana Brindley

Hi Akihiro,

Judging by most of the bugs we see in the openstack-manuals queue, the bugs 
that will be raised in projects' own queues will usually be against their own 
developer documentation. In the rare instance that that isn't the case, and it 
needs to come into one of the openstack-manuals docs, then we are happy to 
discuss the issue with the person who raised the bug and, possibly, take the 
bug in our queue to be fixed.

To be clear: just because DocImpact raises a bug in a project's queue, doesn't 
mean docs aren't available to help fix it. We're always here for you to ask 
questions about docs bugs, even if those bugs are in your repo and your 
developer documentation.

I hope that answers the question.

Thanks,
Lana

On 24/11/15 16:07, Akihiro Motoki wrote:
> it sounds like a good idea in general.
> 
> One question is how each project team should handle DocImpact bugs
> filed in their own project.
> It will be a part of "Plan and implement an education campaign"
> mentioned in the docs spec.
> I think some guidance will be needed. One usual question will be which
> guide(s) need to be updated?
> openstack-manuals now has a lot of manuals and most developers are
> not aware of all of them.
> 
> Akihiro
> 
> 
> 2015-11-24 7:33 GMT+09:00 Lana Brindley :
> Hi everyone,
> 
> The DocImpact flag was implemented back in Havana to make it easier for 
> developers to notify the documentation team when a patch might cause a change 
> in the docs. This has had an overwhelming response, resulting in a huge 
> amount of technical debt for the docs team, so we've been working on a plan 
> to change how DocImpact works. This way, instead of just creating a lot of 
> noise, it will hopefully go back to being a useful tool. This is written in 
> more detail in our spec[1], but this email attempts to hit the highlights for 
> you.
> 
> TL;DR: In the future, you will need to add a description whenever you add a 
> DocImpact flag in your commit message.
> 
> Right now, if you create a patch and include 'DocImpact' in the commit 
> message, Infra raises a bug in either the openstack-manuals queue, or (for 
> some projects) your own project queue. DocImpact is capable of handling a 
> description field, but this is rarely used. Instead, docs writers are having 
> to dig through your patch to determine what (if any) change is required.
> 
> What we are now implementing has two main facets:
> 
> * The DocImpact flag can only be used if it includes a description. A Jenkins 
> job will test for this, and if you include DocImpact without a description, 
> the job will fail. The description can be a short note about what needs 
> changing in the docs, a link to a gist or etherpad with more information, or 
> a note about the location of incorrect docs. There are specific examples in 
> the spec[2].
> * Only Nova, Swift, Glance, Keystone, and Cinder will have DocImpact bugs 
> created in the openstack-manuals queue. All other projects will have 
> DocImpact bugs raised in their own queue. This is handled in [3] and [4].
> 
> This will hopefully reduce the volume of DocImpact bugs being created in the 
> openstack-manuals queue, and also improve the quality of the bugs that are 
> created.
> 
> I know there is a fair amount of confusion over DocImpact already, so I'm 
> more than happy to field questions on this.
> 
> Thanks,
> Lana
> 
> 
> 1: 
> http://specs.openstack.org/openstack/docs-specs/specs/mitaka/review-docimpact.html
> 2: 
> http://specs.openstack.org/openstack/docs-specs/specs/mitaka/review-docimpact.html#examples
> 3: https://review.openstack.org/#/c/248515/
> 4: https://review.openstack.org/#/c/248549/
> 
>>
>> ___
>> Openstack-i18n mailing list
>> openstack-i...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][keystone] keystonemiddleware release 1.5.3 (kilo)

2015-11-24 Thread gord chung
this is the history Brant provided: 
http://lists.openstack.org/pipermail/openstack-dev/2014-May/036427.html


it's apparently on his todo list... or until he finds the person he 
needs to defer it to[1]


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/latest.log.html#t2015-11-24T15:48:42




On 24/11/15 02:37 PM, Morgan Fainberg wrote:
Would it be possible to get a bit more detail on (especially if other 
projects are mocking like this) what is being done so that I [or 
another Keystone dev] can work towards a real mock/test module in 
keystonemiddleware so this doesn't occur again due to 
internal-interface mocking?


On Tue, Nov 24, 2015 at 11:24 AM, gord chung > wrote:


mriedem and I resolved this with keystone folks earlier
today [1]. It should be all better now [2].

[1]

http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-11-24.log.html#t2015-11-24T14:38:39
[2] https://review.openstack.org/#/c/249268/



On 24/11/15 12:33 PM, Alan Pevec wrote:

keystonemiddleware 1.5.3: Middleware for OpenStack Identity

periodic-ceilometer-python27-kilo started failing after this
release
First bad:

http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-kilo/40c5453/testr_results.html.gz
test_acl_scenarios failing with 401 Unauthorized


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
gord




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in docker client?

2015-11-24 Thread Qiao,Liyong

hi all
In the Magnum code, we hardcode DEFAULT_DOCKER_TIMEOUT = 10.
This brings trouble in some bad networking environments (or with a badly 
performing swarm master).

At least it doesn't work on our gate.

Here is a test patch on the gate: https://review.openstack.org/249522 . I set 
the timeout to 180 to confirm the failure is due to the timeout parameter 
passed to the docker client, but we need to choose a suitable value.

I checked the docker client's default value, DEFAULT_TIMEOUT_SECONDS = 60;
I wonder why we override it with 10?

Please let me know what your thoughts are. My suggestion is to set 
DEFAULT_DOCKER_TIMEOUT at least as long as our RPC timeout.
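
For illustration only (this is just a sketch, not the actual Magnum change; the
option name and group below are made up), the hardcoded constant could become
an oslo.config option that we pass to the docker client:

from oslo_config import cfg

import docker

docker_opts = [
    cfg.IntOpt('docker_client_timeout',
               default=60,  # matches docker-py's DEFAULT_TIMEOUT_SECONDS
               help='Timeout in seconds for docker client operations.'),
]

CONF = cfg.CONF
CONF.register_opts(docker_opts, group='docker')


def get_docker_client(base_url):
    # Pass the configured timeout instead of a hardcoded value.
    return docker.Client(base_url=base_url,
                         timeout=CONF.docker.docker_client_timeout)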

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Neutron] dashboard repository for neutron subprojects

2015-11-24 Thread Armando M.
On 24 November 2015 at 21:46, Akihiro Motoki  wrote:

> Hi,
>
> Neutron now has various subprojects and some of them would like to
> implement Horizon support. Most of them are additional features.
> I would like to start the discussion on where we should have Horizon support.
>
> [Background]
> Horizon team introduced a plugin mechanism and we can add horizon panels
> from external repositories. Horizon team is recommending external repos for
> additional services for faster iteration and features.
> We have various horizon related repositories now [1].
>
> In Neutron related world, we have neutron-lbaas-dashboard and
> horizon-cisco-ui repos.
>
> [Possible options]
> There are several possible options for neutron sub-projects.
> My current vote is (b), and the next is (a). It looks like a good balance to me.
> I would like to gather broader opinions,
>
> (a) horizon in-tree repo
> - [+] It was a legacy approach and there is no initial effort to setup a
> repo.
> - [+] Easy to share code conventions.
> - [-] it does not scale. Horizon team can be a bottleneck.
>
> (b) a single dashboard repo for all neutron sub-projects
> - [+] No need to set up a repo by each sub-project
> - [+] Easier to share the code convention. Can get horizon reviewers.
> - [-] who will be a core reviewer of this repo?
>
> (c) neutron sub-project repo
>

All circumstances considered, I think c) is the only viable one.


> - [+] Each sub-project can develop a dashboard fast.
> - [-] It is doable, but the directory tree can be complicated.
>

why? do you envision something else other than /horizon directory in the
tree?


> - [-] Lead to too many repos and the horizon team/liaison cannot cover all.
>

If that's true for horizon, shouldn't the same be true for the neutron team
:)? IMO, the level of feedback/oversight provided is always going to be
constant (you can't clone people) no matter how the efforts are
distributed. I'd rather empower the individual projects.


>
> (d) a separate repo per neutron sub-project
> Similar to (c)
> - [+] A dedicate repo for dashboard simplifies the directory tree.
> - [-] Need to setup a separate repo.
> - [-] Lead to too many repos and the horizon team/liaison cannot cover
> all.
>

> Note that this mail is not intended to move the current neutron
> support in horizon
> to outside of horizon tree. I would like to discuss Horizon support of
> additional features.
>
> Akihiro
>
> [1] http://docs.openstack.org/developer/horizon/plugins.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Neutron] dashboard repository for neutron subprojects

2015-11-24 Thread Fawad Khaliq
On Wed, Nov 25, 2015 at 12:06 PM, Armando M.  wrote:

>
>
> On 24 November 2015 at 21:46, Akihiro Motoki  wrote:
>
>> Hi,
>>
>> Neutron now has various subprojects and some of them would like to
>> implement Horizon support. Most of them are additional features.
>> I would like to start the discussion on where we should have Horizon support.
>>
>> [Background]
>> Horizon team introduced a plugin mechanism and we can add horizon panels
>> from external repositories. Horizon team is recommending external repos
>> for
>> additional services for faster iteration and features.
>> We have various horizon related repositories now [1].
>>
>> In Neutron related world, we have neutron-lbaas-dashboard and
>> horizon-cisco-ui repos.
>>
>> [Possible options]
>> There are several possible options for neutron sub-projects.
>> My current vote is (b), and the next is (a). It looks like a good balance
>> to me.
>> I would like to gather broader opinions,
>>
>> (a) horizon in-tree repo
>> - [+] It was a legacy approach and there is no initial effort to setup a
>> repo.
>> - [+] Easy to share code conventions.
>> - [-] it does not scale. Horizon team can be a bottleneck.
>>
>> (b) a single dashboard repo for all neutron sub-projects
>> - [+] No need to set up a repo by each sub-project
>> - [+] Easier to share the code convention. Can get horizon reviewers.
>> - [-] who will be a core reviewer of this repo?
>>
>> (c) neutron sub-project repo
>>
>
> All circumstances considered, I think c) is the only viable one.
>
+1

>
>
>> - [+] Each sub-project can develop a dashboard fast.
>> - [-] It is doable, but the directory tree can be complicated.
>>
>
> why? do you envision something else other than /horizon directory in the
> tree?
>
>
>> - [-] Lead to too many repos and the horizon team/liaison cannot cover
>> all.
>>
>
> If that's true for horizon, shouldn't the same be true for the neutron
> team :)? IMO, the level of feedback/oversight provided is always going to
> be constant (you can't clone people) no matter how the efforts are
> distributed. I'd rather empower the individual projects.
>
Agree. +1

>
>
>>
>> (d) a separate repo per neutron sub-project
>> Similar to (c)
>> - [+] A dedicate repo for dashboard simplifies the directory tree.
>> - [-] Need to setup a separate repo.
>> - [-] Lead to too many repos and the horizon team/liaison cannot cover
>> all.
>>
>
>> Note that this mail is not intended to move the current neutron
>> support in horizon
>> to outside of horizon tree. I would like to discuss Horizon support of
>> additional features.
>>
>> Akihiro
>>
>> [1] http://docs.openstack.org/developer/horizon/plugins.html
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][TaaS] Query regarding TaaS API

2015-11-24 Thread reedip banerjee
Dear Members of Neutron-TaaS community,
I have a few queries related to the TaaS API, and would like to know a bit
more detail about them.

a) Currently Tap Service and Tap flow Endpoints are listed under /v2_0/taas/

For example:
http://111.000.222.115:9696/v2.0/taas/tap-services.json
http://111.000.222.115:9696/v2.0/taas/tap-flows.json

Is it necessary to list the endpoints under /taas/?
Can we keep them under v2_0 like most of the other Neutron Extensions?
i.e.

http://111.000.222.115:9696/v2.0/taas/tap-services.json  -->
http://111.000.222.115:9696/v2.0/tap-services.json

b) Currently TaaS has 2 different ports:
- Tap Service uses port_id to specify ports
- Tap Flow uses source_port to specify ports.
As both of these attributes are under different endpoints, can we use
'port' instead of port_id and source_port?
From my understanding, 'port' makes a bit of sense, and it is also a known
attribute in Neutron.
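
For illustration only (a rough sketch, not the current TaaS code; validators and
other attribute details below are assumptions), the extension's resource
attribute map could expose a common 'port' attribute on both resources:

# Hypothetical sketch of a TaaS RESOURCE_ATTRIBUTE_MAP using a common
# 'port' attribute instead of port_id/source_port.
RESOURCE_ATTRIBUTE_MAP = {
    'tap_services': {
        'id': {'allow_post': False, 'allow_put': False,
               'is_visible': True, 'primary_key': True},
        'port': {'allow_post': True, 'allow_put': False,
                 'validate': {'type:uuid': None},
                 'is_visible': True},
    },
    'tap_flows': {
        'id': {'allow_post': False, 'allow_put': False,
               'is_visible': True, 'primary_key': True},
        'port': {'allow_post': True, 'allow_put': False,
                 'validate': {'type:uuid': None},
                 'is_visible': True},
    },
}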



-- 
Thanks and Regards,
Reedip Banerjee
irc:reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in docker client?

2015-11-24 Thread Adrian Otto
Li Yong,

At any rate, this should not be hardcoded. I agree that the default value 
should match the RPC timeout.

Adrian

> On Nov 24, 2015, at 11:23 PM, Qiao,Liyong  wrote:
> 
> hi all
> In the Magnum code, we hardcode DEFAULT_DOCKER_TIMEOUT = 10.
> This brings trouble in some bad networking environments (or with a badly 
> performing swarm master). At least it doesn't work on our gate.
> 
> Here is a test patch on the gate: https://review.openstack.org/249522 . I set 
> the timeout to 180 to confirm the failure is due to the timeout parameter 
> passed to the docker client, but we need to choose a suitable value.
> 
> I checked the docker client's default value, DEFAULT_TIMEOUT_SECONDS = 60;
> I wonder why we override it with 10?
> 
> Please let me know what your thoughts are. My suggestion is to set 
> DEFAULT_DOCKER_TIMEOUT at least as long as our RPC timeout.
> 
> -- 
> BR, Eli(Li Yong)Qiao
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-11-24 Thread AFEK, Ifat (Ifat)
Hi Gord, Hi Ryota,

Thanks for your detailed responses.
Hope you don't mind that I'm sending one reply to both of your emails. I think 
it would be easier to have one thread for this discussion.

Let me explain our use case in more detail.
Here is an example of how we would like to integrate with AODH. Let me know 
what you think about it. 

1. Vitrage gets an alarm from Nagios about high cpu load on one of the hosts
2. Vitrage evaluator decides (based on its templates) that an "instance might 
be suffering due to high cpu load on the host" alarm should be triggered for 
every instance on this host
3. Vitrage notifier creates corresponding alarm definitions in AODH
4. AODH stores these alarms in its database
5. Vitrage triggers the alarms
6. AODH updates the alarms states and notifies about it
7. Horizon user queries AODH for a list of all alarms (we are currently 
checking the status of a blueprint that should implement it[2]). AODH returns a 
list that includes the alarms that were triggered by Vitrage.
8. Horizon user selects one of the alarms that Vitrage generated, and asks to 
see its root cause (we will create a new blueprint for that). Vitrage API 
returns the RCA information for this alarm.

Our current discussion is on steps 3-6 (as far as we understand, and please 
correct me if I'm wrong, nothing blocks the implementation of the blueprint for 
step 7).

Looking at AODH API again, here is what I think we need to do:

1. Define an alarm with an external_trigger_rule or something like that. This 
alarm has no metric data. We just want to be able to trigger it and query its 
state.
2. Use AODH API for triggering this alarm. Will "PUT 
/v2/alarms/(alarm_id)/state" do the job? 
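
To make the intended flow concrete, here is a rough client-side sketch
(assumptions: the 'external' alarm type does not exist yet and is exactly what
we are proposing; the endpoint, port and auth handling are simplified):

import json

import requests

AODH = 'http://aodh-api:8042'
HEADERS = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}

# 1. Create the alarm definition (hypothetical 'external' type, no metric rule).
alarm = {
    'name': 'instance-suffering-high-host-cpu-load',
    'type': 'external',            # proposed type, does not exist in Aodh today
    'alarm_actions': ['log://'],
    'ok_actions': ['log://'],
}
resp = requests.post(AODH + '/v2/alarms', headers=HEADERS,
                     data=json.dumps(alarm))
alarm_id = resp.json()['alarm_id']

# 2. Later, trigger it externally by setting its state.
requests.put(AODH + '/v2/alarms/%s/state' % alarm_id, headers=HEADERS,
             data=json.dumps('alarm'))  # the body is a JSON-encoded state string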


Please see also my comments below.

Thanks,
Ifat.


[2] 
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page 




> -Original Message-
> From: gord chung [mailto:g...@live.ca]
> Sent: Monday, November 23, 2015 9:45 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom
> alarms in AODH
> 
> 
> 
> On 23/11/2015 11:14 AM, AFEK, Ifat (Ifat) wrote:
> > I guess I would like to do both: create a new alarm definition, then
> > trigger it (call alarm_actions), and possibly later on set its state
> > back to OK (call ok_action).
> > I understood that currently all alarm triggering is internal in AODH,
> > according to threshold/events/combination alarm rules. Would it be
> > possible to add a new kind of rule, that will allow triggering the
> > alarm externally?
> what type of rule?
> 
> i have https://review.openstack.org/#/c/247211 which would
> theoretically allow you to push an action into queue which would then
> trigger appropriate REST call. not sure if it helps you plug into Aodh
> easier or not?

We need to add an alarm definition with an "external_rule", and then trigger 
it. It is important for us that the alarm definition will be stored in AODH 
database for future queries. As far as I understand, the queue should help only 
with the triggering?

> 
> --
> gord


> -Original Message-
> From: Ryota Mibu [mailto:r-m...@cq.jp.nec.com]
> Sent: Tuesday, November 24, 2015 10:00 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom
> alarms in AODH
> 
> Hi Ifat,
> 
> 
> Thank you for starting discussion how AODH can be integrated with
> Vitrage that would be a good example of AODH integration with other
> OpenStack components.
> 
> The key role of creating an alarm definition is to set the endpoint
> (alarm_actions) which can receive alarm notifications from AODH. How
> can the endpoints be set in your use case? Are those endpoints
> configured via the vitrage API and stored in its DB?

We have a graph database that will include resources and alarms imported from 
a few sources of information (including Ceilometer), as well as alarms generated 
by Vitrage. However, we would like our alarms to be stored in AODH as well. If 
I understood you correctly, we will need the endpoints in order to be notified 
on Ceilometer alarms.

> 
> I agree with Gordon, you can use event-alarm by generating an "event"
> containing the alarming message that can be captured in aodh if vitrage
> relays the alarm definition to aodh. That is a more feasible way rather
> than creating the alarm definition right before triggering the alarm
> notification. The reason is that aodh evaluator may not be aware of new
> alarm definitions and won't send notification until its alarm
> definition cache is refreshed in less than 60 sec (default value).

Logically speaking, we would like to create alarms and not events. Our goal is 
to alert when something is wrong. Creating events might work as a workaround, 
but this is not our preferred solution. 

> 
> Having special rule and external evaluator would be alternative, but it
> should be difficult to catch up latest aodh, since it will be changed
> 

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Ryan Rossiter



On 11/24/2015 8:35 AM, Andrew Laski wrote:

On 11/24/15 at 10:26am, Balázs Gibizer wrote:






I think I see your point, and it seems like a good way forward. Let's 
turn the black list to
a white list. Now I'm thinking about creating a new Field type 
something like
WhiteListedObjectField which get a type name (as the ObjectField) but 
also get
a white_list that describes which fields needs to be used from the 
original type.
Then this new field serializes only the white listed fields from the 
original type
and only forces a version bump on the parent object if one of the 
white_listed field

changed or a new field added to the white_list.
What it does not solve out of the box is the transitive dependency. 
If today we
have an o.vo object having a field to another o.vo object and we want to put 
to put
the first object into a notification payload but want to white_list 
fields from
the second o.vo then our white list needs to be able to handle not 
just first
level fields but subfields too. I guess this is doable but I'm 
wondering if we
can avoid inventing a syntax expressing something like 
'field.subfield.subsubfield'

in the white list.


Rather than a whitelist/blacklist why not just define the schema of 
the notification within the notification object and then have the 
object code handle pulling the appropriate fields, converting formats 
if necessary, from contained objects.  Something like:


class ServicePayloadObject(NovaObject):
    SCHEMA = {'host': ('service', 'host'),
              'binary': ('service', 'binary'),
              'compute_node_foo': ('compute_node', 'foo'),
              }

    fields = {
        'service': fields.ObjectField('Service'),
        'compute_node': fields.ObjectField('ComputeNode'),
    }

    def populate_schema(self):
        # Pull in the related object(s) and flatten the payload per SCHEMA.
        self.compute_node = self.service.compute_node
        notification = {}
        for key, (obj, field) in self.SCHEMA.items():
            notification[key] = getattr(getattr(self, obj), field)
        return notification

Then object changes have no effect on the notifications unless there's 
a major version bump in which case a SCHEMA_VNEXT could be defined if 
necessary.

To be fair, that is basically a whitelist ;) [1]. But if we use this 
method, don't we lose a lot of o.vo's usefulness? When we serialize, we 
have to specifically *not* use the fields because that is the master 
sheet of information that we don't want to expose all of. Either that or 
we have to do the transform as part of the serialization using the 
schema, which you may be aiming at, I might just be looking at the 
snippet too literally.



[1] http://www.smbc-comics.com/index.php?id=3907

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][keystone] keystonemiddleware release 1.5.3 (kilo)

2015-11-24 Thread Alan Pevec
> keystonemiddleware 1.5.3: Middleware for OpenStack Identity

periodic-ceilometer-python27-kilo started failing after this release
First bad: 
http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-kilo/40c5453/testr_results.html.gz
test_acl_scenarios failing with 401 Unauthorized

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Morgan Fainberg
On Tue, Nov 24, 2015 at 6:44 AM, Lance Bragstad  wrote:

> I think one of the benefits of the current model was touched on earlier by
> dstanek. If someone is working on something for their organization, they
> typically bounce ideas of others they work with closely. This tends to be
> people within the same organization. The groups developing the feature
> might miss different perspectives on solving that particular problem.
> Bringing in a fresh set of eyes from someone outside that organization can
> be a huge benefit for the overall product.
>
> I don't think that is the sole reason to keep the existing policy, but I
> do think it's a positive side-effect.
>
>
I would assert that this is a side effect of good development practices and
really has not a lot to do with the policy at this point. The policy
doesn't really enforce/drive this. The cores are ultimately the gate
keepers and should be encouraging the continued reviewing by parties not
affiliated with their organization. I don't believe changing this policy
will generally have a negative impact on the cross-team reviewing. Cores
can always choose to defer (we do this in many cases) when reviewing as
well, to encourage the cross-org review/view-points.

So, yes, these are all positive things as outlined, but I don't think we'd see a
significant change on these fronts with a well-rounded core group (as we
have). Encouraging good development practices is far different from what
the not-same-affiliation-for-cores-as-committer policy accomplishes. I would
like to refocus the discussion here back towards the policy itself rather
than side-effects that are really a result of a healthy and mature
development community.



> On Tue, Nov 24, 2015 at 6:31 AM, David Chadwick 
> wrote:
>
>> Spot on. This is exactly the point I was trying to make
>>
>> David
>>
>> On 24/11/2015 11:20, Dolph Mathews wrote:
>> > Scenarios I've been personally involved with where the
>> > "distrustful" model either did help or would have helped:
>> >
>> > - Employee is reprimanded by management for not positively reviewing &
>> > approving a coworkers patch.
>> >
>> > - A team of employees is pressured to land a feature as fast as
>> > possible. Minimal community involvement means a faster path to "merged,"
>> > right?
>> >
>> > - A large group of reviewers from the author's organization repeatedly
>> > throwing *many* careless +1s at a single patch. (These happened to not
>> > be cores, but it's a related organizational behavior taken to an
>> extreme.)
>> >
>> > I can actually think of a few more specific examples, but they are
>> > already described by one of the above.
>> >
>> > It's not cores that I do not trust, its the organizations they operate
>> > within which I have learned not to trust.
>> >
>> > On Monday, November 23, 2015, Morgan Fainberg <
>> morgan.fainb...@gmail.com
>> > > wrote:
>> >
>> > Hi everyone,
>> >
>> > This email is being written in the context of Keystone more than any
>> > other project but I strongly believe that other projects could
>> > benefit from a similar evaluation of the policy.
>> >
>> > Most projects have a policy that prevents the following scenario (it
>> > is a social policy not enforced by code):
>> >
>> > * Employee from Company A writes code
>> > * Other Employee from Company A reviews code
>> > * Third Employee from Company A reviews and approves code.
>> >
>> > This policy has a lot of history as to why it was implemented. I am
>> > not going to dive into the depths of this history as that is the
>> > past and we should be looking forward. This type of policy is an
>> > actively distrustful policy. With exception of a few potentially bad
>> > actors (again, not going to point anyone out here), most of the
>> > folks in the community who have been given core status on a project
>> > are trusted to make good decisions about code and code quality. I
>> > would hope that any/all of the Cores would also standup to their
>> > management chain if they were asked to "just push code through" if
>> > they didn't sincerely think it was a positive addition to the code
>> base.
>> >
>> > Now within Keystone, we have a fair amount of diversity of core
>> > reviewers, but we each have our specialities and in some cases
>> > (notably KeystoneAuth and even KeystoneClient) getting the required
>> > diversity of reviews has significantly slowed/stagnated a number of
>> > reviews.
>> >
>> > What I would like us to do is to move to a trustful policy. I can
>> > confidently say that company affiliation means very little to me
>> > when I was PTL and nominating someone for core. We should explore
>> > making a change to a trustful model, and allow for cores (regardless
>> > of company affiliation) review/approve code. I say this since we
>> > have clear 

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Andrew Laski

On 11/24/15 at 11:19am, Ryan Rossiter wrote:



On 11/24/2015 8:35 AM, Andrew Laski wrote:

On 11/24/15 at 10:26am, Balázs Gibizer wrote:






I think I see your point, and it seems like a good way forward. 
Let's turn the black list to
a white list. Now I'm thinking about creating a new Field type 
something like
WhiteListedObjectField which get a type name (as the ObjectField) 
but also get
a white_list that describes which fields needs to be used from 
the original type.
Then this new field serializes only the white listed fields from 
the original type
and only forces a version bump on the parent object if one of the 
white_listed field

changed or a new field added to the white_list.
What it does not solve out of the box is the transitive 
dependency. If today we
have an o.vo object having a field to another o.vo object and we 
want to put
the first object into a notification payload but want to 
white_list fields from
the second o.vo then our white list needs to be able to handle 
not just first
level fields but subfields too. I guess this is doable but I'm 
wondering if we
can avoid inventing a syntax expressing something like 
'field.subfield.subsubfield'

in the white list.


Rather than a whitelist/blacklist why not just define the schema of 
the notification within the notification object and then have the 
object code handle pulling the appropriate fields, converting 
formats if necessary, from contained objects.  Something like:


class ServicePayloadObject(NovaObject):
    SCHEMA = {'host': ('service', 'host'),
              'binary': ('service', 'binary'),
              'compute_node_foo': ('compute_node', 'foo'),
              }

    fields = {
        'service': fields.ObjectField('Service'),
        'compute_node': fields.ObjectField('ComputeNode'),
    }

    def populate_schema(self):
        # Pull in the related object(s) and flatten the payload per SCHEMA.
        self.compute_node = self.service.compute_node
        notification = {}
        for key, (obj, field) in self.SCHEMA.items():
            notification[key] = getattr(getattr(self, obj), field)
        return notification

Then object changes have no effect on the notifications unless 
there's a major version bump in which case a SCHEMA_VNEXT could be 
defined if necessary.
To be fair, that is basically a whitelist ;) [1]. 


Heh, fair point :)

But if we use this 
method, don't we lose a lot of o.vo's usefulness? When we serialize, 
we have to specifically *not* use the fields because that is the 
master sheet of information that we don't want to expose all of. 
Either that or we have to do the transform as part of the 
serialization using the schema, which you may be aiming at, I might 
just be looking at the snippet too literally.


I was thinking along the lines of doing the transform as part of 
serialization.  But really I wasn't thinking that serialization as it's 
done for RPC is needed at all.  I'm not expecting notification consumers 
to hydrate Nova objects from whatever is emitted, just receive a 
standard JSON payload that they can use.  So I wouldn't expect 
notification code to call obj_to_primitive() but perhaps a new emit() 
method which will do the transform to the schema.
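
For illustration (purely a sketch of the idea, not an agreed interface; the
names are placeholders), such an emit() could just apply the SCHEMA transform
and hand the resulting dict to an oslo.messaging notifier:

def emit(self, notifier, context, event_type):
    # Build the plain dict payload per SCHEMA (see populate_schema() above)
    # instead of serializing the whole object with obj_to_primitive().
    payload = self.populate_schema()
    notifier.info(context, event_type, payload)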


I think we're all on a similar page though and perhaps we can continue 
the discussion on a review to nail down details.





[1] http://www.smbc-comics.com/index.php?id=3907

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-24 Thread Dmitry Nikishov
Folks, I have updated a spec, please review:
https://review.openstack.org/#/c/243340

On Fri, Nov 20, 2015 at 4:50 PM, Dmitry Nikishov 
wrote:

> Stanislaw,
>
> proposing patches could be a viable option long-term, however, by the time
> these patches make it upstream, Fuel will use CentOS 7 w/ systemd.
>
> On Fri, Nov 20, 2015 at 4:05 PM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, as we work on open source - it would be really nice to propose
>> patches upstream for non-Fuel services. But if that is not an option -
>> using puppet makes sense to me.
>>
>> On Fri, Nov 20, 2015 at 11:01 PM, Dmitry Nikishov > > wrote:
>>
>>> Stanislaw,
>>>
>>> I want to clarify: there are 2 types of services, run on the Fuel node:
>>> - Those, which are a part of Fuel (astute, nailgun etc)
>>> - Those, which are not (e.g. atop)
>>>
>>> Capabilities for the former can easily be managed via post-install
>>> scripts, embedded in respective package spec file (since specs are a part
>>> of fuel-* repo). This is a very good idea.
>>> Capabilities for the latter will have to be taken care of via either
>>> a. some external utility (puppet)
>>> b. rebuilding respective package with updated spec
>>>
>>> I'd say that (a) is still more convenient.
>>>
>>> Another option would be to have a fine-grained control only on Fuel
>>> services and leave all the other at their defaults.
>>>
>>> On Fri, Nov 20, 2015 at 1:19 PM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I just propose the way I think is right, because it seems strange
 to install a package from a *.deb file and then set privileges on it with a
 third-party utility. Permissions for an app are now mostly managed by
 post-install scripts. Moreover - if they aren't, they should be, because if
 you set capabilities by puppet there will always be a gap between installation
 and setting permissions, so you would have to couple the package installation
 process with setting permissions by puppet - otherwise you will have no way to
 use your app.

 Setting setuid bits on apps is not a good idea - it is why linux
 capabilities were introduced.

 On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw,
>
> In my opinion the whole feature shouldn't be in the separate package
> simply because it will actually affect the code of many, if not all,
> components of Fuel.
>
> The only services whose capabilities will have to be managed by puppet
> are those, which are installed from upstream packages (e.g. atop) -- not
> built from fuel-* repos.
>
> Supervisord doesn't seem to use Linux capabilities, id does setuid
> instead:
> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>
> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I mean whole feature.
>> Btw, why do you want to grant capabilities via puppet? It should be
>> done by post-install package section, I believe.
>>
>> Also I don't know if supervisord can bind process capabilities
>> like systemd can - we could use this opportunity too.
>>
>> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> My main concern with using linux capabilities/acls on files is
>>> actually puppet support or, actually, the lack of it. ACLs are possible
>>> AFAIK, but we'd need to write a custom type/provider for capabilities. I
>>> suggest to wait with capabilities support till systemd support.
>>>
>>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Stanislaw, do you mean the whole feature, or just a user? Since
 feature would require actually changing puppet code.

 On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
 sbogat...@mirantis.com> wrote:

> Dmitry, I believe it should be done via package spec as a part of
> installation.
>
> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Hello folks,
>>
>> I have updated the spec, please review and share your thoughts on
>> it: https://review.openstack.org/#/c/243340/
>>
>> Thanks.
>>
>> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matthew,
>>>
>>> sorry, didn't mean to butcher your name :(
>>>
>>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Matther,

 I totally agree that each daemon should have it's own user
 which should 

Re: [openstack-dev] [neutron][ipam][devstack] How to use IPAM reference in devstack

2015-11-24 Thread Sean M. Collins
On Mon, Nov 23, 2015 at 08:34:46PM EST, John Belamaric wrote:
> It currently only supports greenfield deployments. See [1] - we plan to build 
> a migration in the Mitaka time frame.

A DevStack stack.sh run is about as greenfield as it's going to get. So,
this leaves me wondering: how are we testing all this code we have for
the IPAM feature if you can't install it in DevStack?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipam][devstack] How to use IPAM reference in devstack

2015-11-24 Thread Sean M. Collins
Ignore me, I'm not able to read and comprehend today. Sorry.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-24 Thread Ben Nemec
On 11/23/2015 01:20 AM, Juan Antonio Osorio wrote:
> 
> 
> On Sat, Nov 21, 2015 at 2:05 AM, Ben Nemec  > wrote:
> 
> Thinking about this some more makes me wonder if we need a sample config
> generator like oslo.config.  It would work off something similar to the
> capabilities map, where you would say
> 
> SSL:
>   templates:
>     - puppet/extraconfig/tls/tls-cert-inject.yaml
>   output:
>     - environments/enable-ssl.yaml
> 
> And the tool would look at that, read all the params from
> tls-cert-inject.yaml and generate the sample env file.  We'd have to be
> able to do a few new things with the params in order for this to work:
> 
> -Need to specify whether a param is intended to be set as a top-level
> param, parameter_defaults (which we informally do today with the Can be
> overridden by parameter_defaults comment), or internal, to define params
> that shouldn't be exposed in the sample config and are only intended as
> an interface between templates.  There wouldn't be any enforcement of
> the internal type, but Python relies on convention for its private
> members so there's precedent. :-)
> 
> Perhaps a convention could be done in a similar fashion to how things
> are done in
> python. Where parameters passed from top-level could be defined as they
> are defined
> now (with camel-case type of definition) and non-top-parameters could be
> defined with
> lowercase and underscores (or with an underscore prefix). That could
> make things
> clearer and allow us to have a more programmatic approach in the future.
> 
> -There would have to be some way to pick out only certain params from a
> template, since I think there are almost certainly features that are
> configured using a subset of say puppet/controller.yaml which obviously
> can't just take the params from an entire file.  Although maybe this is
> an indication that we could/should refactor the templates to move some
> of these optional params into their own separate files (at this point I
> think I should take a moment to mention that this is somewhat of a brain
> dump, so I haven't thought through all of the implications yet and I'm
> not sure it all makes sense).
> 
> The nice thing about generating these programmatically is we would
> formalize the interface of the templates somewhat, and it would be
> easier to keep sample envs in sync with the actual implementation.
> You'd never have to worry about someone adding a param to a file but
> forgetting to update the env (or at least it would be easy to catch and
> fix when they did, just run "tox -e genconfig").
> 
> I think having such a tool is an excellent idea.
> 
> 
> I'm not saying this is a simple or short-term solution, but I'm curious
> what people think about setting this as a longer-term goal, because as I
> think our discussion in Tokyo exposed, we're probably going to have a
> bit of an explosion of sample envs soon and we're going to need some way
> to keep them sane.
> 
> Some more comments inline.
> 
> On 11/19/2015 10:16 AM, Steven Hardy wrote:
> > On Mon, Nov 16, 2015 at 08:15:48PM +0100, Giulio Fidente wrote:
> >> On 11/16/2015 04:25 PM, Steven Hardy wrote:
> >>> Hi all,
> >>>
> >>> I wanted to start some discussion re $subject, because it's been
> apparrent
> >>> that we have a lack of clarity on this issue (and have done ever
> since we
> >>> started using parameter_defaults).
> >>
> >> [...]
> >>
> >>> How do people feel about this example, and others like it, where
> we're
> >>> enabling common, but not mandatory functionality?
> >>
> >> At first I was thinking about something as simple as: "don't use
> top-level
> >> params for resources which the registry doesn't enable by default".
> >>
> >> It seems to be somewhat what we tried to do with the existing
> pluggable
> >> resources.
> >>
> >> Also, not to hijack the thread but I wanted to add another
> question related
> >> to a similar issue:
> >>
> >>   Is there a reason to prefer use of parameters: instead of
> >> parameter_defaults: in the environment files?
> >>
> >> It looks to me that by defaulting to parameter_defaults: users
> won't need to
> >> update their environment files in case the parameter is moved
> from top-level
> >> into a specific nested stack so I'm inclined to prefer this. Are
> there
> >> reasons not to?
> >
> > The main reason is scope - if you use "parameters", you know the
> data flow
> > happens via the parent template (e.g overcloud-without-mergepy)
> and you
> > never have to worry about naming collisions outside of that template.
> >
> > But if you use parameter_defaults, all parameters values 

Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-24 Thread Ben Swartzlander

On 11/23/2015 06:03 AM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:44:17PM -0500, Ben Swartzlander wrote:

On 11/20/2015 01:19 PM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).


A core goal of using volume encryption in Nova to provide protection for
tenant data, from a malicious storage service. ie if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.


There is a difference between the cinder service and the storage controller
(or software system) that cinder manages. You can give the decryption keys
to the cinder service without allowing the storage controller to see any
plaintext.

As Walt says in the relevant patch [1], expecting cinder to do data
management without ever performing I/O is unrealistic. The scenario where
the compute admin doesn't trust the storage admin is understandable
(although less important than other potential types of attacks IMO) but the
scenario where the guy managing nova doesn't trust the guy managing cinder
makes no sense at all.


So you are implicitly saying here that the cinder admin is different from
the storage admin. While that certainly may often be true, I struggle to
categorically say it is always going to be true.


No! I'm saying that while the cinder admin and the storage admin MAY be 
different people, the cinder admin and the nova admin are ALWAYS 
the same people. In some cases all 3 are the same, but it strains 
credulity to suggest that clouds exist where cinder and nova are run by 
people who don't trust each other.


Consider the scenario where Alice administers the cloud (nova and 
cinder) but Bob is the storage admin. Alice may use some storage 
controllers provided by Bob and she may not trust him. In that case she 
can arrange for nova and cinder to have access to the encryption keys, 
while never allowing Bob to have them. As long as encryption/decryption 
happen on an OpenStack node and not the storage controller itself, Bob 
will never see plaintext.


If Alice and Bob are the same person (which would be true in small 
deployments), then the problem vanishes because there are no 
untrustworthy people.



Furthermore it is not only about the trust of the cinder administrator,
but rather trust of the integrity of the cinder service. OpenStack has
a great many components that are open to attack, and it is prudent to
design the system such that successful security attacks are confined
to as large a degree as possible. From this POV I think it is entirely
reasonable & indeed sensible for Nova to have minimal trust of Cinder
as a whole when it comes to tenant data security.


This is true. Ideally the keys would not need to be in two places. 
However the world we live in is not ideal and it seems unavoidable that 
both nova and cinder will need access to encryption keys unless a 
significant amount of data management code moves from cinder into nova. 
I suggest that the right way to protect against the above (legitimate) 
concern is to harden the cinder node as you would your nova nodes.



Regards,
Daniel




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Please see request for input on review

2015-11-24 Thread Paul Carlton

Sean

Please can you look at the cancel live migration nova spec?
I'd like to progress this spec ASAP.

Last Thursday, Jay commented:

@sdague @edleafe: Please see the long conversation about the
RESTful-ness of the API on the pause live migration spec:

https://review.openstack.org/#/c/229040/12/specs/mitaka/approved/pause-vm-during-live-migration.rst

This proposal keeps things in-line with the current os-instance-actions
API -- even though you are correct that it isn't at all REST-full --
with the expectation that the eventual Tasks API will get rid of all
of the instance actions junk.

Are you ok with this?

Thanks


--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova meeting this Thursday

2015-11-24 Thread Zhipeng Huang
+1

On Tue, Nov 24, 2015 at 11:51 PM, John Garbutt  wrote:

> Hi,
>
> I just wanted to share my plan for Thursday's meeting.
>
> As it is thanksgiving in the US, I am assuming no US based people will
> attend.
>
> Given there are lots of non-US people around, and it is our "early"
> slot this week, I think we should still have the meeting. Even if it
> ends up being fairly short.
>
> I hope that works out OK for everyone?
>
> Thanks,
> johnthetubaguy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] process change for closing bugs when patches merge

2015-11-24 Thread Robert Collins
On 25 November 2015 at 05:26, Thierry Carrez  wrote:
...
> So it would be a shame to restrict task tracking possibilities to two
> projects per bug just to preserve accurate change tracking in Launchpad,
> while the plan is to focus Launchpad usage solely on task tracking :)

I don't think it would be a shame, I think it would work better :).
The vast majority of multi-project bugs I see in LP are not
multi-project bugs (defined as one conversation, many work items), but
rather conceptually similar things (many conversations started from
one seed).

Anyhow, that's a different discussion :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] process change for closing bugs when patches merge

2015-11-24 Thread Thierry Carrez
Robert Collins wrote:
> [...]
> So, if folk do want bugs to sit in fix-committed, we can address the
> timeouts really very easily: one (or two tops) projects per bug. We
> should do that anyway because adding a comment to a bug is also able
> to timeout, so our code is still going to suffer the same fragility
> and need the same robustness if we're trying to guarantee that the
> comment is added.

I don't think we need to guarantee that the comment is added. In the old
system we *needed* to update the milestones and bug statuses because the
resulting milestone page was used as the official summary of everything
in the release (change management). In the new system we use other tools
for change management (git/reno) so we can tolerate omissions or
eventual consistency on the Launchpad side. It's not as much of a big
deal if a comment fails to be added, this is mostly meant as a courtesy
rather than reference material. Also I find bug.addMessage() a lot less
prone to timeouts (never hit one, even on busy bugs).
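
For illustration, a rough sketch of adding such a courtesy comment with
launchpadlib (the bug number is a made-up placeholder, and the comment call is
assumed to be the Launchpad web-service newMessage() operation, referred to as
addMessage() above; double-check the exact name against the API docs):

  from launchpadlib.launchpad import Launchpad

  # Log in against production Launchpad; credentials are cached locally.
  lp = Launchpad.login_with('release-comment-sketch', 'production')

  bug = lp.bugs[1518012]  # placeholder bug number
  for task in bug.bug_tasks:
      print(task.bug_target_name, task.status)

  # Leave the informational comment; no task status is touched, so a
  # failure here only loses the courtesy note, not change tracking.
  bug.newMessage(subject='Fix included in release',
                 content='This fix was included in the 12.0.0 release.')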

So it would be a shame to restrict task tracking possibilities to two
projects per bug just to preserve accurate change tracking in Launchpad,
while the plan is to focus Launchpad usage solely on task tracking :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-Project Meeting SKIPPED, Tue Nov 24th, 21:00 UTC

2015-11-24 Thread Mike Perez
Hi all!

As discussed in one of cross-project meetings [1], we will be skipping meetings
if there are no agenda items to discuss. There are no items to discuss this
time around, but someone can still call for a meeting by adding an agenda item
for December 1st [2].

We also have a new meeting channel which is #openstack-meeting-cp where the
cross-project meeting will now take place at its usual time, Tuesdays at 2100
UTC.

If you're unable to keep up with the Dev list on cross-project initiatives,
there is also the Dev Digest [3].

[1] - 
http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-11-03-21.01.log.html#l-57
[2] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
[3] - 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-20151114/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Balázs Gibizer
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: November 24, 2015 16:09
> On 24 November 2015 at 15:00, Balázs Gibizer 
> wrote:
> >> From: Andrew Laski [mailto:and...@lascii.com]
> >> Sent: November 24, 2015 15:35
> >> On 11/24/15 at 10:26am, Balázs Gibizer wrote:
> >> >> From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
> >> >> Sent: November 23, 2015 22:33
> >> >> On 11/23/2015 2:23 PM, Andrew Laski wrote:
> >> >> > On 11/23/15 at 04:43pm, Balázs Gibizer wrote:
> >> >> >>> From: Andrew Laski [mailto:and...@lascii.com]
> >> >> >>> Sent: November 23, 2015 17:03
> >> >> >>>
> >> >> >>> On 11/23/15 at 08:54am, Ryan Rossiter wrote:
> >> >> >>> >
> >> >> >>> >
> >> >> >>> >On 11/23/2015 5:33 AM, John Garbutt wrote:
> >> >> >>> >>On 20 November 2015 at 09:37, Balázs Gibizer
> >> >> >>> >> wrote:
> >> >> >>> >>>
> >> >> >>> >>>
> >> >> >> >>
> >> >> >>> >>There is a bit I am conflicted/worried about, and thats when
> >> >> >>> >>we start including verbatim, DB objects into the
> >> >> >>> >>notifications. At least you can now quickly detect if that
> >> >> >>> >>blob is something compatible with your current parsing code.
> >> >> >>> >>My preference is really to keep the Notifications as a
> >> >> >>> >>totally separate object tree, but I am sure there are many
> >> >> >>> >>cases where that ends up being seemingly stupid duplicate
> >> >> >>> >>work. I am not expressing this well in text form :(
> >> >> >>> >Are you saying we don't want to be willy-nilly tossing DB
> >> >> >>> >objects across the wire? Yeah that was part of the
> >> >> >>> >rug-pulling of just having the payload contain an object.
> >> >> >>> >We're automatically tossing everything with the object then,
> >> >> >>> >whether or not some of that was supposed to be a secret. We
> >> >> >>> >could add some sort of property to the field like
> >> >> >>> >dont_put_me_on_the_wire=True (or I guess a
> >> >> >>> >notification_ready() function that helps an object sanitize
> >> >> >>> >itself?) that the notifications will look at to know if it
> >> >> >>> >puts that on the wire-serialized dict, but that's adding a
> >> >> >>> >lot more complexity and work to a pile that's already growing
> rapidly.
> >> >> >>>
> >> >> >>> I don't want to be tossing db objects across the wire.  But I
> >> >> >>> also am not convinced that we should be tossing the current
> >> >> >>> objects over the wire either.
> >> >> >>> You make the point that there may be things in the object that
> >> >> >>> shouldn't be exposed, and I think object version bumps is
> >> >> >>> another thing to watch out for.
> >> >> >>> So far the only object that has been bumped is Instance but in
> >> >> >>> doing so no notifications needed to change.  I think if we
> >> >> >>> just put objects into notifications we're coupling the
> >> >> >>> notification versions to db or RPC changes unnecessarily.
> >> >> >>> Some times they'll move together but other times, like moving
> >> >> >>> flavor into instance_extra, there's no reason to bump
> notifications.
> >> >> >>
> >> >> >>
> >> >> >> Sanitizing existing versioned objects before putting them to
> >> >> >> the wire is not hard to do.
> >> >> >> You can see an example of doing it in
> >> >> >> https://review.openstack.org/#/c/245678/8/nova/objects/service.
> >> >> >> py,
> >> >> >> cm
> >> >> >> L382.
> >> >> >> We don't need extra effort to take care of minor version bumps
> >> >> >> because that does not break a well written consumer. We do have
> >> >> >> to take care of the major version bumps but that is a rare
> >> >> >> event and therefore can be handled one by one in a way John
> >> >> >> suggested, by keep sending the previous major version for a while
> too.
> >> >> >
> >> >> > That review is doing much of what I was suggesting.  There is a
> >> >> > separate notification and payload object.  The issue I have is
> >> >> > that within the ServiceStatusPayload the raw Service object and
> >> >> > version is being dumped, with the filter you point out.  But I
> >> >> > don't think that consumers really care about tracking Service
> >> >> > object versions and dealing with compatibility there, it would
> >> >> > be easier for them to track the ServiceStatusPayload version
> >> >> > which can remain relatively stable even if Service is changing to 
> >> >> > adapt
> to db/RPC changes.
> >> >> Not only do they not really care about tracking the Service object
> >> >> versions, they probably also don't care about what's in that filter 
> >> >> list.
> >> >>
> >> >> But I think you're getting on the right track as to where this
> >> >> needs to go. We can integrate the filtering into the versioning of the
> payload.
> >> >> But instead of a blacklist, we turn the filter into a white list.
> >> >> If the underlying object adds a new field that we don't want/care
> >> >> if people know about, the payload version doesn't have to change.
> >> >> But if we add something (or 

[openstack-dev] [magnum-ui][magnum] Suggestions for Features/Improvements

2015-11-24 Thread Bradley Jones (bradjone)
We have started to compile a list of possible features/improvements for 
Magnum-UI. If you have any suggestions about what you would like to see in the 
plugin please leave them in the etherpad so we can prioritise what we are going 
to work on.

https://etherpad.openstack.org/p/magnum-ui-feature-list

Thanks,
Brad Jones
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Post-release bump after 2014.2.4?

2015-11-24 Thread Matt Riedemann



On 11/23/2015 4:50 PM, Alan Pevec wrote:

2015-11-23 17:31 GMT+01:00 Matt Riedemann :

Or do you also suggest juno-eol != 2014.2.4?

I'd prefer to just remove the version tag in setup.cfg entirely so we switch
to post-versioning, then we can EOL the thing. That's what we do internally
on EOL stable branches (switch to post-versioning since we have our own
release cycles). And I'd be OK with this just to keep stable/kilo moving
until juno is EOL'ed.


Where is stable/kilo blocked because of this?? There shouldn't be
anything merged after 2014.2.4 tag
i.e. juno-eol == 2014.2.4

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I thought we'd be hitting this kind of issue [1] on stable/kilo changes 
that run grenade where the old side is juno, and I thought pbr would blow 
up installing those, but so far I haven't seen any job failures like 
that to indicate it's a problem.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-October/076928.html


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Brad Topol

Lance makes some very good points below.  I agree fully with them.

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Lance Bragstad 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/24/2015 09:56 AM
Subject:Re: [openstack-dev] [keystone][all] Move from active
distrusting model to trusting model



I think one of the benefits of the current model was touched on earlier by
dstanek. If someone is working on something for their organization, they
typically bounce ideas of others they work with closely. This tends to be
people within the same organization. The groups developing the feature
might miss different perspectives on solving that particular problem.
Bringing in a fresh set of eyes from someone outside that organization can
be a huge benefit for the overall product.

I don't think that is the sole reason to keep the existing policy, but I do
think it's a positive side-effect.

On Tue, Nov 24, 2015 at 6:31 AM, David Chadwick 
wrote:
  Spot on. This is exactly the point I was trying to make

  David

  On 24/11/2015 11:20, Dolph Mathews wrote:
  > Scenarios I've been personally involved with where the
  > "distrustful" model either did help or would have helped:
  >
  > - Employee is reprimanded by management for not positively reviewing &
  > approving a coworkers patch.
  >
  > - A team of employees is pressured to land a feature as fast as
  > possible. Minimal community involvement means a faster path to
  "merged,"
  > right?
  >
  > - A large group of reviewers from the author's organization repeatedly
  > throwing *many* careless +1s at a single patch. (These happened to not
  > be cores, but it's a related organizational behavior taken to an
  extreme.)
  >
  > I can actually think of a few more specific examples, but they are
  > already described by one of the above.
  >
  > It's not cores that I do not trust, it's the organizations they operate
  > within which I have learned not to trust.
  >
  > On Monday, November 23, 2015, Morgan Fainberg  wrote:
  >
  >     Hi everyone,
  >
  >     This email is being written in the context of Keystone more than
  any
  >     other project but I strongly believe that other projects could
  >     benefit from a similar evaluation of the policy.
  >
  >     Most projects have a policy that prevents the following scenario
  (it
  >     is a social policy not enforced by code):
  >
  >     * Employee from Company A writes code
  >     * Other Employee from Company A reviews code
  >     * Third Employee from Company A reviews and approves code.
  >
  >     This policy has a lot of history as to why it was implemented. I am
  >     not going to dive into the depths of this history as that is the
  >     past and we should be looking forward. This type of policy is an
  >     actively distrustful policy. With exception of a few potentially
  bad
  >     actors (again, not going to point anyone out here), most of the
  >     folks in the community who have been given core status on a project
  >     are trusted to make good decisions about code and code quality. I
  >     would hope that any/all of the Cores would also standup to their
  >     management chain if they were asked to "just push code through" if
  >     they didn't sincerely think it was a positive addition to the code
  base.
  >
  >     Now within Keystone, we have a fair amount of diversity of core
  >     reviewers, but we each have our specialities and in some cases
  >     (notably KeystoneAuth and even KeystoneClient) getting the required
  >     diversity of reviews has significantly slowed/stagnated a number of
  >     reviews.
  >
  >     What I would like us to do is to move to a trustful policy. I can
  >     confidently say that company affiliation means very little to me
  >     when I was PTL and nominating someone for core. We should explore
  >     making a change to a trustful model, and allow for cores
  (regardless
  >     of company affiliation) review/approve code. I say this since we
  >     have clear steps to correct any abuses of this policy change.
  >
  >     With all that said, here is the proposal I would like to set forth:
  >
  >     1. Code reviews still need 2x Core Reviewers (no change)
  >     2. Code can be developed by a member of the same company as both
  >     core reviewers (and approvers).
  >     3. If the trust that is being given via this new policy is
  violated,
  >     the code can [if needed], be reverted (we are using git here) and
  >     the actors in question can lose core status (PTL discretion) and
  the
  >     policy can be changed back to the "distrustful" model described
  above.
  >
  >     I hope that 

Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-24 Thread Farr, Kaitlin M.
Hi Lisa,

In regards to your comment about the duplication of key management code in 
Cinder and Nova, there was a long-term plan to replace that code with a shared 
library when the encryption feature was implemented.  The key manager code has 
been moved to its own library, Castellan [1].  The plan to replace the key 
manager code with Castellan has been outlined in a Nova spec [2] and Cinder 
spec [3].  

1. https://github.com/openstack/castellan
2. https://review.openstack.org/#/c/247561/
3. https://review.openstack.org/#/c/247577/
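
For context, a minimal sketch of the kind of usage Castellan is meant to provide
(method names follow Castellan's documented key_manager interface; treat this as
an illustration of the idea rather than the exact calls Nova and Cinder will end
up making):

  from castellan import key_manager
  from oslo_context import context

  manager = key_manager.API()
  ctxt = context.RequestContext()  # stands in for the service's request context

  # Generate and store a new symmetric key for an encrypted volume.
  key_id = manager.create_key(ctxt, algorithm='AES', length=256)

  # Later (e.g. when attaching the volume) fetch the same key by id...
  key = manager.get(ctxt, key_id)

  # ...and remove it when the volume is deleted.
  manager.delete(ctxt, key_id)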

I hope that helps,

Kaitlin Farr

-Original Message-
From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com]
Sent: Monday, November 23, 2015 8:57 PM
To: OpenStack Development Mailing List (not for usage questions); Daniel P. 
Berrange
Subject: Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

Hi,

Besides creating encrypted volumes from images and uploading encrypted volumes to 
images, as Duncan said there is a desire to migrate volumes between encrypted and 
unencrypted types.
https://review.openstack.org/#/c/248593/

And key management code is duplicated in Cinder and Nova:
https://github.com/openstack/cinder/tree/master/cinder/keymgr
https://github.com/openstack/nova/tree/master/nova/keymgr


Best wishes
Lisa


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Post-release bump after 2014.2.4?

2015-11-24 Thread Matt Riedemann



On 11/24/2015 10:42 AM, Matt Riedemann wrote:



On 11/23/2015 4:50 PM, Alan Pevec wrote:

2015-11-23 17:31 GMT+01:00 Matt Riedemann :

Or do you also suggest juno-eol != 2014.2.4?

I'd prefer to just remove the version tag in setup.cfg entirely so we
switch
to post-versioning, then we can EOL the thing. That's what we do
internally
on EOL stable branches (switch to post-versioning since we have our own
release cycles). And I'd be OK with this just to keep stable/kilo moving
until juno is EOL'ed.


Where is stable/kilo blocked because of this?? There shouldn't be
anything merged after 2014.2.4 tag
i.e. juno-eol == 2014.2.4

Cheers,
Alan

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I thought we'd be hitting this kind of issue [1] on stable/kilo changes
that run grenade where the old side is juno, and I thought pbr would blow
up installing those, but so far I haven't seen any job failures like
that to indicate it's a problem.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-October/076928.html



I've confirmed that the juno side of kilo grenade is not blowing up [1], 
but I'm not sure why it's not blowing up. Trying to figure that out.


[1] 
http://logs.openstack.org/16/216716/1/check/gate-grenade-dsvm/b9a97bb//logs/old/devstacklog.txt.gz#_2015-11-24_17_17_15_859


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-24 Thread Armando M.
On 24 November 2015 at 04:13, Rossella Sblendido 
wrote:

>
>
> On 11/23/2015 06:38 PM, Armando M. wrote:
>
>>
>>
>> On 23 November 2015 at 04:02, Rossella Sblendido wrote:
>>
>>
>>
>> On 11/20/2015 03:54 AM, Armando M. wrote:
>>
>>
>>
>> On 19 November 2015 at 18:26, Assaf Muller wrote:
>>
>>  On Wed, Nov 18, 2015 at 9:14 PM, Armando M. wrote:
>>  > Hi Neutrites,
>>  >
>>  > We are nearly two weeks away from the end of Mitaka 1.
>>  >
>>  > I am writing this email to invite you to be mindful to
>> what you review,
>>  > especially in the next couple of weeks. Whenever you have
>> the time to review
>>  > code, please consider giving priority to the following:
>>  >
>>  > Patches that target blueprints targeted for Mitaka;
>>  > Patches that target bugs that are either critical or high;
>>  > Patches that target rfe-approved 'bugs';
>>  > Patches that target specs that have followed the most
>> current submission
>>  > process;
>>
>>  Is it possible to create Gerrit dashboards for patches that
>> answer these
>>  criteria, and then persist the links in Neutron's
>> dashboards devref
>>  page?
>> http://docs.openstack.org/developer/neutron/dashboards/index.html
>>  That'd be super useful.
>>
>>
>> We should look into that, but to be perfectly honest I am not
>> sure how
>> easy it would be, since we'd need to cross-reference content
>> that lives
>> into gerrit as well as launchpad. Would that even be possible?
>>
>>
>> To cross-reference we can use the bug ID or the blueprint name.
>>
>> I created a script that queries launchpad to get:
>> 1) Bug number of the bugs tagged with approved-rfe
>> 2) Bug number of the critical/high bugs
>> 3) list of blueprints targeted for the current milestone (mitaka-1)
>>
>> With this info the script builds a .dash file that can be used by
>> gerrit-dash-creator [2] to produce a dashboard url .
>>
>> The script prints also the queries that can be used in gerrit UI
>> directly, e.g.:
>> Critical/High Bugs
>> (topic:bug/1399249 OR topic:bug/1399280 OR topic:bug/1443421 OR
>> topic:bug/1453350 OR topic:bug/1462154 OR topic:bug/1478100 OR
>> topic:bug/1490051 OR topic:bug/1491131 OR topic:bug/1498790 OR
>> topic:bug/1505575 OR topic:bug/1505843 OR topic:bug/1513678 OR
>> topic:bug/1513765 OR topic:bug/1514810)
>>
>>
>> This is the dashboard I get right now [3]
>>
>> I tried in many ways to get Gerrit to filter patches if the commit
>> message contains a bug ID. Something like:
>>
>> (message:"#1399249" OR message:"#1399280" OR message:"#1443421" OR
>> message:"#1453350" OR message:"#1462154" OR message:"#1478100" OR
>> message:"#1490051" OR message:"#1491131" OR message:"#1498790" OR
>> message:"#1505575" OR message:"#1505843" OR message:"#1513678" OR
>> message:"#1513765" OR message:"#1514810")
>>
>> but it doesn't work well, the result of the filter contains patches
>> that have nothing to do with the bugs queried.
>>
>>
>> Try to drop the # and quote the bug number like this:
>>
>> message:"'1399280'"
>>
>> Otherwise I believe gerrit looks for substring matches.
>>
>
> That was my first attempt, it doesn't work unfortunately.
>

That's weird. It works for me:

https://review.openstack.org/#/q/message:%22'1399280'%22,n,z


>
> thanks,
>
> Rossella
>
>
>
>>
>> That's why I had to filter using the topic.
>>
>> CAVEAT: To make the dashboard work, bug fixes must use the topic
>> "bug/ID" and patches implementing a blueprint the topic "bp/name".
>> If a patch is not following this convention it won't be showed in
>> the dashboard, since the topic is used as filter. Most of us use
>> this convention already anyway so I hope it's not too much of a
>> burden.
>>
>> Feedback is appreciated :)
>>
>>
>> Nice one, I'll provide feedback on [1].
>>
>>
>> [1] https://review.openstack.org/248645
>> [2] https://github.com/openstack/gerrit-dash-creator
>> [3] https://goo.gl/sglSbp
>>
>>
>> Btw, I was looking at the current blueprint assignments [1] for
>> Mitaka:
>> there are some blueprints that still need assignee, approver and
>> drafter; we should close the gap. If there are volunteers,
>> please reach
>> out to me.
>>
>> Thanks,
>> Armando
>>
>>   

[openstack-dev] [nova] How were we going to remove soft delete again?

2015-11-24 Thread Matt Riedemann
I know in Vancouver we talked about no more soft delete and in Mitaka 
lxsli decoupled the nova models from the SoftDeleteMixin [1].


From what I remember, the idea is to not add the deleted column to new 
tables, to not expose soft deleted resources in the REST API in new 
ways, and to eventually drop the deleted column from the models.
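
As a rough illustration of what the decoupling means at the model level (sketch
only; the table and column names here are made up, but the mixin is oslo.db's
real SoftDeleteMixin):

  from oslo_db.sqlalchemy import models
  from sqlalchemy import Column, Integer, String
  from sqlalchemy.ext.declarative import declarative_base

  BASE = declarative_base()

  # Legacy pattern: the mixin adds `deleted`/`deleted_at` columns and a
  # soft_delete() helper, so rows get flagged instead of removed.
  class LegacyThing(BASE, models.ModelBase, models.SoftDeleteMixin):
      __tablename__ = 'legacy_things'
      id = Column(Integer, primary_key=True)
      name = Column(String(255))

  # New-style pattern: no mixin and no `deleted` column, so a delete is
  # a real DELETE and nothing lingers for the REST API to expose.
  class NewThing(BASE, models.ModelBase):
      __tablename__ = 'new_things'
      id = Column(Integer, primary_key=True)
      name = Column(String(255))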


I bring up the REST API because I was tinkering with the idea of 
allowing non-admins to list/show their (soft) deleted instances [2]. 
Doing that, however, would expose more of the REST API to deleted 
resources which makes it harder to remove from the data model.


My question is, how were we thinking we were going to remove the deleted 
column from the data model in a backward compatible way? A new 
microversion in the REST API isn't going to magically work if we drop 
the column in the data model, since anything before that microversion 
should still work - like listing deleted instances for the admin.


Am I forgetting something? There were a lot of ideas going around the 
room during the session in Vancouver and I'd like to sort out the 
eventual long-term plan so we can document it in the devref about 
policies so that when ideas like [2] come up we can point to the policy 
and say 'no we aren't going to do that and here's why'.


[1] 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/no-more-soft-delete.html
[2] 
https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-instances


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-24 Thread Daniel Mellado
Hi All,

As you might already know, within Red Hat's tempest fork, we do have one
tempest configuration script which was built in the past by David Kranz
[1] and that's been actively used in our CI system. Regarding this
topic, I'm aware that quite some effort has been done in the past [2]
and I would like to complete the implementation of this blueprint/spec.

My plan would be to have this script under the /tempest/cmd or
/tempest/tools folder from tempest so it can be used to configure not
the tempest gate but any cloud we'd like to run tempest against.
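
As a very rough sketch of how that could plug into the existing tempest CLI
(the ConfigCreate class, its options and the module path are all hypothetical;
tempest's current cliff-based commands would be the pattern to follow):

  # hypothetical tempest/cmd/config_create.py
  import logging

  from cliff import command

  LOG = logging.getLogger(__name__)

  class ConfigCreate(command.Command):
      """Generate a tempest.conf by discovering the target cloud."""

      def get_parser(self, prog_name):
          parser = super(ConfigCreate, self).get_parser(prog_name)
          parser.add_argument('--out', default='etc/tempest.conf',
                              help='where to write the generated config')
          parser.add_argument('--create', action='store_true',
                              help='create missing resources (images, flavors)')
          return parser

      def take_action(self, parsed_args):
          # the discovery logic from the downstream config_tempest.py
          # script would live here
          LOG.info('writing generated config to %s', parsed_args.out)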

Adding the configuration script was discussed briefly at the Mitaka
summit in the QA Priorities meeting [3]. I propose we use the existing
etherpad to continue the discussion around and tracking of implementing
"tempest config-create" using the downstream config script as a starting
point. [4]

If you have any questions, comments or opinion, please let me know.

Best Regards

Daniel Mellado

---
[1]
https://github.com/redhat-openstack/tempest/blob/master/tools/config_tempest.py
[2] https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator
[3] https://etherpad.openstack.org/p/mitaka-qa-priorities
[4] https://etherpad.openstack.org/p/tempest-cli-improvements

https://github.com/openstack/qa-specs/blob/master/specs/tempest/tempest-cli-improvements.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][keystone] keystonemiddleware release 1.5.3 (kilo)

2015-11-24 Thread gord chung
mriedem and myself resolved this with keystone folks earlier today[1]. 
should be all better now [2]


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-11-24.log.html#t2015-11-24T14:38:39

[2] https://review.openstack.org/#/c/249268/


On 24/11/15 12:33 PM, Alan Pevec wrote:

keystonemiddleware 1.5.3: Middleware for OpenStack Identity

periodic-ceilometer-python27-kilo started failing after this release
First bad: 
http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-kilo/40c5453/testr_results.html.gz
test_acl_scenarios failing with 401 Unauthorized

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-11-24 Thread Arkady_Kanevsky
Another use cases for maintenance node are:

* HW component replacement, e.g. NIC, or disk

* FW upgrade/downgrade - we should be able to use ironic FW management 
API/CLI for it.

* HW configuration change, e.g. re-provisioning a server or changing the RAID 
configuration. Again, we should be able to use the ironic FW management API/CLI for 
it.

Thanks,
Arkady

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Tuesday, November 24, 2015 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance 
mode

On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> Hi,
>
> I would like to know how everyone is using maintenance mode and what
> is expected from admins about nodes in maintenance. The reason I am
> bringing up this topic is because most of the ironic operations,
> including manual cleaning, are not allowed for nodes in maintenance. That's a 
> problem for us.
>
> The way we use it is as follows:
>
> We allow users to put nodes in maintenance mode (indirectly) if they
> find something wrong with the node. They also provide a maintenance
> reason along with it, which gets stored as "user_reason" under
> maintenance_reason. So basically we tag it as user specified reason.
>
> To debug what happened to the node our operators use manual cleaning
> to re-image the node. By doing this, they can find out all the issues
> related to re-imaging (dhcp, ipmi, image transfer, etc). This
> debugging process applies to all the nodes that were put in
> maintenance either by user, or by system (due to power cycle failure or due 
> to cleaning failure).

Interesting; do you let the node go through cleaning between the user nuking 
the instance and doing this manual cleaning stuff?

At Rackspace, we leverage the fact that maintenance mode will not allow the 
node to proceed through the state machine. If a user reports a hardware issue, 
we maintenance the node on their behalf, and when they delete it, it boots the 
agent for cleaning and begins heartbeating.
Heartbeats are ignored in maintenance mode, which gives us time to investigate 
the hardware, fix things, etc. When the issue is resolved, we remove 
maintenance mode, it goes through cleaning, then back in the pool.

We used to enroll nodes in maintenance mode, back when the API put them in the 
available state immediately, to avoid them being scheduled to until we knew 
they were good to go. The enroll state solved this for us.

Last, we use maintenance mode on available nodes if we want to temporarily pull 
them from the pool for a manual process or some testing. This can also be 
solved by the manageable state.
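
For anyone following along, a rough sketch of toggling maintenance from a script
(using python-ironicclient; the node UUID and credentials are placeholders, and
the exact set_maintenance() arguments should be checked against the client
version in use):

  from ironicclient import client

  ironic = client.get_client(1,
                             os_username='admin',        # placeholder credentials
                             os_password='secret',
                             os_tenant_name='admin',
                             os_auth_url='http://keystone.example.com:5000/v2.0')

  node_uuid = 'NODE-UUID-HERE'  # placeholder

  # Take the node out of rotation while the hardware issue is investigated.
  ironic.node.set_maintenance(node_uuid, 'true',
                              maint_reason='user reported bad NIC')

  # Once fixed, clear maintenance so the node can go through cleaning
  # and return to the available pool.
  ironic.node.set_maintenance(node_uuid, 'false')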

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Rally as A Service?

2015-11-24 Thread Munoz, Obed N
Hi folks, 

Is there any plan or work-in-progress for RaaS (Rally as a Service)?
I saw it mentioned on https://wiki.openstack.org/wiki/Rally  

I’d like to know if there’s an ongoing effort and if we can join that in order 
to make it a reality. 

-- 
Obed N Munoz
Cloud Engineer @ ClearLinux Project
Open Source Technology Center

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][keystone] keystonemiddleware release 1.5.3 (kilo)

2015-11-24 Thread Morgan Fainberg
Would it be possible to get a bit more detail (especially if other
projects are mocking like this) on what is being done, so that I [or another
Keystone dev] can work towards a real mock/test module in
keystonemiddleware so this doesn't occur again due to internal-interface
mocking?

On Tue, Nov 24, 2015 at 11:24 AM, gord chung  wrote:

> mriedem and myself resolved this with keystone folks earlier today[1].
> should be all better now [2]
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-11-24.log.html#t2015-11-24T14:38:39
> [2] https://review.openstack.org/#/c/249268/
>
>
>
> On 24/11/15 12:33 PM, Alan Pevec wrote:
>
>> keystonemiddleware 1.5.3: Middleware for OpenStack Identity
>>>
>> periodic-ceilometer-python27-kilo started failing after this release
>> First bad:
>> http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-kilo/40c5453/testr_results.html.gz
>> test_acl_scenarios failing with 401 Unauthorized
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> gord
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-11-24 Thread Ryota Mibu
Hi Ifat,


Thank you for starting the discussion on how AODH can be integrated with Vitrage; that 
would be a good example of AODH integration with other OpenStack components.

The key role of creating an alarm definition is to set the endpoint (alarm_actions) 
which can receive alarm notifications from AODH. How would the endpoints be set 
in your use case? Are those endpoints configured via the vitrage API and stored in 
its DB?

I agree with Gordon: you can use event-alarm by generating an "event" containing 
the alarm message, which can be captured in aodh if vitrage relays the alarm 
definition to aodh. That is more feasible than creating the alarm definition 
right before triggering the alarm notification. The reason is that the aodh 
evaluator may not be aware of new alarm definitions and won't send a notification 
until its alarm definition cache is refreshed (every 60 sec by default).
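
For illustration, roughly what such a pre-created event-alarm definition could
look like when posted to aodh's v2 API (field names follow the event-alarm
schema; the event type, query, callback URL, hosts and token below are made-up
placeholders):

  import json
  import requests

  TOKEN = 'keystone-token-here'  # placeholder

  alarm = {
      'name': 'vitrage-deduced-instance-alarm',
      'type': 'event',
      'event_rule': {
          'event_type': 'vitrage.deduced_alarm',        # placeholder event type
          'query': [{'field': 'traits.resource_id',
                     'op': 'eq',
                     'type': 'string',
                     'value': 'INSTANCE_UUID'}],
      },
      'alarm_actions': ['http://vitrage-host:8999/alarm'],  # placeholder callback
      'repeat_actions': False,
  }

  requests.post('http://aodh-host:8042/v2/alarms',
                headers={'X-Auth-Token': TOKEN,
                         'Content-Type': 'application/json'},
                data=json.dumps(alarm))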

Having a special rule and an external evaluator would be an alternative, but it would 
be difficult to keep up with the latest aodh, since it will be changing faster with a 
small code base as a result of the split from ceilometer.


BR,
Ryota

> -Original Message-
> From: AFEK, Ifat (Ifat) [mailto:ifat.a...@alcatel-lucent.com]
> Sent: Tuesday, November 24, 2015 1:15 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom 
> alarms in AODH
> 
> Hi Gord,
> 
> Please see my answers below.
> 
> Ifat.
> 
> 
> > -Original Message-
> > From: gord chung [mailto:g...@live.ca]
> > Sent: Monday, November 23, 2015 4:57 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising
> > custom alarms in AODH
> >
> > hi Ifat,
> >
> > i added some questions below.
> >
> > On 23/11/2015 7:16 AM, AFEK, Ifat (Ifat) wrote:
> > > Hi,
> > >
> > > We have a couple of questions regarding AODH alarms.
> > >
> > > In Vitrage[1] project we have two use cases that involve Ceilometer:
> > >
> > > 1. Import Ceilometer alarms, as well as alarms and resources from
> > other sources (Nagios, Zabbix, Nova, Heat, etc.), and produce RCA
> > insights about the connection between different alarms.
> > to clarify, Ceilometer alarms is deprecated for Aodh and will be
> > removed very, very soon.
> 
> Right, I meant Aodh alarms.
> 
> >
> > > 2. Raise "deduced alarms". For example, in case we detect a high
> > memory consumption on a host, we would like to raise deduced alarms
> > saying "instance might be suffering due to high memory consumption on
> > the host" on all related instances. Then, we can further deduce that
> > applications running on these instances might also be affected, and
> > raise alarms on them as well.
> > >
> > > Initially we planned to raise these deduced alarms in AODH, so other
> > Openstack components may consume them as well. Then, when we looked at
> > AODH alarms documentation, we noticed that there is currently no way
> > of raising custom alarms. We saw only three types of alarms: threshold
> > alarms, combination alarms and event alarms.
> > >
> > > So, our questions are:
> > >
> > > * Is there an alternative way of raising alarms in AODH?
> > what do we mean by raising alarms? do you want to create a new alarm
> > definition for Aodh or do you want to trigger an action? do you want
> > to have a new non-REST action?
> 
> I guess I would like to do both: create a new alarm definition, then trigger 
> it (call alarm_actions), and possibly
> later on set its state back to OK (call ok_action).
> I understood that currently all alarm triggering is internal in AODH, 
> according to threshold/events/combination alarm
> rules. Would it be possible to add a new kind of rule, that will allow 
> triggering the alarm externally?
> 
> >
> > > * Do you think custom alarms belong in AODH? Are you interested in
> > adding this capability to AODH?
> > >
> > > We would be happy to hear your vision and thoughts about it.
> > >
> > >
> > > Thanks,
> > > Ifat and Alexey.
> > >
> > >
> > > [1] https://wiki.openstack.org/wiki/Vitrage
> > >
> > >
> > >
> > >
> > >
> > __
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > gord
> >
> >
> > __
> > _
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [Fuel] Changing APIs and API versioning

2015-11-24 Thread Maciej Kwiek
Vitaly,

That's great news. I agree that we need to run fuelclient against every
change in nailgun, but on the other hand - I think we should stick to the
protocol that Igor proposed:

* Announce this change in openstack-dev ML.
* Wait 1 week before approving it, so anyone can prepare.
* Change author has to work with QA in order make sure they are
prepared for this change.

Because some people we may not be aware of can rely on our API behaving in
the way it usually behaves :).

Regards,
Maciej

On Mon, Nov 23, 2015 at 5:46 PM, Vitaly Kramskikh 
wrote:

> Roman,
>
> Sorry for breaking all the stuff again. I've got suggestion to change this
> from one of the core reviewers.
>
> Fortunately, separation of Fuel UI is already in progress (Vladimir
> Kozhukalov may want to provide extra info here), but it won't guarantee
> protection from similar issues in future. What we really need is to run
> fuelclient functional tests against every change in nailgun.
>
> 2015-11-23 22:56 GMT+07:00 Roman Prykhodchenko :
>
>> Folks.
>>
>> This happened again. Nailgun’s API was silently changed [1] breaking
>> python-fuelclient and everything else that was relying on it.
>>
>> I feel like this is the point when just discussing the issue is not
>> enough so I call for a vote: Let’s separate Nailgun from Fuel UI and put
>> them into different repositories now.
>>
>> Please cast your votes (either +1 or -1) to this thread. You can also
>> provide your reasoning and more thoughts.
>>
>>
>> - romcheg
>>
>>
>> 1. https://review.openstack.org/#/c/240234/
>>
>> On 26 Oct 2015, at 11:11, Sebastian Kalinowski wrote:
>>
>> 2015-10-23 11:36 GMT+02:00 Igor Kalnitsky :
>>
>>> Roman, Vitaly,
>>>
>>> You're both saying right things, and you guys bring a sore topic up
>>> again.
>>>
>>> The thing is that Nailgun's API isn't the best one.. but we're trying
>>> to improve it step-by-step, from release to release. We have so many
>>> things to reconsider and to fix that it'd require a huge effort to
>>> make backward compatible changes and support it. So we decided ignore
>>> backward compatibility for clients for awhile and consider our API as
>>> unstable.
>>>
>>> I agree with Roman that such changes must not be made secretly, and we
>>> should at least announce about them. Moreover, every core must think
>>> about consequences before approving them.
>>>
>>> So I propose to do the following things when backward incompatible
>>> change to API is required:
>>>
>>> * Announce this change in openstack-dev ML.
>>> * Wait 1 week before approving it, so anyone can prepare.
>>> * Change author has to work with QA in order make sure they are
>>> prepared for this change.
>>>
>>> Any thoughts?
>>>
>>
>>
>> +1.
>>
>> Although there is one thing that you didn't mention (but probably
>> everyone knows about it):
>> solving the issue with fuelclient not being tested against changes in
>> nailgun.
>> We need not only run it for every change in nailgun (or for only those
>> that touch files under "api"
>> dir) but also cover more endpoints with fuelclient tests against real
>> API, not mocks, to discover
>> such issues earlier.
>>
>>
>>>
>>> Thanks,
>>> Igor
>>>
>>> On Wed, Oct 21, 2015 at 5:24 PM, Vitaly Kramskikh
>>>  wrote:
>>> > JFYI: (re-)start of this discussion was instigated by merge of this
>>> change
>>> > (and here is revert).
>>> >
>>> > And this is actually not the first time when we make backward
>>> incompatible
>>> > changes in our API. AFAIR, the last time it was also about the network
>>> > configuration handler. We decided not to consider our API frozen, make
>>> the
>>> > changes and break backward compatibility. So now is the time to
>>> reconsider
>>> > this.
>>> >
>>> > API backward compatibility is a must for good software, but we also
>>> need to
>>> > understand that introducing API versioning is not that easy and will
>>> > increase efforts needed to make new changes in nailgun.
>>> >
>>> > I'd go this way:
>>> >
>>> > Estimate the priority of introducing API versioning - maybe we have
>>> other
>>> > things we should invest our resources to
>>> > Estimate all the disadvantages this decision might have
>>> > Fix all the issues in the current API (like this one)
>>> > Implement API versioning support (yes, we don't have this mechanism
>>> yet, so
>>> > we can't just "bump API version" like Sergii suggested in another
>>> thread)
>>> > Consider the current API as frozen v1, so backward incompatible
>>> changes (or
>>> > maybe all the changes?) should go to v2
>>> >
>>> >
>>> > 2015-10-21 20:25 GMT+07:00 Roman Prykhodchenko :
>>> >>
>>> >> Hi folks,
>>> >>
>>> >> I’d like to touch the aspect of Fuel development process that seems
>>> to be
>>> >> as wrong as possible. That aspect is how we change the API.
>>> >>
>>> >> The issue is that in Fuel anyone can change API at any point of time
>>> 

Re: [openstack-dev] [stable] Post-release bump after 2014.2.4?

2015-11-24 Thread Vincent Untz
On Monday, 23 November 2015, at 14:00 +0100, Ihar Hrachyshka wrote:
> Vincent Untz  wrote:
> 
> >>On Monday, 23 November 2015, at 13:17 +0100, Ihar Hrachyshka wrote:
> >>Vincent Untz  wrote:
> >>
> >>>Hi,
> >>>
> >>>I know 2014.2.4 is (in theory) the last juno release, but I'd still
> >>>expect to see a post-release bump to 2014.2.5 in git, to avoid any
> >>>confusion as to what lives in git. This is especially useful if people
> >>>build new tarballs from git.
> >>>
> >>>Any objection against this, before I send patches? :-)
> >>>
> >>>Cheers,
> >>>
> >>>Vincent
> >>
> >>I probably miss something, but why do we care about what is in the
> >>branch now that we don’t plan to merge anything more there? Is it to
> >>accommodate for downstream consumers that may want to introduce more
> >>patches on top of the upstream tag?
> >
> >As I said: because people might keep generating tarballs from git, and
> >they'd expect to have a version that is correct. Yes, this is mostly
> >downstreams. (And not necessarily to introduce more patches, but just to
> >reflect the reality).
> >
> >I would also argue that we care because we're leaving git in a state
> >that is kind of wrong (since the version is not correct).
> 
> Can you elaborate why it’s incorrect? It’s 2014.2.4 right? So that’s
> indeed what you get if you generate tarballs from latest commits in
> those branches; it should totally reflect the contents that are in
> official tarballs and hence have the same version.

Oh, indeed, you're correct.

But to illustrate better the issue I'm seeing:
http://tarballs.openstack.org/cinder/cinder-stable-juno.tar.gz contains
a directory cinder-2014.2.4.dev24, which is kind of wrong. That's the
bit that I'd like to see fixed.

Cheers,

Vincent

-- 
Happy people are not in a hurry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Next vitrage meeting

2015-11-24 Thread AFEK, Ifat (Ifat)
Hi,

Vitrage weekly meeting will be tomorrow, Wednesday at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:
* Current status and progress from last week
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Vitrage meetings page: https://wiki.openstack.org/wiki/Meetings/Vitrage 
Previous meetings: http://eavesdrop.openstack.org/meetings/vitrage/2015/ 

Thanks, 
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Senlin]Support more complicated scalingscenario

2015-11-24 Thread Qiming Teng
> After debugging,  I found that former result is not overridden by
> another policy.
> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/actions/base.py#n441

I think you are not referring to the correct location in the source code.
That line was only about checking whether any policies failed their
checks.
 
> >
> >2. if  a cluster attached a scaling policy with event =
> >CLUSTER_SCALE_IN,  when aCLUSTER_SCALING_OUT action is
> >triggered,  the policy also will be checked,  is this reasonable?
> >
> >/ When a ScalingPolicy is defined, you can use 'event'
> >property to specify the action type you want the policy to take
> >effect on, like:
> >http://git.openstack.org/cgit/openstack/senlin/tree/examples/policies/scaling_policy.yaml#n5
> >
> > Although a ScalingPolicy will be checked for both
> >CLUSTER_SCALE_IN and CLUSTER_SCALE_OUT actions, the check routine
> >will return immediately if the action type is not what it is
> >expecting.
> >http://git.openstack.org/cgit/openstack/senlin/tree/senlin/policies/scaling_policy.py#n133/
> 
> Yes  it's not checked in pre_op,  but all ScalingPolicies still will
> be checked whether in cooldown.
> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/actions/base.py#n431*

Each policy instance creates its own policy-binding on a cluster. The
cooldown is recorded and then checked there. I can sense something is
wrong, but so far I'm not quite sure I understand the specific use case
that the current logic fails to support.

> >Your suggestion has a problem when I want different cooldown
> >for each ceilometer/aodh alarms, for following cases, how
> >should I do?
> >1.  15% < cpu_util  < 30%,  scaling_in 1 instance with 300s
> >cooldown time
> >2.   cpu_util < 15%, scaling_in 2 instances with 600s
> > cooldown time
> >
> >For a senlin webhook, could we assign a policy which will be
> >checked ?
> >
> >/   User is not allowed to specify the policy when defining a
> >webhook. The webhook target is decided by target object(cluster or
> >node) and target action type./
> 
> Yes we can define cooldown for each policy, but my meaning is
> that each cluster_scaling_in action is only checked by specified
> scaling_policy like OS::Heat::ScalingPolicy in heat.

This is a misconception because a Senlin policy is not a Heat
ScalingPolicy.  A Senlin policy is checked before and/or after a
specified action is performed.

> 1) In heat, we could define two scaling_in actions(via define
> two OS::Heat::ScalingPolicy polices ), each scaling_in action is
> checked by one OS::Heat::ScalingPolicy, so each scaling_in action's
> cooldown is only checked in one OS::Heat::ScalingPolicy.
>  2)But in senlin, each scaling_in action will be checked by all
> attached scaling_policies, so all scaling_polices' cooldown will be
> checked.How does senlin support different cooldown time for each
> scaling_in action?

The built-in policy will check if the action is a 'CLUSTER_SCALE_IN'
action or not. The policy is supposed to help you decide the number of
nodes you want to remove from a cluster. If you have a specific
requirement where you want 2 nodes to be removed under one condition, 1
node to be removed under another condition, you will create two webhooks
to explicitly specify that. A scaling policy will respect the 'count'
parameter you specified from the webhook.
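
As a sketch of what that looks like from the caller's side (the webhook URLs are
placeholders, and the exact request body format for passing parameters is an
assumption to be checked against the Senlin webhook documentation):

  import requests

  # Webhook created for the "mild" condition: remove 1 node.
  SCALE_IN_ONE_URL = 'http://senlin-api.example.com/v1/webhooks/WEBHOOK_1/trigger'
  requests.post(SCALE_IN_ONE_URL)

  # Webhook created for the "severe" condition: remove 2 nodes; the count
  # is passed as a webhook parameter so the scaling policy can respect it.
  SCALE_IN_TWO_URL = 'http://senlin-api.example.com/v1/webhooks/WEBHOOK_2/trigger'
  requests.post(SCALE_IN_TWO_URL, json={'params': {'count': 2}})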

Each policy has its default cooldown value when created. However, you
can easily override that default value when attaching such a policy to a
cluster. See 'senlin help cluster-policy-attach' for more info.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing Bertrand Lallau as Octavia Core

2015-11-24 Thread Stephen Balukoff
+1

On Thu, Nov 19, 2015 at 11:08 AM, Michael Johnson 
wrote:

> +1
>
> On Thu, Nov 19, 2015 at 9:35 AM, Eichberger, German
>  wrote:
> > All,
> >
> >
> >
> > As I said in a previous e-mail I am really excited about the deep talent
> in the Octavia sub-project. So it is my pleasure to propose Bertrand Lallau
> (irc blallau) as a new core for the OpenStack Neutron Octavia sub project.
> His contributions [1] are in line with other cores and he has been an
> active member of our community. Current cores please vote by replying to
> this e-mail.
> >
> > Thanks,
> > German
> >
> >
> > [1] http://stackalytics.com/?project_type=openstack=octavia
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Henry Nash
Good, wide ranging discussion.

From my point of view, this isn’t about trusting cores; rather it is (as was pointed 
out by others) about ensuring that people with different customer perspectives are part of 
the approval. Of course, you could argue they could have -1’d it anyway, but I 
think ensuring cross-company approval helps us overall, and so I’m comfortable 
with, and support, the existing approach.

Henry
> On 24 Nov 2015, at 09:06, Henry Nash  wrote:
> 
> Good, wide ranging discussion.
> 
> From my point of view, this isn’t about trusting cores, rather (as was 
> pointed out by others) ensuring people with different customer perspectives 
> be part of the approval. Of course, you could argue they could have -1’d it 
> anyway, but I think ensuring cross-company approval helps us overall, and so 
> I’m comfortable and support with the existing approach.
> 
> Henry
>> On 24 Nov 2015, at 05:55, Clint Byrum > > wrote:
>> 
>> Excerpts from Adam Young's message of 2015-11-23 20:21:47 -0800:
>>> On 11/23/2015 11:42 AM, Morgan Fainberg wrote:
 Hi everyone,
 
 This email is being written in the context of Keystone more than any 
 other project but I strongly believe that other projects could benefit 
 from a similar evaluation of the policy.
 
 Most projects have a policy that prevents the following scenario (it 
 is a social policy not enforced by code):
 
 * Employee from Company A writes code
 * Other Employee from Company A reviews code
 * Third Employee from Company A reviews and approves code.
 
 This policy has a lot of history as to why it was implemented. I am 
 not going to dive into the depths of this history as that is the past 
 and we should be looking forward. This type of policy is an actively 
 distrustful policy. With exception of a few potentially bad actors 
 (again, not going to point anyone out here), most of the folks in the 
 community who have been given core status on a project are trusted to 
 make good decisions about code and code quality. I would hope that 
 any/all of the Cores would also standup to their management chain if 
 they were asked to "just push code through" if they didn't sincerely 
 think it was a positive addition to the code base.
 
 Now within Keystone, we have a fair amount of diversity of core 
 reviewers, but we each have our specialities and in some cases 
 (notably KeystoneAuth and even KeystoneClient) getting the required 
 diversity of reviews has significantly slowed/stagnated a number of 
 reviews.
 
 What I would like us to do is to move to a trustful policy. I can 
 confidently say that company affiliation means very little to me when 
 I was PTL and nominating someone for core. We should explore making a 
 change to a trustful model, and allow for cores (regardless of company 
 affiliation) review/approve code. I say this since we have clear steps 
 to correct any abuses of this policy change.
 
 With all that said, here is the proposal I would like to set forth:
 
 1. Code reviews still need 2x Core Reviewers (no change)
 2. Code can be developed by a member of the same company as both core 
 reviewers (and approvers).
 3. If the trust that is being given via this new policy is violated, 
 the code can [if needed], be reverted (we are using git here) and the 
 actors in question can lose core status (PTL discretion) and the 
 policy can be changed back to the "distrustful" model described above.
 
 I hope that everyone weighs what it means within the community to 
 start moving to a trusting-of-our-peers model. I think this would be a 
 net win and I'm willing to bet that it will remove noticeable 
 roadblocks [and even make it easier to have an organization work 
 towards stability fixes when they have the resources dedicated to it].
 
 Thanks for your time reading this.
>>> 
>>> So, having been one of the initial architects of said policy, I'd like 
>>> to reiterate what I felt at the time.  The policy is in place as much to 
>>> protect the individual contributors as the project.  If I was put in a 
>>> position where I had to review and approve a coworker's code changes, it 
>>> is easier for me to push back on a belligerent manager to say "this 
>>> violates project policy."
>>> 
>>> But, even this is a more paranoid rationale than I feel now.  Each of us 
>>> has a perspective based on our customer base.   People make decisions 
>>> based on what they feel to be right, but right for a public cloud 
>>> provider and right for an Enterprise Software vendor will be different.  
>>> Getting a change reviewed by someone outside your organization is for 
>>> perspective.  Treat it as a brake against group think.
>>> 
>>> I know and trust all of the current Keystone core very well.  I 

Re: [openstack-dev] [release] process change for closing bugs when patches merge

2015-11-24 Thread Thierry Carrez
Steve Martinelli wrote:
> So it's only for this time around (Mitaka-1) that I'll have to tag bugs
> as fix-released, because the release automation will just leave a comment?

We'll likely turn all FixCommitted bugs to FixReleased (one last time)
as part of the transition, so you don't really need to do anything.

> Going forward, the bugs will be automatically marked as fix-released, so
> the automation won't change their states when we release?

Right. The automation will just add a comment to Launchpad saying in
which release the bugfix was shipped, as an information bit for whoever
created the task.

Quick background: One issue with Launchpad is that it's trying to be a
task tracker (a before-landing team work planning and organization tool)
and a change tracker (an after-landing precise account of what made it
into which release), all within the same data model. The trick being,
one side would impact the other + the real list of "what made it into
which release" lives in git history, and Launchpad did not have a
totally accurate picture of that. Hence the move to our own tarball
hosting and reno for change tracking. With this last change, we complete
the separation and can use Launchpad purely as a task tracker. So
project teams can use milestone targeting to plan work (or not) however
they see fit, without impacting release management.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Time to increase our IRC Meetings infrastructure?

2015-11-24 Thread Thierry Carrez
Tony Breeds wrote:
> [...]
> I understand that we want to keep the number of parallel meetings to a 
> minimum,
> but is it time to add #openstack-meeting-5?

Some slots are taken by groups that have not been holding meetings for
quite a long time. For good hygiene, I'd like us to clean those up
/before/ we consider adding a new meeting channel. It might well still
be necessary to create a new channel after the cleanup, but we need to
do that first.

It's a bit of painful work (especially for teams without a clear
meeting_id) so it's been safely staying at the bottom of my TODO list
for a while...

> [...]
> Confusion
> =
> So this is really quite trivial.  As you can see above we don't have an
> #openstack-meeting-2 channel; the ...-alt is clearly that, but still people are
> confused.  How would people feel about creating #openstack-meeting-2 and 
> making
> it redirect to #openstack-meeting-alt?

I'd rather standardize and rename -alt to -2 (i.e. redirect -alt to -2
on IRC, and bulk-change all meetings YAML from -alt to -2).

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-24 Thread Prathyusha Guduri
Hi Sean,

Thanks for you kind help.

I did the following.

# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu
"
\
"trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
# apt-get update && apt-get dist-upgrade

and then uninstalled the libvirt and qemu that were installed manually and
then ran stack.sh after cleaning and unstacking.
Now fortunately libvirt and qemu satisfy minimum requirements.

$ virsh --version
1.2.12

$ kvm --version
/usr/bin/kvm: line 42: /tmp/qemu.orig: Permission denied
QEMU emulator version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.3~cloud0),
Copyright (c) 2003-2008 Fabrice Bellard


Am using an ubuntu 14.04 system
$ uname -a
Linux ubuntu-Precision-Tower-5810 3.13.0-24-generic #46-Ubuntu SMP Thu Apr
10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

After stack.sh, which was successful, I tried creating a new instance, which
gave an ERROR again.

$ nova list
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                                              |
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+
| 31a7e160-d04c-4216-91cf-30ce86c2b1fa | demo-instance1 | ERROR  | -          | NOSTATE     | private=10.0.0.3, fd34:f4c5:412:0:f816:3eff:fea4:b9fe |
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+

$ sudo service ovs-dpdk status
sourcing config
ovs alive
VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
2015-11-24T10:23:25Z|00126|dpdk|INFO|Socket
/var/run/openvswitch/vhufb8052e5-d3 created for vhost-user port
vhufb8052e5-d3
2015-11-24T10:23:25Z|4|dpif_netdev(pmd18)|INFO|Core 2 processing port
'vhufb8052e5-d3'
2015-11-24T10:23:25Z|2|dpif_netdev(pmd19)|INFO|Core 8 processing port
'dpdk0'
2015-11-24T10:23:25Z|00127|bridge|INFO|bridge br-int: added interface
vhufb8052e5-d3 on port 6
2015-11-24T10:23:25Z|5|dpif_netdev(pmd18)|INFO|Core 2 processing port
'dpdk0'
2015-11-24T10:23:26Z|00128|connmgr|INFO|br-int<->unix: 1 flow_mods in the
last 0 s (1 deletes)
2015-11-24T10:23:26Z|00129|ofp_util|INFO|normalization changed ofp_match,
details:
2015-11-24T10:23:26Z|00130|ofp_util|INFO| pre:
in_port=5,nw_proto=58,tp_src=136
2015-11-24T10:23:26Z|00131|ofp_util|INFO|post: in_port=5
2015-11-24T10:23:26Z|00132|connmgr|INFO|br-int<->unix: 1 flow_mods in the
last 0 s (1 deletes)
2015-11-24T10:23:26Z|00133|connmgr|INFO|br-int<->unix: 1 flow_mods in the
last 0 s (1 deletes)
2015-11-24T10:23:29Z|00134|bridge|WARN|could not open network device
vhufb8052e5-d3 (No such device)
VHOST_CONFIG: socket created, fd:52
VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
2015-11-24T10:23:29Z|00135|dpdk|INFO|Socket
/var/run/openvswitch/vhufb8052e5-d3 created for vhost-user port
vhufb8052e5-d3
2015-11-24T10:23:29Z|6|dpif_netdev(pmd18)|INFO|Core 2 processing port
'vhufb8052e5-d3'
2015-11-24T10:23:29Z|3|dpif_netdev(pmd19)|INFO|Core 8 processing port
'dpdk0'
2015-11-24T10:23:29Z|00136|bridge|INFO|bridge br-int: added interface
vhufb8052e5-d3 on port 7
2015-11-24T10:23:30Z|7|dpif_netdev(pmd18)|INFO|Core 2 processing port
'dpdk0'
0

I understand that ovs-dpdk is running. The error log of n-cpu.log is

2015-11-24 15:47:27.957 ERROR nova.compute.manager
[req-24fc3f16-ccd5-4e2d-b583-60ade23bc1ed None
None] No compute node record for host
ubuntu-Precision-Tower-5810
2015-11-24 15:47:27.964 WARNING nova.compute.monitors
[req-24fc3f16-ccd5-4e2d-b583-60ade23bc1ed None
None] Excluding nova.compute.monitors.cpu
monitor virt_driver. Not in the list of enabled monitors
(CONF.compute_monitors).
2015-11-24 15:47:27.964 INFO nova.compute.resource_tracker
[req-24fc3f16-ccd5-4e2d-b583-60ade23bc1ed None
None] Auditing locally available compute
resources for node ubuntu-Precision-Tower-5810
2015-11-24 15:47:28.087 DEBUG nova.compute.resource_tracker
[req-24fc3f16-ccd5-4e2d-b583-60ade23bc1ed None
None] Hypervisor: free VCPUs: 9
from (pid=7880) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:588

I don't understand where I am missing out now.



On Mon, Nov 23, 2015 at 7:50 PM, Mooney, Sean K 
wrote:

> Hi
>
> qemu version was 2.0.0 does not support mapping hugepage as shared.
> as a result the dpdk implementation of vhost-user cannot function with
> this version.
> Similarly libvirt 1.2.2 has no knowledge of vhost-user.
>
> If you are on fedora 21 then the virt-preview repo packages the required
> Libvirt 

Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Thierry Carrez
Clint Byrum wrote:
> Excerpts from Brad Topol's message of 2015-11-23 13:38:34 -0800:
>> So to avoid the perception of a single company owning a piece of code,  at
>> IBM our policy for major projects like Cinder, Nova and currently many
>> parts of Keystone (except pycadf) is to make sure we do not do the
>> following for major OpenStack projects:
>>* Employee from Company IBM writes code
>>* Other Employee from Company IBM reviews code
>>* Third Employee from Company IBM  reviews and approves code.
>>
>> Again for certain small projects we relax this rule.  And certainly we
>> could discuss relaxing this rule for larger portions of Keystone if needed.
>> But as a general policy we strive to stick to the distrust model as much as
>> we can as the "perception of impropriety" that exists with the  trusting
>> model can result in tangible headaches.
> 
> Thanks Brad. I think this is an interesting example where it's not
> the single organization that is not being trusted by the community,
> but the community that is not being trusted by the single organization.
> Their behavior may be warranted, but IMO a community member refusing to
> respect a contribution because it is from a single organization that
> is not theirs is an example of a community member misbehaving, which
> I feel should be dealt with quietly but swiftly. We should strive to
> understand anyone who feels this way, and do what we can to _address_
> their concerns, instead of avoiding them.

Right. The code is either in and supported by everyone, or out. If some
contributors object to the code being in, that should be resolved when
the code is being proposed (or initially merged), rather than by
assuming some areas of the code are now a specific organization's fault
(and therefore segmenting your code maintenance duties).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Puppet OpenStack Mid-Cycle (Mitaka)

2015-11-24 Thread Emilien Macchi
Hello,

If you're involved in Puppet OpenStack or if you want to be involved,
please look at this poll: http://goo.gl/forms/lsBf55Ru8L

Thanks a lot for your time,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] tempest tests in gate not running

2015-11-24 Thread Anusha Ramineni
Tim,

Previous failure on gate is fixed with this patch
https://review.openstack.org/#/c/247314/
Now congress tempest tests are running, so our tempest plugin worked :)
http://logs.openstack.org/10/237510/5/check/gate-congress-dsvm-api/0bdbb53/console.html

Current gate failures report a different issue which should be solved by
this patch
https://review.openstack.org/#/c/247375/2

Best Regards,
Anusha

On 23 November 2015 at 22:00, Tim Hinrichs  wrote:

> It looks like our tempest tests aren't running in the gate.  Here's a
> recent patch that just merged this morning.
>
> https://review.openstack.org/#/c/242305/
>
> If you go to the testr results, you'll see a list of all the tests that
> were run.
>
>
> http://logs.openstack.org/05/242305/5/gate/gate-congress-dsvm-api/979bc9d/testr_results.html.gz
>
> Two things: (i) there are many tests here that are not Congress-specific
> and (ii) I don't see the Congress tests running at all.  During manual
> testing both these issues were handled by the following lines in our gate
> job config.
>
>
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L33-L34
>
> The good news, at least, is that congress is getting properly spun up.
>
>
> http://logs.openstack.org/05/242305/5/gate/gate-congress-dsvm-api/979bc9d/logs/screen-congress.txt.gz
>
> Would someone want to volunteer to dig into this?
>
> Tim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-24 Thread Julien Danjou
On Mon, Nov 23 2015, Jordan Pittier wrote:

> Is the DB the limiting factor of openstack performance ? De we have hard
> evidence of this ? We need numbers before acting otherwise it will be an
> endless discussion.
>
> When I look at the number of race conditions we had/have in OpenStack, it
> seems scary to remove the FK in the DB. FK look like a "guardian" to me and
> we should aim at enforcing more consistency/integrity, not the contrary.
>
> Also, this is an open source project with contributors with different
> skills and experience (beginners, part time contributor etc.). Maybe this
> is something to also consider.

I agree with Jordan here, that this is to me a discussion that fits into
the "premature optimization" category.

Once Nova (and many other OpenStack projects) are correct (e.g. no more
race condition, orphan data, etc), it'd be great to start having this
kind of conversation again.

(and +1 to Mike over good points).
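
A tiny, self-contained illustration of the "guardian" point above, using
hypothetical tables that have nothing to do with nova's real schema; with the
FOREIGN KEY in place the database itself rejects orphaned child rows:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')   # sqlite needs FK enforcement enabled
conn.execute('CREATE TABLE instances (id INTEGER PRIMARY KEY)')
conn.execute('CREATE TABLE instance_faults ('
             ' id INTEGER PRIMARY KEY,'
             ' instance_id INTEGER REFERENCES instances(id))')

conn.execute('INSERT INTO instances (id) VALUES (1)')
conn.execute('INSERT INTO instance_faults (id, instance_id) VALUES (1, 1)')  # fine

try:
    # no instance with id 42 exists, so the "guardian" steps in
    conn.execute('INSERT INTO instance_faults (id, instance_id) VALUES (2, 42)')
except sqlite3.IntegrityError as exc:
    print('rejected by the FK: %s' % exc)

Drop the FOREIGN KEY and that second insert silently succeeds, which is
exactly the class of orphan data the application code then has to police.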

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Julien Danjou
On Mon, Nov 23 2015, Morgan Fainberg wrote:

> What I would like us to do is to move to a trustful policy. I can
> confidently say that company affiliation means very little to me when I was
> PTL and nominating someone for core. We should explore making a change to a
> trustful model, and allow for cores (regardless of company affiliation)
> review/approve code. I say this since we have clear steps to correct any
> abuses of this policy change.

This is yet another case of permission-vs-forgiveness. Trustful is the
model we applied from the beginning the Telemetry team with great
success. And we never had any abuse of this.

In the end it comes down to picking correctly your core reviewers. Not
by setting the bar immensely high, but by building trust step by step,
and bonding, in such a way that the peer trust becomes more important
than the company pressure.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cloudkitty] Remove inactive contributors from cloudkitty-core

2015-11-24 Thread Stéphane Albert
Hi,

Some people in the cloudkitty-core group are not active anymore. They
haven't produced a single contribution during the last cycle. Here is
the list of the two people:

- Adrian Turjak
- François Magimel

I think we should remove them from the group. We've got new faces in the
project, let's get some space for them ready :).

Are you guys OK with this?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][sfc]

2015-11-24 Thread Oguz Yarimtepe

Hi,

Is there any working Devstack configuration for sfc testing? I just saw 
one commit that is waiting for review.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2015-11-24 Thread 少合冯
Hi Paul,
Comments inline:

2015-11-23 16:36 GMT+08:00 Paul Carlton :

> John
>
> At the live migration sub team meeting I undertook to look at the issue
> of progress reporting.
>
> The use cases I'm envisaging are...
>
> As a user I want to know how much longer my instance will be migrating
> for.
>
> As an operator I want to identify any migration that are making slow
>  progress so I can expedite their progress or abort them.
>
> The current implementation reports on the instance's migration with
> respect to memory transfer, using the total memory and memory remaining
> fields from libvirt to report the percentage of memory still to be
> transferred.  Due to the instance writing to pages already transferred
> this percentage can go up as well as down.  Daniel has done a good job
> of generating regular log records to report progress and highlight lack
> of progress but from the API all a user/operator can see is the current
> percentage complete.  By observing this periodically they can identify
> instance migrations that are struggling to migrate memory pages fast
> enough to keep pace with the instance's memory updates.
>
> The problem is that at present we have only one field, the instance
> progress, to record progress.  With a live migration there are measures
>

[Shaohe]:

From this link, the OpenStack API ref:
http://developer.openstack.org/api-ref-compute-v2.1.html#listDetailServers
describes the instance progress as "A percentage value of the build progress."
But for the libvirt driver it is actually the migration progress,
while for other drivers it is the build progress.
And there is a spec proposing some changes:
https://review.openstack.org/#/c/249086/



> of progress, how much of the ephemeral disks (not needed for shared
> disk setups) have been copied and how much of the memory has been
> copied. Both can go up and down as the instance writes to pages already
> copied causing those pages to need to be copied again.  As Daniel says
> in his comments in the code, the disk size could dwarf the memory so
> reporting both in single percentage number is problematic.
>
> We could add an additional progress item to the instance object, i.e.
> disk progress and memory progress but that seems odd to have an
> additional progress field only for this operation so this is probably
> a non starter!
>
> For operations staff with access to log files we could report disk
> progress as well as memory in the log file, however that does not
> address the needs of users and whilst log files are the right place for
> support staff to look when investigating issues operational tooling
> is much better served by notification messages.
>
> Thus I'd recommend generating periodic notifications during a migration
> to report both memory and disk progress would be useful?  Cloud
> operators are likely to manage their instance migration activity using
> some orchestration tooling which could consume these notifications and
> deduce what challenges the instance migration is encountering and thus
> determine how to address any issues.
>
> The use cases are only partially addressed by the current
> implementation, they can repeatedly get the server details and look at
> the progress percentage to see how quickly (or even if) it is
> increasing and determine how long the instance is likely to be
> migrating for.  However for an instance that has a large disk and/or
> is doing a high rate of disk i/o they may see the percentage complete
> (i.e. memory) repeatedly showing 90%+ but the instance migration does
> not complete.
>
> The nova spec https://review.openstack.org/#/c/248472/ suggests making
> detailed information available via the os-migrations object.  This is
> not a bad idea but I have some issues with the implementation that I
> will share on that spec.
>

[Shaohe]:

About this spec, Daniel has given some comments on it, and we have updated it.
Maybe we can work together on it to make it better.

I have worked on libvirt multi-thread compressed migration and have looked
into some live migration performance optimizations.

That led to the following ideas (see the sketch further below):
1. Let nova expose more live migration details, such as the RAM statistics,
the xbzrle-cache status, and in future the multi-thread compression
information, and so on.
2. Nova can enable auto-converge and tune the xbzrle-cache and multi-thread
compression dynamically.
3. Other projects can then build a good strategy to tune the live migration
based on those migration details.


For example:
the cache size is a key performance factor for xbzrle; the best case is a
cache size equal to the guest's total RAM, but that may not always be
available on the host. A higher multi-thread compression level is better, but
it consumes more CPU, and auto-converge slows down the guest's CPUs.
So things are not always as good as I had expected.
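
To make idea 1 concrete, here is a minimal sketch (illustration only, not
nova code) of the kind of statistics libvirt already exposes through the
domain job stats API; the keys are only populated while a migration job is
actually running, and the domain name is just an example:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('demo-instance1')

stats = dom.jobStats()
ram_total = stats.get('memory_total', 0)
ram_remaining = stats.get('memory_remaining', 0)
if ram_total:
    print('RAM transferred: %.1f%%'
          % (100.0 * (ram_total - ram_remaining) / ram_total))

# xbzrle cache statistics, present when page-delta compression is enabled
print('xbzrle cache size  : %s' % stats.get('compression_cache', 'n/a'))
print('xbzrle cache misses: %s' % stats.get('compression_cache_misses', 'n/a'))

Exposing something like this through nova (or its notifications) is what
would let an external tool decide when to grow the cache, raise the
compression level or turn on auto-converge.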

Also, we submitted a talk about this idea to the summit, but it was not accepted.
Topic: 
Link: 

Re: [openstack-dev] [Neutron] Security Groups OVS conntrack support

2015-11-24 Thread Tapio Tallgren
Thanks! I got it now: OpenStack already allows all "related" connections,
and you need connection tracking for that. This was not very clear to me
from the documentation...

-Tapio

On Mon, Nov 23, 2015 at 10:14 PM Russell Bryant  wrote:

> On 11/23/2015 02:16 PM, Kevin Benton wrote:
> > Security groups already use connection tracking. It's just done via a
> > linux bridge right now because the versions of OVS shipped with most
> > distros have no native conntrack support.
>
> This post discusses it in the context of OVN, but gets down to showing
> what the flows look like.  It also includes a link to a presentation
> about ovs+conntrack given at the OpenStack Summit in Vancouver.
>
>
> http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
>
> The most recent talk on this topic was "The State of Stateful Services"
> at the OVS Conference last week:
>
> http://openvswitch.org/support/ovscon2015/16/1620-stringer.pdf
> https://www.youtube.com/watch?v=PV2rxxb6lwQ
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Balázs Gibizer
> From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
> Sent: November 23, 2015 22:33
> On 11/23/2015 2:23 PM, Andrew Laski wrote:
> > On 11/23/15 at 04:43pm, Balázs Gibizer wrote:
> >>> From: Andrew Laski [mailto:and...@lascii.com]
> >>> Sent: November 23, 2015 17:03
> >>>
> >>> On 11/23/15 at 08:54am, Ryan Rossiter wrote:
> >>> >
> >>> >
> >>> >On 11/23/2015 5:33 AM, John Garbutt wrote:
> >>> >>On 20 November 2015 at 09:37, Balázs Gibizer
> >>> >> wrote:
> >>> >>>
> >>> >>>
> >> >>
> >>> >>There is a bit I am conflicted/worried about, and thats when we
> >>> >>start including verbatim, DB objects into the notifications. At
> >>> >>least you can now quickly detect if that blob is something
> >>> >>compatible with your current parsing code. My preference is really
> >>> >>to keep the Notifications as a totally separate object tree, but I
> >>> >>am sure there are many cases where that ends up being seemingly
> >>> >>stupid duplicate work. I am not expressing this well in text form
> >>> >>:(
> >>> >Are you saying we don't want to be willy-nilly tossing DB objects
> >>> >across the wire? Yeah that was part of the rug-pulling of just
> >>> >having the payload contain an object. We're automatically tossing
> >>> >everything with the object then, whether or not some of that was
> >>> >supposed to be a secret. We could add some sort of property to the
> >>> >field like dont_put_me_on_the_wire=True (or I guess a
> >>> >notification_ready() function that helps an object sanitize
> >>> >itself?) that the notifications will look at to know if it puts
> >>> >that on the wire-serialized dict, but that's adding a lot more
> >>> >complexity and work to a pile that's already growing rapidly.
> >>>
> >>> I don't want to be tossing db objects across the wire.  But I also
> >>> am not convinced that we should be tossing the current objects over
> >>> the wire either.
> >>> You make the point that there may be things in the object that
> >>> shouldn't be exposed, and I think object version bumps is another
> >>> thing to watch out for.
> >>> So far the only object that has been bumped is Instance but in doing
> >>> so no notifications needed to change.  I think if we just put
> >>> objects into notifications we're coupling the notification versions
> >>> to db or RPC changes unnecessarily.  Some times they'll move
> >>> together but other times, like moving flavor into instance_extra,
> >>> there's no reason to bump notifications.
> >>
> >>
> >> Sanitizing existing versioned objects before putting them to the wire
> >> is not hard to do.
> >> You can see an example of doing it in
> >> https://review.openstack.org/#/c/245678/8/nova/objects/service.py,cm
> >> L382.
> >> We don't need extra effort to take care of minor version bumps
> >> because that does not break a well written consumer. We do have to
> >> take care of the major version bumps but that is a rare event and
> >> therefore can be handled one by one in a way John suggested, by keep
> >> sending the previous major version for a while too.
> >
> > That review is doing much of what I was suggesting.  There is a
> > separate notification and payload object.  The issue I have is that
> > within the ServiceStatusPayload the raw Service object and version is
> > being dumped, with the filter you point out.  But I don't think that
> > consumers really care about tracking Service object versions and
> > dealing with compatibility there, it would be easier for them to track
> > the ServiceStatusPayload version which can remain relatively stable
> > even if Service is changing to adapt to db/RPC changes.
> Not only do they not really care about tracking the Service object versions,
> they probably also don't care about what's in that filter list.
> 
> But I think you're getting on the right track as to where this needs to go. We
> can integrate the filtering into the versioning of the payload.
> But instead of a blacklist, we turn the filter into a white list. If the 
> underlying
> object adds a new field that we don't want/care if people know about, the
> payload version doesn't have to change. But if we add something (or if we're
> changing the existing fields) that we want to expose, we then assert that we
> need to update the version of the payload, so the consumer can look at the
> payload and say "oh, in 1.x, now I get ___" and can add the appropriate
> checks/compat. Granted with this you can get into rebase nightmares ([1]
> still haunts me in my sleep), but I don't see us frantically changing the
> exposed fields all too often. This way gives us some form of pseudo-pinning
> of the subobject. Heck, in this method, we could even pass the whitelist on
> the wire right? That way we tell the consumer explicitly what's available to
> them (kinda like a fake schema).

I think I see your point, and it seems like a good way forward. Let's turn the 
black list into 
a white list. Now I'm thinking about creating a new Field type something 

Re: [openstack-dev] [devstack]Question about using Devstack

2015-11-24 Thread Young Yang
@Bob Ball
My goal is to reboot the devstack DomU without reinstalling devstack.

Thanks to @ALL

I finally managed to reboot devstack successfully without reinstalling, in my
case in the following way.


First, I add `exit 0` before the first line of stack.sh to stop OpenStack
from running.

Then I use these commands to start the services:


cd /etc/apache2/sites-enabled && sudo ln -s ../sites-available/keystone.conf
sudo  service apache2 restart
sudo service rabbitmq-server restart
sudo service mysql restart
sudo service tgt restart
truncate -s 10737418240 /opt/stack/data/stack-volumes-default-backing-file
truncate -s 10737418240 /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
sudo losetup -f --show /opt/stack/data/stack-volumes-default-backing-file
CINDER_DEVICE=`sudo losetup -f --show /opt/stack/data/stack-volumes-lvmdriver-1-backing-file`
sudo pvcreate $CINDER_DEVICE
sudo vgcreate  stack-volumes-lvmdriver-1 $CINDER_DEVICE
cd /opt/stack/devstack/   && ./rejoin-stack.sh


Some data are lost such as the data in lvm volume used by cinder. But  most
of my data are kept.




On Mon, Nov 23, 2015 at 8:51 PM, Bob Ball  wrote:

> Hi Young,
>
>
>
> venv create: /opt/stack/tempest/.tox/venv
> venv installdeps: -r/opt/stack/tempest/requirements.txt,
> -r/opt/stack/tempest/test-requirements.txt
> ERROR: invocation failed (exit code 1), logfile:
> /opt/stack/tempest/.tox/venv/log/venv-1.log
> ERROR: actionid: venv
> msg: getenv
> cmdargs: [local('/opt/stack/tempest/.tox/venv/bin/pip'), 'install', '-U',
> '-r/opt/stack/tempest/requirements.txt',
> '-r/opt/stack/tempest/test-requirements.txt']
>
>
>
> My interpretation is that tox is using pip directly, which is now
> outside of devstack and so knows nothing about the OFFLINE flag.
>
> This is in the “install_tempest” function from devstack/lib/tempest.  A
> code change would be needed to stop tox from running if OFFLINE was
> specified.  At first glance, I believe this is probably a devstack bug.
>
>
>
> I suspect that, since Tempest creates flavors, you will not be able to use
> tempest in OFFLINE=true mode, but you may be able to use other services if
> you remove tempest from ENABLED_SERVICES.
>
>
>
> @jordan.pittier  @Bob Ball
>
> If I can ensure my IP address , localrc and everything else are not
> changed, is there any way I can achieve my goal?
>
>
>
> I don’t think I understand what your goal is – perhaps if you clarify
> exactly what you’re trying to achieve then we can answer the question.
>
>
>
> Thanks,
>
>
>
> Bob
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-24 Thread Mooney, Sean K
Hi, would you be able to attach the
n-cpu log from the compute node and the
n-sch and q-svc logs from the controller, so we can see if there is a stack trace
relating to the
VM boot?

Also can you confirm ovs-dpdk is running correctly on the compute node by 
running
sudo service ovs-dpdk status

the neutron and networking-ovs-dpdk commits are from their respective 
stable/kilo branches so they should be compatible
provided no breaking changes have been merged to either branch.

regards
sean.

From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Przemek,

Thanks For the response,

Here are the commit ids for Neutron and networking-ovs-dpdk

[stack@localhost neutron]$ git log --format="%H" -n 1
026bfc6421da796075f71a9ad4378674f619193d
[stack@localhost neutron]$ cd ..
[stack@localhost ~]$ cd networking-ovs-dpdk/
[stack@localhost networking-ovs-dpdk]$  git log --format="%H" -n 1
90dd03a76a7e30cf76ecc657f23be8371b1181d2

The Neutron agents are up and running in compute node.

Thanks
Praveen


On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw 
> wrote:
Hi Praveen,

There’s been some changes recently to networking-ovs-dpdk, it no longer host’s 
a mech driver as the openviswitch mech driver in Neutron supports vhost-user 
ports.
I guess something went wrong and the version of Neutron is not matching 
networking-ovs-dpdk. Can you post commit ids of Neutron and networking-ovs-dpdk.

The other possibility is that the Neutron agent is not running/died on the 
compute node.
Check with:
neutron agent-list

Przemek

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 12:18 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

I am trying to set up an OpenStack (Kilo) installation using ovs-dpdk through a
devstack installation.

I have followed the " 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
 " documentation.

I used the same versions as in documentation (fedora21, with right kernel).

My OpenStack installation is successful on both the controller and the compute node.
I have used the example local.conf given in the documentation.
But if I try to spawn a VM, I get the following error:

"NovaException: Unexpected vif_type=binding_failed"

It would be really helpful if you can point out how to debug and fix this error.

Thanks
Praveen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-24 Thread Ramana Raja
Hi,

I was trying to figure out the state of DevStack plugin
for Ceph, but couldn't find its source code and ran into
the following doubt. At Mitaka 1, i.e. next week, wouldn't
Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
that still use extras.d's hook script instead of a plugin, stop working?
For reference,
https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
[Deepak, thanks for reminding me about the deprecation of extra.ds.]

The patch that seeks to integrate Ceph DevStack plugin with Jenkins
gates is under review,
https://review.openstack.org/#/c/188768/
It's outdated, as the devstack-ceph-plugin it seeks to integrate
seems to be in the now obsolete 'stackforge/' namespace and hasn't seen
activity for quite some time.

Even if I'm mistaken about all of this can someone please point me to
the Ceph DevStack plugin's source code? I'm interested to know whether
the plugin would be identical to the current Ceph hook script,
extras.d/60-ceph.sh ?

Thanks,
Ramana



 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] how to start heat through devstack

2015-11-24 Thread Sergey Kraynev
Hi, P. Ramanjaneya Reddy.

Please read Heat documentation page:
http://docs.openstack.org/developer/heat/getting_started/on_devstack.html#configure-devstack-to-enable-heat

In your case it looks like you are using localrc with a new version of
devstack, so you need to use a local.conf file instead (a minimal sketch follows below).
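
For reference, a minimal local.conf enabling Heat could look roughly like
this (same services and image as in your localrc, just moved into the
[[local|localrc]] section; adjust to your environment):

[[local|localrc]]
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
IMAGE_URLS+=",http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2"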

If that does not help, you will need to provide some tracebacks from the
corresponding Heat services for the mentioned errors (the messages you
quoted just say that the services are not running, without any
additional information about the root cause).

Also I suggest to use https://ask.openstack.org/en/questions/ for such
questions ;)
Thank you.

On 24 November 2015 at 14:50, P. Ramanjaneya Reddy  wrote:
> Hi, how do I start heat using devstack? I've added the below services in localrc
> but it's throwing an error:
>
> localrc:
>
> ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
> IMAGE_URLS+=",http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2"
>
> Error:
> 2015-11-24 11:19:28.716 | Error: Service h-api-cfn is not running
> 2015-11-24 11:19:28.717 | Error: Service h-api-cw is not running
> 2015-11-24 11:19:28.718 | Error: Service h-api is not running
>
>
> Any input on how to make heat work?
>
> Thanks & Regards,
> Ramanji.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] process change for closing bugs when patches merge

2015-11-24 Thread Doug Hellmann
Excerpts from Armando M.'s message of 2015-11-23 17:37:47 -0800:
> On 23 November 2015 at 13:58, Doug Hellmann  wrote:
> 
> > As part of completing the release automation work and deprecating our
> > use of Launchpad for release content tracking, we also want make some
> > changes to the way patches associated with bugs are handled.
> >
> > Right now, when a patch with Closes-Bug in the commit message merges,
> > the bug status is updated to "Fix Committed" to indicate that the change
> > is in git, but not a formal release, and we rely on the release tools to
> > update the bug status to "Fix Released" when the release is  made. This
> > is one of the most error prone areas of the release process, due to
> > Launchpad service timeouts and other similar issues. The fact that we
> > cannot reliably automate this step is the main reason we want to stop
> > using Launchpad's release content tracking capabilities in the first
> > place.
> >
> > To make the release automation reliable, we are going to change the
> > release scripts to comment on bugs, but not update their status, when a
> > release is cut. Unfortunately, that leaves the bugs with "Fix Committed"
> > status, which is still considered "open" and so those bugs clutter up
> > the list of bugs for folks who are looking for ways to help. So, we
> > would like to change the default behavior of our CI and review system so
> > that when a patch with Closes-Bug in the commit message merges the bug
> > status is updated to "Fix Released" instead of "Fix Committed".
> >
> > We already have quite a few projects set up this way, using the
> > direct-release option to jeepyb (configured in the gerrit settings in
> > the project-config repository). I'm proposing that we change jeepyb's
> > behavior, rather than applying that flag to all of our projects. We will
> > also add a 'delay-release' flag to jeepyb for projects that want to
> > revert to the old behavior.
> >
> > Please let me know if this change would represent a significant
> > regression to your project's workflow.
> >
> 
> When would this change be in effect?

I would like to get it in place by next week, but I need to coordinate
with the infra team to make sure my proposed changes are technically
correct. I'll announce the change when it does go into effect, and
before we do the bulk update of existing tickets that Thierry
mentioned.

Doug

> 
> >
> > Doug
> >
> > The infra spec related to this work is:
> > https://review.openstack.org/#/c/245907/
> > The jeepyb change is: https://review.openstack.org/248922
> > The project-config change to remove the direct-release option from
> > projects: https://review.openstack.org/#/c/248923
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Random ssh errors in gate check jobs

2015-11-24 Thread Major Hayden
On 11/23/2015 06:32 AM, Jesse Pretorius wrote:
> Thanks for digging into this Major. It is a royal pain and will likely be 
> resolved with the release of Ansible 2, but for now we're stuck with having 
> to work around the issue with what we have.
> 
> I wonder, is there a difference in results or performance between using 
> paramiko or turning ssh pipelining off?

I tried running some jobs with pipelining on and off, but the errors still 
appeared.  It seems like the ssh client itself is part of the problem.  I 
haven't looked to see if Ubuntu has updated sshd recently in 14.04.
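
For reference, the two knobs being compared live in ansible.cfg; the values
here are only illustrative, flip them per test run:

[defaults]
# force the paramiko connection plugin instead of OpenSSH ('smart' is the default)
transport = paramiko

[ssh_connection]
# compare runs with and without ssh pipelining
pipelining = False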

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Andrew Laski

On 11/24/15 at 10:26am, Balázs Gibizer wrote:

From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
Sent: November 23, 2015 22:33
On 11/23/2015 2:23 PM, Andrew Laski wrote:
> On 11/23/15 at 04:43pm, Balázs Gibizer wrote:
>>> From: Andrew Laski [mailto:and...@lascii.com]
>>> Sent: November 23, 2015 17:03
>>>
>>> On 11/23/15 at 08:54am, Ryan Rossiter wrote:
>>> >
>>> >
>>> >On 11/23/2015 5:33 AM, John Garbutt wrote:
>>> >>On 20 November 2015 at 09:37, Balázs Gibizer
>>> >> wrote:
>>> >>>
>>> >>>
>> >>
>>> >>There is a bit I am conflicted/worried about, and thats when we
>>> >>start including verbatim, DB objects into the notifications. At
>>> >>least you can now quickly detect if that blob is something
>>> >>compatible with your current parsing code. My preference is really
>>> >>to keep the Notifications as a totally separate object tree, but I
>>> >>am sure there are many cases where that ends up being seemingly
>>> >>stupid duplicate work. I am not expressing this well in text form
>>> >>:(
>>> >Are you saying we don't want to be willy-nilly tossing DB objects
>>> >across the wire? Yeah that was part of the rug-pulling of just
>>> >having the payload contain an object. We're automatically tossing
>>> >everything with the object then, whether or not some of that was
>>> >supposed to be a secret. We could add some sort of property to the
>>> >field like dont_put_me_on_the_wire=True (or I guess a
>>> >notification_ready() function that helps an object sanitize
>>> >itself?) that the notifications will look at to know if it puts
>>> >that on the wire-serialized dict, but that's adding a lot more
>>> >complexity and work to a pile that's already growing rapidly.
>>>
>>> I don't want to be tossing db objects across the wire.  But I also
>>> am not convinced that we should be tossing the current objects over
>>> the wire either.
>>> You make the point that there may be things in the object that
>>> shouldn't be exposed, and I think object version bumps is another
>>> thing to watch out for.
>>> So far the only object that has been bumped is Instance but in doing
>>> so no notifications needed to change.  I think if we just put
>>> objects into notifications we're coupling the notification versions
>>> to db or RPC changes unnecessarily.  Some times they'll move
>>> together but other times, like moving flavor into instance_extra,
>>> there's no reason to bump notifications.
>>
>>
>> Sanitizing existing versioned objects before putting them to the wire
>> is not hard to do.
>> You can see an example of doing it in
>> https://review.openstack.org/#/c/245678/8/nova/objects/service.py,cm
>> L382.
>> We don't need extra effort to take care of minor version bumps
>> because that does not break a well written consumer. We do have to
>> take care of the major version bumps but that is a rare event and
>> therefore can be handled one by one in a way John suggested, by keep
>> sending the previous major version for a while too.
>
> That review is doing much of what I was suggesting.  There is a
> separate notification and payload object.  The issue I have is that
> within the ServiceStatusPayload the raw Service object and version is
> being dumped, with the filter you point out.  But I don't think that
> consumers really care about tracking Service object versions and
> dealing with compatibility there, it would be easier for them to track
> the ServiceStatusPayload version which can remain relatively stable
> even if Service is changing to adapt to db/RPC changes.
Not only do they not really care about tracking the Service object versions,
they probably also don't care about what's in that filter list.

But I think you're getting on the right track as to where this needs to go. We
can integrate the filtering into the versioning of the payload.
But instead of a blacklist, we turn the filter into a white list. If the 
underlying
object adds a new field that we don't want/care if people know about, the
payload version doesn't have to change. But if we add something (or if we're
changing the existing fields) that we want to expose, we then assert that we
need to update the version of the payload, so the consumer can look at the
payload and say "oh, in 1.x, now I get ___" and can add the appropriate
checks/compat. Granted with this you can get into rebase nightmares ([1]
still haunts me in my sleep), but I don't see us frantically changing the
exposed fields all too often. This way gives us some form of pseudo-pinning
of the subobject. Heck, in this method, we could even pass the whitelist on
the wire right? That way we tell the consumer explicitly what's available to
them (kinda like a fake schema).


I think I see your point, and it seems like a good way forward. Let's turn the
black list into
a white list. Now I'm thinking about creating a new Field type, something like
WhiteListedObjectField, which gets a type name (as the ObjectField does) but also gets
a white_list that describes which fields 
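
As a rough, hypothetical sketch of that white list idea (not actual nova or
oslo.versionedobjects code; the field names are only for illustration):

class WhiteListedObjectField(object):
    def __init__(self, objname, white_list):
        self.objname = objname              # name of the wrapped versioned object
        self.white_list = tuple(white_list)

    def to_primitive(self, obj):
        # Only whitelisted fields go on the wire; anything the wrapped object
        # gains later stays invisible to notification consumers until the
        # white list (and with it the payload version) is changed explicitly.
        return dict((name, getattr(obj, name))
                    for name in self.white_list if hasattr(obj, name))

# e.g. the fields a ServiceStatusPayload might pin down for its consumers
service_status_field = WhiteListedObjectField(
    'Service',
    ['host', 'binary', 'topic', 'disabled', 'disabled_reason', 'forced_down'])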

Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Lance Bragstad
I think one of the benefits of the current model was touched on earlier by
dstanek. If someone is working on something for their organization, they
typically bounce ideas of others they work with closely. This tends to be
people within the same organization. The groups developing the feature
might miss different perspectives on solving that particular problem.
Bringing in a fresh set of eyes from someone outside that organization can
be a huge benefit for the overall product.

I don't think that is the sole reason to keep the existing policy, but I do
think it's a positive side-effect.

On Tue, Nov 24, 2015 at 6:31 AM, David Chadwick 
wrote:

> Spot on. This is exactly the point I was trying to make
>
> David
>
> On 24/11/2015 11:20, Dolph Mathews wrote:
> > Scenarios I've been personally involved with where the
> > "distrustful" model either did help or would have helped:
> >
> > - Employee is reprimanded by management for not positively reviewing &
> > approving a coworkers patch.
> >
> > - A team of employees is pressured to land a feature with as fast as
> > possible. Minimal community involvement means a faster path to "merged,"
> > right?
> >
> > - A large group of reviewers from the author's organization repeatedly
> > throwing *many* careless +1s at a single patch. (These happened to not
> > be cores, but it's a related organizational behavior taken to an
> extreme.)
> >
> > I can actually think of a few more specific examples, but they are
> > already described by one of the above.
> >
> > It's not cores that I do not trust, its the organizations they operate
> > within which I have learned not to trust.
> >
> > On Monday, November 23, 2015, Morgan Fainberg  > > wrote:
> >
> > Hi everyone,
> >
> > This email is being written in the context of Keystone more than any
> > other project but I strongly believe that other projects could
> > benefit from a similar evaluation of the policy.
> >
> > Most projects have a policy that prevents the following scenario (it
> > is a social policy not enforced by code):
> >
> > * Employee from Company A writes code
> > * Other Employee from Company A reviews code
> > * Third Employee from Company A reviews and approves code.
> >
> > This policy has a lot of history as to why it was implemented. I am
> > not going to dive into the depths of this history as that is the
> > past and we should be looking forward. This type of policy is an
> > actively distrustful policy. With exception of a few potentially bad
> > actors (again, not going to point anyone out here), most of the
> > folks in the community who have been given core status on a project
> > are trusted to make good decisions about code and code quality. I
> > would hope that any/all of the Cores would also standup to their
> > management chain if they were asked to "just push code through" if
> > they didn't sincerely think it was a positive addition to the code
> base.
> >
> > Now within Keystone, we have a fair amount of diversity of core
> > reviewers, but we each have our specialities and in some cases
> > (notably KeystoneAuth and even KeystoneClient) getting the required
> > diversity of reviews has significantly slowed/stagnated a number of
> > reviews.
> >
> > What I would like us to do is to move to a trustful policy. I can
> > confidently say that company affiliation means very little to me
> > when I was PTL and nominating someone for core. We should explore
> > making a change to a trustful model, and allow for cores (regardless
> > of company affiliation) review/approve code. I say this since we
> > have clear steps to correct any abuses of this policy change.
> >
> > With all that said, here is the proposal I would like to set forth:
> >
> > 1. Code reviews still need 2x Core Reviewers (no change)
> > 2. Code can be developed by a member of the same company as both
> > core reviewers (and approvers).
> > 3. If the trust that is being given via this new policy is violated,
> > the code can [if needed], be reverted (we are using git here) and
> > the actors in question can lose core status (PTL discretion) and the
> > policy can be changed back to the "distrustful" model described
> above.
> >
> > I hope that everyone weighs what it means within the community to
> > start moving to a trusting-of-our-peers model. I think this would be
> > a net win and I'm willing to bet that it will remove noticeable
> > roadblocks [and even make it easier to have an organization work
> > towards stability fixes when they have the resources dedicated to
> it].
> >
> > Thanks for your time reading this.
> >
> > Regards,
> > --Morgan
> > PTL Emeritus, Keystone
> >
> >
> >
> >
> 

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Balázs Gibizer
> From: Andrew Laski [mailto:and...@lascii.com]
> Sent: November 24, 2015 15:35
> On 11/24/15 at 10:26am, Balázs Gibizer wrote:
> >> From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
> >> Sent: November 23, 2015 22:33
> >> On 11/23/2015 2:23 PM, Andrew Laski wrote:
> >> > On 11/23/15 at 04:43pm, Balázs Gibizer wrote:
> >> >>> From: Andrew Laski [mailto:and...@lascii.com]
> >> >>> Sent: November 23, 2015 17:03
> >> >>>
> >> >>> On 11/23/15 at 08:54am, Ryan Rossiter wrote:
> >> >>> >
> >> >>> >
> >> >>> >On 11/23/2015 5:33 AM, John Garbutt wrote:
> >> >>> >>On 20 November 2015 at 09:37, Balázs Gibizer
> >> >>> >> wrote:
> >> >>> >>>
> >> >>> >>>
> >> >> >>
> >> >>> >>There is a bit I am conflicted/worried about, and thats when we
> >> >>> >>start including verbatim, DB objects into the notifications. At
> >> >>> >>least you can now quickly detect if that blob is something
> >> >>> >>compatible with your current parsing code. My preference is
> >> >>> >>really to keep the Notifications as a totally separate object
> >> >>> >>tree, but I am sure there are many cases where that ends up
> >> >>> >>being seemingly stupid duplicate work. I am not expressing this
> >> >>> >>well in text form :(
> >> >>> >Are you saying we don't want to be willy-nilly tossing DB
> >> >>> >objects across the wire? Yeah that was part of the rug-pulling
> >> >>> >of just having the payload contain an object. We're
> >> >>> >automatically tossing everything with the object then, whether
> >> >>> >or not some of that was supposed to be a secret. We could add
> >> >>> >some sort of property to the field like
> >> >>> >dont_put_me_on_the_wire=True (or I guess a
> >> >>> >notification_ready() function that helps an object sanitize
> >> >>> >itself?) that the notifications will look at to know if it puts
> >> >>> >that on the wire-serialized dict, but that's adding a lot more
> >> >>> >complexity and work to a pile that's already growing rapidly.
> >> >>>
> >> >>> I don't want to be tossing db objects across the wire.  But I
> >> >>> also am not convinced that we should be tossing the current
> >> >>> objects over the wire either.
> >> >>> You make the point that there may be things in the object that
> >> >>> shouldn't be exposed, and I think object version bumps is another
> >> >>> thing to watch out for.
> >> >>> So far the only object that has been bumped is Instance but in
> >> >>> doing so no notifications needed to change.  I think if we just
> >> >>> put objects into notifications we're coupling the notification
> >> >>> versions to db or RPC changes unnecessarily.  Some times they'll
> >> >>> move together but other times, like moving flavor into
> >> >>> instance_extra, there's no reason to bump notifications.
> >> >>
> >> >>
> >> >> Sanitizing existing versioned objects before putting them to the
> >> >> wire is not hard to do.
> >> >> You can see an example of doing it in
> >> >> https://review.openstack.org/#/c/245678/8/nova/objects/service.py,
> >> >> cm
> >> >> L382.
> >> >> We don't need extra effort to take care of minor version bumps
> >> >> because that does not break a well written consumer. We do have to
> >> >> take care of the major version bumps but that is a rare event and
> >> >> therefore can be handled one by one in a way John suggested, by
> >> >> keep sending the previous major version for a while too.
> >> >
> >> > That review is doing much of what I was suggesting.  There is a
> >> > separate notification and payload object.  The issue I have is that
> >> > within the ServiceStatusPayload the raw Service object and version
> >> > is being dumped, with the filter you point out.  But I don't think
> >> > that consumers really care about tracking Service object versions
> >> > and dealing with compatibility there, it would be easier for them
> >> > to track the ServiceStatusPayload version which can remain
> >> > relatively stable even if Service is changing to adapt to db/RPC changes.
> >> Not only do they not really care about tracking the Service object
> >> versions, they probably also don't care about what's in that filter list.
> >>
> >> But I think you're getting on the right track as to where this needs
> >> to go. We can integrate the filtering into the versioning of the payload.
> >> But instead of a blacklist, we turn the filter into a white list. If
> >> the underlying object adds a new field that we don't want/care if
> >> people know about, the payload version doesn't have to change. But if
> >> we add something (or if we're changing the existing fields) that we
> >> want to expose, we then assert that we need to update the version of
> >> the payload, so the consumer can look at the payload and say "oh, in
> >> 1.x, now I get ___" and can add the appropriate checks/compat.
> >> Granted with this you can get into rebase nightmares ([1] still
> >> haunts me in my sleep), but I don't see us frantically changing the
> >> exposed fields all too often. This way 

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread John Garbutt
On 24 November 2015 at 15:00, Balázs Gibizer
 wrote:
>> From: Andrew Laski [mailto:and...@lascii.com]
>> Sent: November 24, 2015 15:35
>> On 11/24/15 at 10:26am, Balázs Gibizer wrote:
>> >> From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
>> >> Sent: November 23, 2015 22:33
>> >> On 11/23/2015 2:23 PM, Andrew Laski wrote:
>> >> > On 11/23/15 at 04:43pm, Balázs Gibizer wrote:
>> >> >>> From: Andrew Laski [mailto:and...@lascii.com]
>> >> >>> Sent: November 23, 2015 17:03
>> >> >>>
>> >> >>> On 11/23/15 at 08:54am, Ryan Rossiter wrote:
>> >> >>> >
>> >> >>> >
>> >> >>> >On 11/23/2015 5:33 AM, John Garbutt wrote:
>> >> >>> >>On 20 November 2015 at 09:37, Balázs Gibizer
>> >> >>> >> wrote:
>> >> >>> >>>
>> >> >>> >>>
>> >> >> >>
>> >> >>> >>There is a bit I am conflicted/worried about, and thats when we
>> >> >>> >>start including verbatim, DB objects into the notifications. At
>> >> >>> >>least you can now quickly detect if that blob is something
>> >> >>> >>compatible with your current parsing code. My preference is
>> >> >>> >>really to keep the Notifications as a totally separate object
>> >> >>> >>tree, but I am sure there are many cases where that ends up
>> >> >>> >>being seemingly stupid duplicate work. I am not expressing this
>> >> >>> >>well in text form :(
>> >> >>> >Are you saying we don't want to be willy-nilly tossing DB
>> >> >>> >objects across the wire? Yeah that was part of the rug-pulling
>> >> >>> >of just having the payload contain an object. We're
>> >> >>> >automatically tossing everything with the object then, whether
>> >> >>> >or not some of that was supposed to be a secret. We could add
>> >> >>> >some sort of property to the field like
>> >> >>> >dont_put_me_on_the_wire=True (or I guess a
>> >> >>> >notification_ready() function that helps an object sanitize
>> >> >>> >itself?) that the notifications will look at to know if it puts
>> >> >>> >that on the wire-serialized dict, but that's adding a lot more
>> >> >>> >complexity and work to a pile that's already growing rapidly.
>> >> >>>
>> >> >>> I don't want to be tossing db objects across the wire.  But I
>> >> >>> also am not convinced that we should be tossing the current
>> >> >>> objects over the wire either.
>> >> >>> You make the point that there may be things in the object that
>> >> >>> shouldn't be exposed, and I think object version bumps is another
>> >> >>> thing to watch out for.
>> >> >>> So far the only object that has been bumped is Instance but in
>> >> >>> doing so no notifications needed to change.  I think if we just
>> >> >>> put objects into notifications we're coupling the notification
>> >> >>> versions to db or RPC changes unnecessarily.  Some times they'll
>> >> >>> move together but other times, like moving flavor into
>> >> >>> instance_extra, there's no reason to bump notifications.
>> >> >>
>> >> >>
>> >> >> Sanitizing existing versioned objects before putting them to the
>> >> >> wire is not hard to do.
>> >> >> You can see an example of doing it in
>> >> >> https://review.openstack.org/#/c/245678/8/nova/objects/service.py,
>> >> >> cm
>> >> >> L382.
>> >> >> We don't need extra effort to take care of minor version bumps
>> >> >> because that does not break a well written consumer. We do have to
>> >> >> take care of the major version bumps but that is a rare event and
>> >> >> therefore can be handled one by one in a way John suggested, by
>> >> >> keep sending the previous major version for a while too.
>> >> >
>> >> > That review is doing much of what I was suggesting.  There is a
>> >> > separate notification and payload object.  The issue I have is that
>> >> > within the ServiceStatusPayload the raw Service object and version
>> >> > is being dumped, with the filter you point out.  But I don't think
>> >> > that consumers really care about tracking Service object versions
>> >> > and dealing with compatibility there, it would be easier for them
>> >> > to track the ServiceStatusPayload version which can remain
>> >> > relatively stable even if Service is changing to adapt to db/RPC 
>> >> > changes.
>> >> Not only do they not really care about tracking the Service object
>> >> versions, they probably also don't care about what's in that filter list.
>> >>
>> >> But I think you're getting on the right track as to where this needs
>> >> to go. We can integrate the filtering into the versioning of the payload.
>> >> But instead of a blacklist, we turn the filter into a white list. If
>> >> the underlying object adds a new field that we don't want/care if
>> >> people know about, the payload version doesn't have to change. But if
>> >> we add something (or if we're changing the existing fields) that we
>> >> want to expose, we then assert that we need to update the version of
>> >> the payload, so the consumer can look at the payload and say "oh, in
>> >> 1.x, now I get ___" and can add the appropriate checks/compat.
>> >> 
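
A minimal sketch of that whitelist idea, in plain Python rather than the actual
nova objects code (the payload fields, helper name and version shown here are
illustrative only, not taken from the review under discussion):

SERVICE_PAYLOAD_VERSION = '1.0'

# Fields we explicitly choose to expose; a new field on the underlying
# object stays hidden until it is added here, and adding it here is what
# forces a payload version bump.
SERVICE_PAYLOAD_FIELDS = ('host', 'binary', 'topic', 'disabled',
                          'disabled_reason', 'forced_down')


def build_service_status_payload(service):
    # Only the whitelisted fields go on the wire, decoupling the payload
    # version from db/RPC-driven changes to the underlying object.
    payload = dict((name, getattr(service, name))
                   for name in SERVICE_PAYLOAD_FIELDS
                   if hasattr(service, name))
    payload['payload_version'] = SERVICE_PAYLOAD_VERSION
    return payload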

Re: [openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-24 Thread Sebastien Han
Hi Ramana,

I’ll resurrect the infra patch and put the project under the right namespace.
There is no plugin at the moment.
I’ve figured out that this is quite urgent and we need to solve this asap since 
devstack-ceph is used by the gate as well :-/

I don’t think there are many changes to make on the plugin itself.
Let’s see if we can make all of this happen before Mitaka-1… I highly doubt it, but
we’ll see…

> On 24 Nov 2015, at 15:31, Ramana Raja  wrote:
> 
> Hi,
> 
> I was trying to figure out the state of DevStack plugin
> for Ceph, but couldn't find its source code and ran into
> the following doubt. At Mitaka 1, i.e. next week, wouldn't
> Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
> that still use extras.d's hook script instead of a plugin, stop working?
> For reference,
> https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
> [Deepak, thanks for reminding me about the deprecation of extras.d.]
> 
> The patch that seeks to integrate Ceph DevStack plugin with Jenkins
> gates is under review,
> https://review.openstack.org/#/c/188768/
> It's outdated as the devstack-ceph-plugin it seeks to integrate
> seems to be in the now obsolete namespace, 'stackforge/', and hasn't seen
> activity for quite some time.
> 
> Even if I'm mistaken about all of this can someone please point me to
> the Ceph DevStack plugin's source code? I'm interested to know whether
> the plugin would be identical to the current Ceph hook script,
> extras.d/60-ceph.sh ?
> 
> Thanks,
> Ramana
> 
> 
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][infra][bugs] Grafana Dashboard for Bugs

2015-11-24 Thread Markus Zoeller
Background
==========
We have "grafana" for data visualization [1] and I'd like to introduce
a dashboard which shows data from our bug tracker Launchpad. Based on
ttx's code [2] for the "bugsquashing day stats" [3] I created a PoC 
(screenshot at [4]). The code which collects the data from Launchpad 
and pushes it into a statsd daemon is available at [5]*. Thanks to 
jeblair who showed me all the necessary pieces. I have chosen the
following namespace hierarchy for the metrics for statsd:

Metrics
  |- stats
      |- gauges
          |- launchpad
              |- bugs
                  |- nova
                      |- new-by-tag
                      |- not-in-progress-by-importance
                      |- open-by-importance
                      |- open-by-status

The two reasons I've chosen it this way are:
1) specify "launchpad" in case we will have multiple issue trackers
   at the same time and want to distinguish between them
2) specify "nova" to separate the OpenStack projects from each other

The code [5] I've written doesn't care about project specifics and can 
be used for the other projects (Neutron, Cinder, Glance, ...) as well
without any changes. Only the "config.js" file has to be changed if
a project wants to opt in.
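
To illustrate the kind of calls involved (this is not the actual code at [5];
the host/port and counts below are made-up examples using the common "statsd"
Python package):

import statsd

# statsd aggregates gauges under stats.gauges.<name>, which is where the
# launchpad.bugs.nova.* hierarchy above ends up.
client = statsd.StatsClient('localhost', 8125)

# Hypothetical counts gathered from Launchpad.
open_by_importance = {'critical': 3, 'high': 17, 'medium': 42}

for importance, count in open_by_importance.items():
    client.gauge('launchpad.bugs.nova.open-by-importance.%s' % importance,
                 count)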

Open points
===========
* Any feedback on whether the data [4] I've chosen would be helpful to you?
* Which OpenStack project has the right scope for the code [5]?
* I still have to create a grafyaml [6] file for that. I've built the
  PoC dashboard with grafana's GUI.
* I haven't yet run the code for the novaclient project, that's why
  there is a "N/A" in the screenshot.
* I would need an infra-contact who would help me to have this script
  executed repeatedly at a (TBD) interval (30 mins?).

References
==========
[1] http://grafana.openstack.org/
[2] 
http://git.openstack.org/cgit/openstack-infra/bugdaystats/tree/bugdaystats.py
[3] http://status.openstack.org/bugday/
[4] Screenshot of the PoC nova bugs dashboard (expires on 2015-12-20):
http://www.tiikoni.com/tis/view/?id=7f3f191
[5] https://gist.github.com/anonymous/4368eb69059f11286fe9
[6] http://docs.openstack.org/infra/grafyaml/

Footnotes
=========
* you can set ``target="syso"`` to print the data to stdout without 
  the need to have a statsd daemon running

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-24 Thread Andreas Scheuring
Please have a look at your neutron server log and neutron agent log, and
provide this information if you have trouble interpreting the messages.

Probably you'll find the reason there!


-- 
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 12:17 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi,
> 
> 
> Am trying to set up an open stack (kilo) installation using ovs-dpdk
> through devstack installation. 
> 
> 
> I have followed the "
> https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
>  " documentation. 
> 
> 
> I used the same versions as in documentation (fedora21, with right
> kernel). 
> 
> 
> My openstack installation is successful in both controller and
> compute. 
> I have used example local.conf given in the documentation. 
> But if i try to spawn the VM. I am having the following error. 
> 
> 
> "NovaException: Unexpected vif_type=binding_failed" 
> 
> 
> 
> It would be really helpful if you can point out how to debug and fix
> this error. 
> 
> 
> Thanks
> Praveen
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-24 Thread Vladimir Kozhukalov
In fact, we (Dmitry and I) are on the same page about how to merge these two
features (CentOS 7 and Docker removal). We agreed that Dmitry's feature is
much more complicated and of higher priority. So, CentOS 7 should be merged
first and then I'll rebase my patches (mostly supervisor -> systemd).





Vladimir Kozhukalov

On Tue, Nov 24, 2015 at 1:57 AM, Igor Kalnitsky 
wrote:

> Hey Dmitry,
>
> Thank you for your effort. I believe it's a huge step forward that
> opens number of possibilities.
>
> > Every container runs systemd as PID 1 process instead of
> > supervisord or application / daemon.
>
> Taking into account that we're going to drop Docker containers, I
> think it was unnecessary complication of your work.
>
> Please sync-up with Vladimir Kozhukalov, he's working on getting rid
> of containers.
>
> > Every service inside a container is a systemd unit. Container build
> > procedure was modified, scripts setup.sh and start.sh were introduced
> > to be running during building and configuring phases respectively.
>
> Ditto. :)
>
> Thanks,
> Igor
>
> P.S: I wrote the mail and forgot to press "send" button. It looks like
> Oleg is already pointed out that I wanted to.
>
> On Mon, Nov 23, 2015 at 2:37 PM, Oleg Gelbukh 
> wrote:
> > Please, take into account the plan to drop the containerization of Fuel
> > services:
> >
> > https://review.openstack.org/#/c/248814/
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Tue, Nov 24, 2015 at 12:25 AM, Dmitry Teselkin <
> dtesel...@mirantis.com>
> > wrote:
> >>
> >> Hello,
> >>
> >> We've been working for some time on bringing CentOS-7 to master node,
> >> and now is the time to share and discuss the transition plan.
> >>
> >> First of all, what have been changed:
> >> * Master node itself runs on CentOS-7. Since all the containers share
> >>   the same repo as master node they all have been migrated to CentOS-7
> >>   too. Every container runs systemd as PID 1 process instead of
> >>   supervisord or application / daemon.
> >> * Every service inside a container is a systemd unit. Container build
> >>   procedure was modified, scripts setup.sh and start.sh were introduced
> >>   to be running during building and configuring phases respectively.
> >>   The main reason for this was the fact that many puppet manifests use
> >>   service management commands that require systemd daemon running. This
> >>   also allowed to simplify Dockerfiles by removing all actions to
> >>   setup.sh file.
> >> * We managed to find some bugs in various parts that were fixed too.
> >> * Bootstrap image is also CentOS-7 based. It was updated to better
> >>   support it - some services converted to systemd units and fixes to
> >>   support new network naming schema were made.
> >> * ISO build procedure was updated to reflect changes in CentOS-7
> >>   distribution and to support changes in docker build procedure.
> >> * Many applications was updated (puppet, docker, openstack
> >>   components).
> >> * Docker containers moved to LVM volume to improve performance and get
> >>   rid of annoying warning messages during master node deployment.
> >>   bootstrap_admin_node.sh script was updated to fix some deployment
> >>   issues (e.g. dracut behavior when there are multiple network
> >>   interfaces available) and simplified by removing outdated
> >>   functionality. It was also converted to a "run once" logon script
> >>   instead of being run as a service, primarily because of a way it's
> >>   used.
> >>
> >> As you can see there are a lot of changes were made. Some of them might
> >> be merged into current master if surrounded by conditionals to be
> >> compatible with current master node, but some of them simply can't.
> >>
> >> To simplify the code review process we've split the CRs that we were
> >> using during active development into a set of smaller CRs and assigned
> >> the same topic centos7-master-nod to all of them [0].
> >>
> >> So, here is the plan:
> >> * We will put a mark 'Breaks' in every commit message indicating if the
> >>   CR is compatible with current master node. E.g. 'Breaks: centos-6'
> >>   means it can't be merged without breaking things, but 'Breaks:
> >>   nothing' means it OK to merge.
> >> * All the CRs should be reviewed, regardless of their 'breaks' label,
> >>   and voted. We will not merge breaking CRs accidentally, only those
> >>   that are safe will be merged.
> >> * While code review is in progress we will work on passing our custom
> >>   ISO BVT and scale lab tests. When these tests pass - we will run
> >>   swarm on top of this custom ISO.
> >> * In the meantime our QA infrastructure will be updated to support
> >>   CentOS-7 master node - it should be compatible in most cases,
> >>   however, there are some places that are not. We plan to make changes
> >>   compatible with current ISO.
> >> * As soon as ISO becomes good enough we should take a deep breath and
> >>   turn the switch by merging 

[openstack-dev] [all][oslo] qpidd driver has been removed from oslo.messaging. RIP qpidd driver

2015-11-24 Thread Flavio Percoco

Greetings,

The qpidd driver had been marked as deprecated in Liberty [0] and the
planned removal has now been done [1].

This change removes the *qpidd* driver, which is different from the
amqp1 one, which depends on `python-qpid-proton`.

If there are questions, please, don't hesitate to ask.
Flavio

[0] https://review.openstack.org/#/c/193804/
[1] https://review.openstack.org/#/c/241420/

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hyper-v] oslo.privsep vs Windows

2015-11-24 Thread Claudiu Belu
Hello,

Thanks Dims for raising the concern and Angus for reaching out. :)

Most of the time, Python development on Windows is not too far off from Linux.
But the two systems are quite different, which implies different modules (the
fcntl, pwd and grp modules do not exist on Windows), different implementations
of some modules (multiprocessing uses Popen instead of os.fork, the os module is
quite different), and socket options and signals that differ on Windows.

1.a. As I've said earlier, some modules do not exist on Windows. All, or at
least most, of those standard modules document the fact that they are strictly
for Linux. [1][2][3]
b. At the very least, running the unit tests in a Windows environment can
detect simple problems (e.g. imports). Secondly, there is a Hyper-V /
Windows CI running on some of the OpenStack projects (nova, neutron,
networking_hyperv, cinder) that can be checked before merging.

2. This is a slightly more complicated question. Well, for functions, you could
have separate modules for Linux-specific functions and Windows-specific
functions. This has been done before: [4]. As for object-oriented
implementations, I'd suggest having the system-specific calls be done in private
methods, which will be overridden by Windows / Linux subclasses with their
specific implementations. We've done something like this before, but the
solutions were pretty much straightforward; it might not be as simple for
oslo_privsep, since it is very Linux-specific.
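
A minimal sketch of that pattern (the class and method names here are made up
for illustration, not taken from any existing driver):

import os
import stat


class BaseDiskUtils(object):
    # The public API stays common; the system-specific bits live in private
    # methods that each platform subclass overrides.
    def is_device(self, path):
        return self._is_device(path)

    def _is_device(self, path):
        raise NotImplementedError()


class LinuxDiskUtils(BaseDiskUtils):
    def _is_device(self, path):
        # POSIX notion of a block device
        return stat.S_ISBLK(os.stat(path).st_mode)


class WindowsDiskUtils(BaseDiskUtils):
    def _is_device(self, path):
        # On Windows raw devices look like \\.\PhysicalDrive0
        return path.startswith('\\\\.\\')


def get_disk_utils():
    # os.name == 'nt' (or sys.platform == 'win32') is the usual predicate.
    return WindowsDiskUtils() if os.name == 'nt' else LinuxDiskUtils()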

3. Typically, the OpenStack services on Hyper-V / Windows are run with users
that have enough privileges to do their job. For example, the nova-compute
service is run with a user that has Hyper-V Admin privileges and is not
necessarily in the "Administrator" user group. We haven't used rootwrap in our
use cases; it is disabled by default, and besides, oslo.rootwrap imports pwd [5],
which doesn't exist on Windows.

[1] https://docs.python.org/2/library/fcntl.html
[2] https://docs.python.org/2/library/pwd.html
[3] https://docs.python.org/2/library/grp.html
[4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/utils.py
[5] 
https://github.com/openstack/oslo.rootwrap/blob/master/oslo_rootwrap/wrapper.py#L19

If you have any further questions, feel free to ask. :)

Best regards,
Claudiu Belu



From: Angus Lees [g...@inodes.org]
Sent: Tuesday, November 24, 2015 6:18 AM
To: OpenStack Development Mailing List (not for usage questions); Claudiu Belu
Subject: [hyper-v] oslo.privsep vs Windows

Dims has just raised[1] the excellent concern that oslo.privsep will need to at 
least survive on Windows, because hyper-v.  I have no real experience coding on 
windows (I wrote a windows C program once, but I only ever ran it under wine ;) 
and certainly none within an OpenStack/python context:

1) How can I test whatever I'm working on to see if I have mistakenly 
introduced something Linux-specific?  Surely this is a challenge common across 
every project in the nova/oslo/hyper-v stack.

2) What predicate should I use to guard the inevitable Linux-specific or 
Windows-specific code branches?

and I guess:
3) What does a typical OpenStack/hyper-v install even look like? Do we run 
rootwrap with some sudo-like-thing, or just run everything as the superuser?
What _should_ oslo.privsep do for this environment?

[1] 
https://review.openstack.org/#/c/244984

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-24 Thread Praveen MANKARA RADHAKRISHNAN
Hi,

I am trying to set up an OpenStack (Kilo) installation using ovs-dpdk
through a devstack installation.

I have followed the "
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
" documentation.

I used the same versions as in documentation (fedora21, with right kernel).

My OpenStack installation is successful on both the controller and the compute node.
I have used example local.conf given in the documentation.
But when I try to spawn a VM, I get the following error.

"NovaException: Unexpected vif_type=binding_failed"

It would be really helpful if you can point out how to debug and fix this
error.

Thanks
Praveen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Duncan Thomas
On 24 November 2015 at 06:21, Adam Young  wrote:

>
> So, having been one of the initial architects of said policy, I'd like to
> reiterate what I felt at the time.  The policy is in place as much to
> protect the individual contributors as the project.  If I was put in a
> position where I had to review and approve a coworker's code changes, it is
> easier for me to push back on a belligerent manager to say "this
> violates project policy."
>
> But, even this is a more paranoid rationale than I feel now.  Each of us
> has a perspective based on our customer base.   People make decisions based
> on what they feel to be right, but right for a public cloud provider and
> right for an Enterprise Software vendor will be different.  Getting a
> change reviewed by someone outside your organization is for perspective.
> Treat it as a brake against group think.
>
>

I don't think cinder has ever formalised this policy, and I don't
necessarily think it should, but having it there as strong guidance is
definitely useful in order to push back against internal management
pressure when needed. It isn't a matter of trust, or even group think
(though that can definitely be a problem), but one of giving developers (in
this case cores) the tools they need to push back against pressure inside
their own companies.

In a similar vein, many well meaning and hard working engineers hit massive
problems trying to get resources for CI until we started removing drivers
from tree. Companies, particularly big ones, are slow moving, difficult to
steer behemoths at times, and giving cores the tools to protect themselves,
or devs more tools to help them get their job done, is definitely something
we need to keep in mind.

Personally, my biggest indicator of a 'forced' review is time - if
something has been open for two weeks with zero negative feedback, on an
uncontentious topic, then I really don't care too much who approves it. If
something lands in four hours during the European night on a topic that has
been bounced around a lot on IRC/email then I get more worried, regardless
of the organisation(s) of the cores who merged it. No amount of rules will
fix that though, only discussion and trust of cores. Giving them a rule to
stand behind when saying 'no' to their management chain is a great help
though.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] how to start heat through devstack

2015-11-24 Thread P. Ramanjaneya Reddy
Hi, how do I start Heat using devstack? I've added the services below to my localrc,
but it throws an error.

localrc:

ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
IMAGE_URLS+=",
http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
"

Error:
2015-11-24 11:19:28.716 | Error: Service h-api-cfn is not running
2015-11-24 11:19:28.717 | Error: Service h-api-cw is not running
2015-11-24 11:19:28.718 | Error: Service h-api is not running


Any input on how to make Heat work?

Thanks & Regards,
Ramanji.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-24 Thread Rossella Sblendido



On 11/23/2015 06:42 PM, Armando M. wrote:



On 23 November 2015 at 09:22, Carl Baldwin wrote:

On Mon, Nov 23, 2015 at 5:02 AM, Rossella Sblendido
> wrote:
> To cross-reference we can use the bug ID or the blueprint name.
>
> I created a script that queries launchpad to get:
> 1) Bug number of the bugs tagged with approved-rfe
> 2) Bug number of the critical/high bugs
> 3) list of blueprints targeted for the current milestone (mitaka-1)

Rossella, this is a great start!  In principle, I have wanted
something like this for a long time now.  The process of scraping
launchpad and gerrit manually for the most important stuff to review
is labor intensive and sometimes I just want to take a little bit of
extra time and review something.  It is very tempting to go straight
to gerrit and pick something.  I think it is very important for us to
finally crack this nut and have a dashboard which can help us easily
find the most important reviews.  I think your time on this is very
well spent and will positively affect the whole team.


Thanks a lot Carl! I really hope it will be useful :)



 > With this info the script builds a .dash file that can be used by
 > gerrit-dash-creator [2] to produce a dashboard url .
 >
 > The script prints also the queries that can be used in gerrit UI
directly,
 > e.g.:
 > Critical/High Bugs
 > (topic:bug/1399249 OR topic:bug/1399280 OR topic:bug/1443421 OR
 > topic:bug/1453350 OR topic:bug/1462154 OR topic:bug/1478100 OR
 > topic:bug/1490051 OR topic:bug/1491131 OR topic:bug/1498790 OR
 > topic:bug/1505575 OR topic:bug/1505843 OR topic:bug/1513678 OR
 > topic:bug/1513765 OR topic:bug/1514810)
 >
 > This is the dashboard I get right now [3]
 >
 > I tried in many ways to get Gerrit to filter patches if the
commit message
 > contains a bug ID. Something like:
 >
 > (message:"#1399249" OR message:"#1399280" OR message:"#1443421" OR
 > message:"#1453350" OR message:"#1462154" OR message:"#1478100" OR
 > message:"#1490051" OR message:"#1491131" OR message:"#1498790" OR
 > message:"#1505575" OR message:"#1505843" OR message:"#1513678" OR
 > message:"#1513765" OR message:"#1514810")
 >
 > but it doesn't work well, the result of the filter contains
patches that
 > have nothing to do with the bugs queried.
 > That's why I had to filter using the topic.
 >
 > CAVEAT: To make the dashboard work, bug fixes must use the topic
"bug/ID"
 > and patches implementing a blueprint the topic "bp/name". If a
patch is not
 > following this convention it won't be showed in the dashboard,
since the
 > topic is used as filter. Most of us use this convention already
anyway so I
 > hope it's not too much of a burden.
 >
 > Feedback is appreciated :)

I looked for the address scopes blueprint [1] which is targeted for
Mitaka-1 [2] and there are 6 (or 5, one is in the gate) patches on the
bp/address-scopes topic [3].  It isn't obvious to me yet why it didn't
get picked up on the dashboard.  I've only started to look into this
and may not have much time right now.  I wondered if you could easily
tell why it didn't get picked up.


Isn't it missing bp/ ? From the URL I can only see topic:address-scope,
which isn't the right one.


Yes Armando is right. I fixed that. Another reason is that I am 
filtering out patches that are WIP or that failed Jenkins tests. This 
can be changed anyway. This is what I get now (after fixing the missing 
'bp/') [1]
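
For reference, the WIP / failed-Jenkins filtering can also be expressed
directly in the gerrit query. A rough sketch of how such a query could be
assembled (the Workflow/Verified label names are assumptions about the gerrit
setup, not taken from the actual script):

def build_bug_query(bug_ids, skip_wip=True, skip_failed=True):
    # Gerrit topics follow the bug/ID convention mentioned above.
    topics = ' OR '.join('topic:bug/%s' % bug_id for bug_id in bug_ids)
    query = 'status:open (%s)' % topics
    if skip_wip:
        query += ' NOT label:Workflow=-1'
    if skip_failed:
        query += ' NOT label:Verified=-1'
    return query

print(build_bug_query([1399249, 1399280, 1443421]))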


cheers,

Rossella

[1] goo.gl/C7UjdD




Given that I only see one review in
the blueprints section, I suspect there are other blueprints which are
in the same situation.



[1] https://blueprints.launchpad.net/neutron/+spec/address-scopes
[2] https://launchpad.net/neutron/+milestone/mitaka-1
[3]
https://review.openstack.org/#/q/status:open+topic:bp/address-scopes,n,z

Thanks!
Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
