Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Gal Sagie
To me, and I could have got this wrong, the stadium means two main things (at
this point in time):

1) Remove/ease the burden of OpenStack governance and extra work for
projects/drivers that implement Neutron and are "relatively small".
This lets projects that just want to implement Neutron be managed
with the same infrastructure without having to deal
with a lot of extra stuff (that same extra stuff you are complaining
about, and I totally understand where you are coming from).

2) Be able to set a standard of "quality" (and this needs to be better
defined) for all the drivers that implement Neutron, and
also set a standard for the development process (specs, bugs, priorities,
CI, testing).

With this definition, it first means to me, as Russell suggested, that
Kuryr should be an independent project.
Regarding Dragonflow and Octavia I am not sure yet, but I lean toward the
same conclusion as Russell.

In order to solve some of the problems you mention, I suggest the following:

1) Define a set of responsibilities/guidelines for the sub-project
lieutenants in order to comply with the "quality" standard.
If they fail to do so, with no good explanation, for X cycles, the
project should be removed from the stadium.

2) As suggested, delegate, and increase the size of the team responsible for
verifying and helping these projects with the extra work.
I am sure there are people willing to volunteer and help with these
tasks, and trial periods could be applied to address trust issues.
I believe we all want to see Neutron and OpenStack succeed.

I don't see how just moving this work to the TC or any other centralized
group in OpenStack is going to help; I think we
want to strive to group common work under parent projects, especially in this
case (in my opinion, anyway).

I think this can be very handy when we eventually want our processes (at least
in the Neutron world) to be similar and complementary.

Just the way I see things right now.

Gal.




On Tue, Dec 1, 2015 at 9:10 AM, Armando M.  wrote:

>
>
> On 30 November 2015 at 20:11, Russell Bryant  wrote:
>
>> Some additional context: there are a few proposals for additional git
>> repositories for Neutron that have been put on hold while we sort this
>> out.
>>
>> Add networking-bagpipe:
>>   https://review.openstack.org/#/c/244736/
>>
>> Add the Astara driver:
>>   https://review.openstack.org/#/c/230699/
>>
>> Add tap-as-a-service:
>>   https://review.openstack.org/#/c/229869/
>>
>> On 11/30/2015 07:56 PM, Armando M. wrote:
>> > I would like to suggest that we evolve the structure of the Neutron
>> > governance, so that most of the deliverables that are now part of the
>> > Neutron stadium become standalone projects that are entirely
>> > self-governed (they have their own core/release teams, etc). In order to
>> > denote the initiatives that are related to Neutron I would like to
>> > present two new tags that projects can choose to label themselves with:
>> >
>> >   * 'is-neutron-subsystem': this means that the project provides
>> > networking services by implementing an integral part (or parts) of
>> > an end-to-end neutron system. Examples are: a service plugin, an ML2
>> > mech driver, a monolithic plugin, an agent etc. It's something an
>> > admin has to use in order to deploy Neutron in a certain
>> configuration.
>> >   * 'use-neutron-system': this means that the project provides
>> > networking services by using a pre-deployed end-to-end neutron
>> > system as is. No modifications whatsoever.
>>
>> I just want to clarify the proposal.  IIUC, you propose splitting most
>> of what is currently separately deliverables of the Neutron team and
>> making them separate projects in terms of OpenStack governance.  When I
>> originally proposed including networking-ovn under Neutron (and more
>> generally, making room for all drivers to be included), making them
>> separate projects was one of the options on the table, but it didn't
>> seem best at the time.  For reference, that thread was here:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062310.html
>>
>> When I was originally proposing this, I was only thinking about Neutron
>> drivers, the stuff that connects Neutron to some other system to make
>> Neutron do something.  The list has grown to include other things, as
>> well.
>>
>> I'm not sure where you propose the line to be, but for the sake of
>> discussion, let's assume every deliverable in the governance definition
>> for Neutron is under consideration for being split out with the
>> exception of neutron, neutron-specs, and python-neutronclient.  The
>> remaining deliverables are:
>>
>> dragonflow:
>> kuryr:
>> networking-ale-omniswitch:
>> networking-arista:
>> networking-bgpvpn:
>> networking-calico:
>> networking-cisco:
>> networking-fortinet:
>> networking-hpe:
>> networking-hyperv:
>> networking-infoblox:
>> networking-fujitsu:
>> networking-l2gw:
>> networking-lenovo:
>>

[openstack-dev] [ironic] Install Time Too Long Ironic in devstack

2015-11-30 Thread Zhi Chang
Hi all,
I want to install Ironic in my devstack following the document 
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html. During the 
install process, my console displays:
2015-12-01 07:08:44.390 | + PACKAGES=
2015-12-01 07:08:44.391 | ++ find /tmp/in_target.d/install.d -maxdepth 1 -name 
'package-installs-*'
2015-12-01 07:08:44.393 | + '[' -n '' ']'
2015-12-01 07:08:44.393 | + package-installs-v2 --phase install.d 
/tmp/package-installs.json
2015-12-01 07:08:44.461 | Map file for ubuntu element does not exist.
2015-12-01 07:08:44.492 | Map file for ubuntu element does not exist.
2015-12-01 07:08:44.526 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.558 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.595 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.633 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.668 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.703 | Map file for deploy-ironic element does not exist.
2015-12-01 07:08:44.815 | Map file for deploy-tgtadm element does not exist.
2015-12-01 07:08:44.857 | Map file for deploy-tgtadm element does not exist.



I have been waiting a very long time; is this normal? My devstack's local.conf 
is at: http://paste.openstack.org/show/480462/
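
For comparison, the dev-quickstart of this era suggests local.conf settings
along these lines (a paraphrased sketch, not my exact file; the values are
just examples):

    [[local|localrc]]
    # enable the Ironic services in devstack
    enable_service ironic ir-api ir-cond
    # have Nova manage the (simulated) bare metal nodes through Ironic
    VIRT_DRIVER=ironic
    IRONIC_VM_COUNT=1
    IRONIC_VM_SPECS_RAM=1024
    IRONIC_BAREMETAL_BASIC_OPS=True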


Could someone help me?


Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-11-30 Thread Mike Scherbakov
Aleksey, can you clarify? Why can't it be deployed? From what I see
in our fakeUI install [1], the wizard allows choosing nova-network only
if you choose vCenter.

Do we support Neutron for vCenter already? If so, we could safely remove
nova-network altogether.

[1] http://demo.fuel-infra.org:8000/

On Mon, Nov 30, 2015 at 4:27 AM Aleksey Kasatkin 
wrote:

> This remains unclear.
> Right now, for 8.0, an environment with Nova-Network can be created but cannot
> be deployed (and its creation is tested in the UI integration tests).
> AFAIC, we should either remove the ability to create environments
> with Nova-Network in 8.0 or return it to a working state.
>
>
> Aleksey Kasatkin
>
>
> On Fri, Oct 23, 2015 at 3:42 PM, Sheena Gregson 
> wrote:
>
>> As a reminder: there are no individual networking options that can be
>> used with both vCenter and KVM/QEMU hypervisors once we deprecate
>> nova-network.
>>
>>
>>
>> The code for vCenter as a stand-alone deployment may be there, but the
>> code for the component registry (
>> https://blueprints.launchpad.net/fuel/+spec/component-registry) is still
>> not complete.  The component registry is required for a multi-HV
>> environment, because it provides compatibility information for Networking
>> and HVs.  In theory, landing this feature will enable us to configure DVS +
>> vCenter and Neutron with GRE/VxLAN + KVM/QEMU in the same environment.
>>
>>
>>
>> While Andriy Popyvich has made considerable progress on this story, I
>> personally feel very strongly against deprecating nova-network until we
>> have confirmed that we can support *all current use cases* with the
>> available code base.
>>
>>
>>
>> Are we willing to lose the multi-HV functionality if something prevents
>> the component registry work from landing in its entirety before the next
>> release?
>>
>>
>>
>> *From:* Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
>> *Sent:* Friday, October 23, 2015 6:30 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [Fuel] Remove nova-network as a
>> deployment option in Fuel?
>>
>>
>>
>> Hi,
>>
>>
>>
>> As far as I know, the Neutron code for vCenter is ready. The team is still
>> testing it. Please be patient... There will be an announcement soon.
>>
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Feature Freeze is soon

2015-11-30 Thread Mike Scherbakov
Hi Fuelers,
we are a couple of days away from FF [1]. I have not noticed any requests for
feature freeze exceptions, so I assume that we have pretty much decided what is
going into 8.0 and what is not.

If there are items we'd like to request an exception for, I'd like us to
have them requested now, so that we can all spend some time analyzing
what is done and what is left, and assessing the risks. I'd suggest not
considering any exception requests on the day of FF, as that doesn't leave us
time to evaluate them.

To make a formal checkpoint of what is in and what is out, I suggest we get
together on FF day, Wednesday, and go over all the items we have been
working on in 8.0. What do you think, folks? For instance, in #fuel-dev IRC
at 8am PST (4pm UTC)?

[1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Project approved specs

2015-11-30 Thread Andreas Jaeger

On 2015-12-01 07:41, Steve Martinelli wrote:

Actually, it looks like I spoke too soon. I guess only integrated and
incubated projects get published to specs.openstack.org, my bad. Source:
https://github.com/openstack-infra/project-config/blob/bc32ea6f2133a95b38b21a7e08b92b0b6d843478/zuul/layout.yaml#L537-L555



That's correct: only teams mentioned in governance publish on
specs.openstack.org.


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Armando M.
On 30 November 2015 at 20:11, Russell Bryant  wrote:

> Some additional context: there are a few proposals for additional git
> repositories for Neutron that have been put on hold while we sort this out.
>
> Add networking-bagpipe:
>   https://review.openstack.org/#/c/244736/
>
> Add the Astara driver:
>   https://review.openstack.org/#/c/230699/
>
> Add tap-as-a-service:
>   https://review.openstack.org/#/c/229869/
>
> On 11/30/2015 07:56 PM, Armando M. wrote:
> > I would like to suggest that we evolve the structure of the Neutron
> > governance, so that most of the deliverables that are now part of the
> > Neutron stadium become standalone projects that are entirely
> > self-governed (they have their own core/release teams, etc). In order to
> > denote the initiatives that are related to Neutron I would like to
> > present two new tags that projects can choose to label themselves with:
> >
> >   * 'is-neutron-subsystem': this means that the project provides
> > networking services by implementing an integral part (or parts) of
> > an end-to-end neutron system. Examples are: a service plugin, an ML2
> > mech driver, a monolithic plugin, an agent etc. It's something an
> > admin has to use in order to deploy Neutron in a certain
> configuration.
> >   * 'use-neutron-system': this means that the project provides
> > networking services by using a pre-deployed end-to-end neutron
> > system as is. No modifications whatsoever.
>
> I just want to clarify the proposal.  IIUC, you propose splitting most
> of what is currently separately deliverables of the Neutron team and
> making them separate projects in terms of OpenStack governance.  When I
> originally proposed including networking-ovn under Neutron (and more
> generally, making room for all drivers to be included), making them
> separate projects was one of the options on the table, but it didn't
> seem best at the time.  For reference, that thread was here:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062310.html
>
> When I was originally proposing this, I was only thinking about Neutron
> drivers, the stuff that connects Neutron to some other system to make
> Neutron do something.  The list has grown to include other things, as well.
>
> I'm not sure where you propose the line to be, but for the sake of
> discussion, let's assume every deliverable in the governance definition
> for Neutron is under consideration for being split out with the
> exception of neutron, neutron-specs, and python-neutronclient.  The
> remaining deliverables are:
>
> dragonflow:
> kuryr:
> networking-ale-omniswitch:
> networking-arista:
> networking-bgpvpn:
> networking-calico:
> networking-cisco:
> networking-fortinet:
> networking-hpe:
> networking-hyperv:
> networking-infoblox:
> networking-fujitsu:
> networking-l2gw:
> networking-lenovo:
> networking-midonet:
> networking-odl:
> networking-ofagent:
> networking-onos:
> networking-ovn:
> networking-plumgrid:
> networking-powervm:
> networking-sfc:
> networking-vsphere:
> octavia:
> python-neutron-pd-driver:
> vmware-nsx:
>
> I think it's helpful to break these into categories, because the answer
> may be different for each group.  Here's my attempt at breaking this
> list into some categories:
>
> 1) A consumer of Neutron
>
> kuryr
>
> IIUC, kuryr is a consumer of Neutron.  Its interaction with Neutron is
> via using Neutron's REST APIs.  You could think of kuryr's use of
> Neutron as architecturally similar to how Nova uses Neutron.
>
> I think this project makes a ton of sense to become independent.
>
> 2) Implementation of a networking technology
>
> dragonflow
>
> The dragonflow repo includes a couple of things.  It includes dragonflow
> itself, and the Neutron driver to connect to it.  Using Astara as an
> example to follow, dragonflow itself could be an independent project.
>
> Following that, the built-in ML2/ovs or ML2/lb control plane could be
> separate, too, though that's much more painful and complex in practice.
>
> octavia
>
> Octavia also seems to fall into this category, just for LBaaS.  It's not
> just a driver, it's a LBaaS service VM orchestrator (which is in part
> what Astara is, too).
>
> It seems reasonable to propose these as independent projects.
>
> 3) New APIs
>
> There are some repos that are implementing new REST APIs for Neutron.
> They're independent enough to need their own driver layer, but coupled
> with Neutron enough to still need to run inside of Neutron as they can't
> do everything they need to do by only interfacing with Neutron REST APIs
> (today, at least).
>
> networking-l2gw:
> networking-sfc:
>
> Here things start to get less clear to me.  Unless the only interaction
> with Neutron is via its REST API, then it seems like it should be part
> of Neutron.  Put another way, if the API runs as a part of the
> neutron-ser

Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-11-30 Thread Steve Martinelli
Trying to summarize here...

  - There isn't much interest in keeping eventlet around.
  - Folks are OK with running keystone in a WSGI server, but feel they are
constrained by Apache.
  - uWSGI could help support multiple web servers.

My opinion:

  - Adding support for uWSGI definitely sounds like it's worth
investigating, but not achievable in this release (unless someone already
has something cooked up).
  - I'm tempted to let eventlet stick around for another release, since its
removal is causing pain for some of our operators.
  - Other folks have managed to run keystone in a web server (and hopefully
without feeling pain when doing so!), so it might be worth getting technical
details on just how it was accomplished. If we get an OK from the operator
community later on in Mitaka, I'd still be OK with removing eventlet, but I
don't want to break folks.

stevemar

From:   John Dewey 
100% agree.

We should look at uWSGI as the reference architecture.  Nginx/Apache/etc.
should be interchangeable, and it should be up to the operator which they
choose to use. Hell, with TCP load balancing now in open-source Nginx, I could
get rid of Apache and HAProxy by utilizing uWSGI.
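
For illustration, a minimal uWSGI config for keystone might look something
like this (a sketch only; the wsgi-file path and virtualenv location are
assumptions that vary by install):

    [uwsgi]
    # keystone's public-API WSGI script (hypothetical install path)
    wsgi-file = /usr/local/bin/keystone-wsgi-public
    http-socket = :5000
    master = true
    processes = 4
    threads = 2
    # a per-service virtualenv, so services never have to share one
    # interpreter the way they do under a single Apache/mod_wsgi
    virtualenv = /opt/venvs/keystone

Nginx or Apache would then just proxy to it (e.g. via uwsgi_pass in Nginx),
keeping the choice of web server with the operator.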

John


On November 30, 2015 at 1:05:26 PM, Paul Czarkowski (pczarkowski
+openstack...@bluebox.net) wrote:


  I don't have a problem with eventlet itself going away, but I do feel
  that keystone should pick a Python-based web server capable of
  running WSGI apps (such as uWSGI) for the reference implementation,
  rather than Apache, so that it can be declared appropriately in the
  project's requirements.txt.  I feel it is important to allow
  the operator to make choices based on their organization's skill sets
  (i.e. Apache vs. Nginx) to help keep complexity low.

  I understand there are some newer features that rely on Apache
  (federation, etc.), but we should let the need for those features
  inform the operator's choice of web server rather than force it on
  everybody.

  Having a default implementation using uWSGI is also more in line with
  the 12-factor way of writing applications, and will run a lot more
  comfortably in [application] containers than Apache would, which is
  probably an important consideration given how many people are focused
  on being able to run OpenStack projects inside containers.

  On Mon, Nov 30, 2015 at 2:36 PM, Jesse Keating 
  wrote:
I have an objection to eventlet going away. We have problems with
running Apache and mod_wsgi with multiple python virtual
environments. In some of our stacks we're running both Horizon and
Keystone. Each get their own virtual environment. Apache mod_wsgi
doesn't really work that way, so we'd have to do some ugly hacks to
expose the python environments of both to Apache at the same time.

I believe we spoke about this at Summit. Have you had time to look
into this scenario and have suggestions?


- jlk

On Mon, Nov 30, 2015 at 10:26 AM, Steve Martinelli <
steve...@ca.ibm.com> wrote:
This post is being sent again to the operators mailing list, and I
apologize if it's duplicated for some folks. The original thread
 is here:
 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080816.html


 In the Mitaka release, the keystone team will be removing
 functionality that was marked for deprecation in Kilo, and marking
certain functions as deprecated in Mitaka (which may be removed after
at least two cycles).

 removing deprecated functionality
 =

 This is not a full list, but these are by and large the most
 contentious topics.

 * Eventlet support: This was marked as deprecated back in Kilo and
 is currently scheduled to be removed in Mitaka in favor of running
 keystone in a WSGI server. This is currently how we test keystone
 in the gate, and based on the feedback we received at the summit,
 a lot of folks have moved to running keystone under Apache since
we announced this change. OpenStack's CI is configured to
 mainly test using this deployment model. See [0] for when we
 started to issue warnings.

 * Using LDAP to store assignment data: Like eventlet support, this
 feature was also deprecated in Kilo and scheduled to be removed in
 Mitaka. To store assignment data (role assignments) we suggest
 using an SQL based backend rather than LDAP. See [1] for when we
 started to issue warnings.

 * Using LDAP to store project and domain data: The same as above,
 see [2] for when we started to issue warnings.

 * for a complete list:
 https://blueprints.launchpad.net/keystone/+spec/removed-as-of-mitaka


 functions deprecated as of mitaka
 =

 

Re: [openstack-dev] [openstack-infra] Project approved specs

2015-11-30 Thread GROSZ, Maty (Maty)
Thanks

From: Steve Martinelli
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 1 December 2015 at 08:34
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [openstack-infra] Project approved specs


you probably want to propose a change to this file: 
https://github.com/openstack-infra/project-config/blob/master/specs/specs.yaml

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead


From: "GROSZ, Maty (Maty)" 
mailto:maty.gr...@alcatel-lucent.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 2015/12/01 01:21 AM
Subject: [openstack-dev] [openstack-infra] Project approved specs






Hey,

All vitrage specs are written and kept in the vitrage-specs repository.
According to the developers guide, all approved specs appear here: 
http://specs.openstack.org/
How do the approved specs get to appear in the link above?
Is there a writing convention that needs to be followed? Any configuration?

Thanks

Maty
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Next vitrage meeting

2015-11-30 Thread AFEK, Ifat (Ifat)
Hi,

The next Vitrage weekly meeting will be tomorrow, Wednesday, at 9:00 UTC, in 
the #openstack-meeting-3 channel.

Agenda:

* Current status and progress from last week
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Project approved specs

2015-11-30 Thread Steve Martinelli

Actually, it looks like I spoke too soon. I guess only integrated and
incubated projects get published to specs.openstack.org, my bad. Source:
https://github.com/openstack-infra/project-config/blob/bc32ea6f2133a95b38b21a7e08b92b0b6d843478/zuul/layout.yaml#L537-L555

stevemar



From:   Steve Martinelli/Toronto/IBM@IBMCA
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   2015/12/01 01:36 AM
Subject:Re: [openstack-dev] [openstack-infra] Project approved specs



you probably want to propose a change to this file:
https://github.com/openstack-infra/project-config/blob/master/specs/specs.yaml


Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead


From: "GROSZ, Maty (Maty)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 2015/12/01 01:21 AM
Subject: [openstack-dev] [openstack-infra] Project approved specs




Hey,

All vitrage specs are written and kept in the vitrage-specs repository.
According to the developers guide, all approved specs appear here:
http://specs.openstack.org/
How do the approved specs get to appear in the link above?
Is there a writing convention that needs to be followed? Any configuration?

Thanks

Maty
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Project approved specs

2015-11-30 Thread Steve Martinelli

you probably want to propose a change to this file:
https://github.com/openstack-infra/project-config/blob/master/specs/specs.yaml

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   "GROSZ, Maty (Maty)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   2015/12/01 01:21 AM
Subject:[openstack-dev] [openstack-infra] Project approved specs




Hey,

All vitrage specs are written and kept in the vitrage-specs repository.
According to the developers guide, all approved specs appear here:
http://specs.openstack.org/
How do the approved specs get to appear in the link above?
Is there a writing convention that needs to be followed? Any configuration?

Thanks

Maty
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Different projects authentication strategy

2015-11-30 Thread 1021710773
Dear developers,


Hello. I would like to ask some questions about policy rules.
Currently the policy rules of OpenStack, in Keystone and the other projects, are set in 
policy.json; in other words, the same policy rules apply to every project. The common 
way to enforce them is through decorator functions like protected(). Keystone manages 
the users, projects, roles, and other resources. Now, some particular projects (tenants) 
may have their own enforcement rules, different from the global policy.json. In that 
case, could we update the usual enforcement decorator to implement per-project 
authorization? Also, a policy model now appears in the Keystone project. Could we 
use it to create an association between projects and policies?
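
To make the question concrete, here is a rough sketch of what I have in mind
using oslo.policy (the per-project policy file layout is hypothetical; the
stock services load one global policy.json):

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF

    # one enforcer per project (tenant), each reading its own rules file
    # (hypothetical layout, not current keystone behavior)
    def get_enforcer_for_project(project_id):
        enforcer = policy.Enforcer(
            CONF, policy_file='/etc/keystone/policies/%s.json' % project_id)
        enforcer.load_rules()
        return enforcer

    def check(project_id, action, target, creds):
        # target: attributes of the resource being acted on
        # creds: the caller's credentials/context
        return get_enforcer_for_project(project_id).enforce(
            action, target, creds, do_raise=True)
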
Hope to hear from you. Thanks!




Weiwei Yang

yangwei...@cmss.chinamobile.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [glance] Proposal to add Abhishek to Glance core team

2015-11-30 Thread Nikhil Komawar
Hi,

As the re-vote requested on [1] seemed to conflict with the thread
title, I am __init__ing a new thread for the sake of clarity, closure,
and ease of voting.

Please provide feedback here on the proposal I made in [1].
Other reference links are [2] and [3].

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/thread.html#80279
[2]
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-10-01-14.01.log.html#l-70

[3] https://launchpad.net/~abhishek-kekane

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra] Project approved specs

2015-11-30 Thread GROSZ, Maty (Maty)

Hey,

All vitrage specs are written and kept in the vitrage-specs repository.
According to the developers guide, all approved specs appear here: 
http://specs.openstack.org/
How do the approved specs get to appear in the link above?
Is there a writing convention that needs to be followed? Any configuration?

Thanks

Maty
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bandit] [glance] [openstack-ansible] [searchlight] Resigning from core reviewers teams

2015-11-30 Thread Nikhil Komawar
Hi Ian,

Sorry to see you go on hiatus from these duties. I am glad to see you on
the channels every now and then; however, I do miss having your invaluable
input on all the important stuff. I hope you are considering
returning sooner rather than later, and that these projects give you the
motivation to do so.

While I fully support the decisions and choices you have made
regarding your status here, I am truly sorry that this
sort of thing has happened. I think we as a community should be able to
come up with ways that support disjoint efforts, asynchronous
conversations, and delegating lighter duties to those who have
less time upstream. Also, I hope that companies will find it in their
best interest to let great developers work more upstream.

On 10/28/15 8:30 PM, Kelsey, Timothy John wrote:
> On 28/10/2015 09:35, "Tripp, Travis S"  wrote:
>
>>
>> On 10/28/15, 11:43 AM, "Flavio Percoco"  wrote:
>>
>>> On 26/10/15 17:20 +, Ian Cordasco wrote:
 Hi everyone,


Today I'm removing myself from the core reviewer (and driver) teams
for the following projects:

- Bandit (bandit-core and transitively security-specs-core)

- Glance (glance-core and glance-specs-core)

- OpenStack Ansible (openstack-ansible-core)

- Searchlight (searchlight-core)

Recent events both in my position at Rackspace and my personal life
mean I no longer have sufficient time to properly act as a core
reviewer for these projects. My personal life has suffered from
attempts to continue to uphold my responsibilities to OpenStack and
the other open source projects I develop, maintain, and direct.
Changing responsibilities in my current role in the last 8-10 months
mean that I don't have sufficient time during the normal 0900-1700
period to accurately and thoroughly conduct reviews. I'm confident
this is only a temporary hiatus and that sometime next year, I will be
able to fully rejoin the OpenStack community.

>>> Ian,
>>>
>>> I can't stress enough how sorry I am to see you go from the team. Your
>>> reviews, comments and contributions have always been excellent and of
>>> huge help to our community.
>>>
>>> I look forward to that time when you'll be able to come back, and as a
>>> *member* of the community I'd be more than happy to have you back.
>>>
>>> Thanks for all the time you've spent on these projects, and for your
>>> honest and clear email on your situation and availability.
>>>
>>> Take good care and, please, come back :)
>>> Flavio
>>>
>>> -- 
>>> @flaper87
>>> Flavio Percoco
>>
>> Ian,
>>
>> Your core status on all of these projects is a true testament to your
>> abilities, dedication, and character, all of which we greatly
>> appreciate in having had you on the searchlight team, and why
>> you will always be welcome on our team. In the meantime, I fully
>> support you making decisions that are best for you and your family.
>> I hope that you will be able to find a balance which works for you.
>>
>>
>> Thank you for all that you have done and contributed!
>>
>> -Travis
> Thank you for your time and dedication, Ian; your input will be missed.
> Finding a good work/life balance is important, and I for one would hate
> to see you negatively affected by the time you have generously given to
> all the projects you're involved in. This is a good move, and I'm sure we
> will all look forward to your eventual return.
>
> Thanks again for your hard work.
>
> - Tim
>
> (IRC: tkelsey)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan

2015-11-30 Thread Bhandaru, Malini K
+1 on Sabari!  :-)

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Monday, November 23, 2015 12:21 PM
To: openstack-dev@lists.openstack.org
Cc: Sabari Kumar Murugesan 
Subject: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan 


Greetings,

I'd like to propose adding Sabari Kumar Murugesan to the glance-core team. 
Sabari has been contributing quite a bit to the project with great reviews, 
and he's also been providing great feedback in matters related to the design of 
the service, libraries, and other areas of the team.

I believe he'd be a great addition to the glance-core team, as he has 
demonstrated good knowledge of the code, the service, and the project's priorities.

If Sabari accepts, and there are no objections from other members of the 
community, I'll proceed to add him to the team a week from now.

Thanks,
Flavio

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan

2015-11-30 Thread Nikhil Komawar
Hi,

Great to see him (Sabari) finally being proposed after the new-cycle
settling wait. I agree he has always done good work and shows great
awareness and expertise on the store (data transfer) front. He is
definitely going to be a great core.

(For context) Like my previous shout-out [1] to (Sabari &) Abhishek [2],
I would like to see both of them on the core team. Abhishek has shown
good awareness of cross-project stuff and has provided some really good
reviews (gotchas, security oops, etc.), filed and resolved important
bugs, and helped the team with his knowledge of scarcely used glance (and
import) code.

I think the Glance team could use some extra reviewers, especially
across the timezones, with Fei Long being busier with his other
commitments. So, firstly, apologies for the late reply, and secondly, I
hope to see some more (re-)voting on this adjustment.

[1]
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-10-01-14.01.log.html#l-70
[2] https://launchpad.net/~abhishek-kekane


On 11/25/15 6:53 AM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Flavio Percoco [mailto:fla...@redhat.com]
>> Sent: Monday, November 23, 2015 8:21 PM
>> To: openstack-dev@lists.openstack.org
>> Cc: Sabari Kumar Murugesan
>> Subject: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan
>> 
>>
>> Greetings,
>>
>> I'd like to propose adding Sabari Kumar Murugesan to the glance-core team.
>> Sabari has been contributing quite a bit to the project with great 
>> reviews
>> and he's also been providing great feedback in matters related to the design
>> of the service, libraries and other areas of the team.
>>
>> I believe he'd be a great addition to the glance-core team as he has
>> demonstrated a good knowledge of the code, service and project's priorities.
>>
>> If Sabari accepts to join and there are no objections from other members of
>> the community, I'll proceed to add Sabari to the team in a week from now.
>>
>> Thanks,
>> Flavio
>>
>> --
>> @flaper87
>> Flavio Percoco
> +2
> - Erno
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Doug Wiegley

> On Nov 30, 2015, at 10:15 PM, Armando M.  wrote:
> 
> 
> 
> On 30 November 2015 at 20:47, Doug Wiegley  wrote:
> 
>> On Nov 30, 2015, at 5:56 PM, Armando M.  wrote:
>> 
>> Hi folks,
>> 
>> The stadium concept was introduced more or less formally since April of this 
>> year. At the time it was introduced (see [1]), the list of deliverables 
>> included neutron, client, specs and *-aas services. As you may be well 
>> aware, 6+ months is a long time in the OpenStack world, and lots of things 
>> happened since then. The list has grown to [2]. 
>> 
>> When I think about what 'deliverables' are, I am inclined to think that all 
>> of the projects that are part of the list will have to behave and follow the 
>> same rules, provided that there is flexibility given by the tags. However, 
>> reality has proven us that rules are somewhat difficult to follow and 
>> enforce, and some boundaries may be too strict for some initiatives to 
>> comply with. This is especially true if we go from a handful of projects 
>> that we had when this had started to the nearly the two dozens we have now.
>> As a result, there is quite an effort imposed on the PTL, the various 
>> liaisons (release, infra, docs, testing, etc) and the core team to help 
>> manage the existing relationships and to ensure that the picture stays 
>> coherent over time. Sometimes the decision of being part of this list is 
>> even presented before one can see any code, and that defeats the whole point 
>> of the deliverable association. I have experienced first hand that this has 
>> become a burden, and I fear that the stadium might be an extra layer of 
>> governance/complexity that could even interfere with the existing 
>> responsibilities of the TC and of OpenStack infra.
>> 
>> So my question is: would revisiting/clarifying the concept be due after some 
>> time we have seen it in action? I would like to think so. To be fair, I am 
>> not sure what the right answer is, but I know for a fact that some 
>> iterations are in order, and I like to make a proposal:
>> 
>> I would like to suggest that we evolve the structure of the Neutron 
>> governance, so that most of the deliverables that are now part of the 
>> Neutron stadium become standalone projects that are entirely self-governed 
>> (they have their own core/release teams, etc). In order to denote the 
>> initiatives that are related to Neutron I would like to present two new tags 
>> that projects can choose to label themselves with:
>> 
> 
> Interesting proposal, and I’m just thinking out loud here. I’m generally in 
> favor of separating the governance as we separate the dependencies, just 
> because at some point what we’re doing doesn’t scale. To provide a little 
> context, there are two points worth keeping in mind:
> 
> - The neutron stadium actually slightly pre-dates the big tent, and works 
> around some earlier governance friction. So it may make less sense now in 
> light of those changes.
> 
> If my memory doesn't fail me, the first time I recall of talks about the big 
> tent is September 2014, in this email thread:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046437.html 
> 
> 
> So, no I don't think we were a novelty :)
>  
> 
> - Many of the neutron subprojects *MUST RELEASE IN LOCKSTEP WITH NEUTRON* to 
> be useful. These items are less useful to be considered standalone, as they 
> need general oversight, co-gating, and such, to stay sane. As we break the 
> massive coupling that exists, this point will get less and less relevant. 
> 
> I don't think that requires the oversight of a single individual, but you're 
> right that this point will fade away over time.
>  
> 
> - I think that part of the initial intent was that these small subprojects 
> would have their own core teams, but be able to take advantage of the 
> infrastructure that exists around neutron as a whole (specs, release team, 
> stable team, co-gates, mentors).
> 
> For your proposal, are you suggesting:
> 
> 1. That these projects are fully separate, with their own PTLs and 
> everything, and just have tags that imply their neutron dependency?  OR,
> 
> My point is: the project decides what's best, I don't have to have an opinion 
> :)

I don’t think you *have* to have an opinion in either scheme. That sounds 
self-imposed. Delegate it.

> 
> If they want to signal a relationship with Neutron they can do so by choosing 
> one of the two tags being proposed.
>  
> 2. That they stay stadium projects, but we use tags to differentiate them? 
> Many already have different core teams and their own specs process.
> 
> Must there be a place where we are together that is not OpenStack already? 
> And what would that together mean exactly? That's the conundrum I am trying 
> to move away from
>  
> 
> Are there particular projects that ad

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Armando M.
On 30 November 2015 at 20:47, Doug Wiegley 
wrote:

>
> On Nov 30, 2015, at 5:56 PM, Armando M.  wrote:
>
> Hi folks,
>
> The stadium concept was introduced more or less formally since April of
> this year. At the time it was introduced (see [1]), the list of
> deliverables included neutron, client, specs and *-aas services. As you may
> be well aware, 6+ months is a long time in the OpenStack world, and lots of
> things happened since then. The list has grown to [2].
>
> When I think about what 'deliverables' are, I am inclined to think that
> all of the projects that are part of the list will have to behave and
> follow the same rules, provided that there is flexibility given by the
> tags. However, reality has proven us that rules are somewhat difficult to
> follow and enforce, and some boundaries may be too strict for some
> initiatives to comply with. This is especially true if we go from a handful
> of projects that we had when this had started to the nearly the two dozens
> we have now.
>
> As a result, there is quite an effort imposed on the PTL, the various
> liaisons (release, infra, docs, testing, etc) and the core team to help
> manage the existing relationships and to ensure that the picture stays
> coherent over time. Sometimes the decision of being part of this list is
> even presented before one can see any code, and that defeats the whole
> point of the deliverable association. I have experienced first hand that
> this has become a burden, and I fear that the stadium might be an extra
> layer of governance/complexity that could even interfere with the existing
> responsibilities of the TC and of OpenStack infra.
>
> So my question is: would revisiting/clarifying the concept be due after
> some time we have seen it in action? I would like to think so. To be
> fair, I am not sure what the right answer is, but I know for a fact that
> some iterations are in order, and I like to make a proposal:
>
> I would like to suggest that we evolve the structure of the Neutron
> governance, so that most of the deliverables that are now part of the
> Neutron stadium become standalone projects that are entirely
> self-governed (they have their own core/release teams, etc). In order to
> denote the initiatives that are related to Neutron I would like to present
> two new tags that projects can choose to label themselves with:
>
> Interesting proposal, and I’m just thinking out loud here. I’m generally
> in favor of separating the governance as we separate the dependencies, just
> because at some point what we’re doing doesn’t scale. To provide a little
> context, there are two points worth keeping in mind:
>
> - The neutron stadium actually slightly pre-dates the big tent, and works
> around some earlier governance friction. So it may make less sense now in
> light of those changes.
>

If my memory doesn't fail me, the first talk I recall about the big
tent was in September 2014, in this email thread:

http://lists.openstack.org/pipermail/openstack-dev/2014-September/046437.html

So, no, I don't think we were a novelty :)


>
> - Many of the neutron subprojects *MUST RELEASE IN LOCKSTEP WITH NEUTRON*
> to be useful. These items are less useful to be considered standalone, as
> they need general oversight, co-gating, and such, to stay sane. As we break
> the massive coupling that exists, this point will get less and less
> relevant.
>

I don't think that requires the oversight of a single individual, but
you're right that this point will fade away over time.


>
> - I think that part of the initial intent was that these small subprojects
> would have their own core teams, but be able to take advantage of the
> infrastructure that exists around neutron as a whole (specs, release team,
> stable team, co-gates, mentors).
>
> For your proposal, are you suggesting:
>
> 1. That these projects are fully separate, with their own PTLs and
> everything, and just have tags that imply their neutron dependency?  OR,
>

My point is: the project decides what's best, I don't have to have an
opinion :)

If they want to signal a relationship with Neutron they can do so by
choosing one of the two tags being proposed.


> 2. That they stay stadium projects, but we use tags to differentiate them?
> Many already have different core teams and their own specs process.
>

Must there be a place where we are together that is not OpenStack already?
And what would that "together" mean exactly? That's the conundrum I am trying
to move away from.


>
> Are there particular projects that add more overhead? Does your proposal
> make it easier to get code to our user base? Does it add a bunch of
> makework to fit into a new model (same question, really). Is the PTL
> overhead too high currently? Is the shed pink or blue?  :-)
>

Not sure what you're asking: this isn't just about overhead to a single or
a small group of people. It's about arranging ourselves the best way
possible in order to deal with growth: if there were to be an upper

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Doug Wiegley

> On Nov 30, 2015, at 9:11 PM, Russell Bryant  wrote:
> 
> Some additional context: there are a few proposals for additional git
> repositories for Neutron that have been put on hold while we sort this out.
> 
> Add networking-bagpipe:
>  https://review.openstack.org/#/c/244736/
> 
> Add the Astara driver:
>  https://review.openstack.org/#/c/230699/
> 
> Add tap-as-a-service:
>  https://review.openstack.org/#/c/229869/
> 
> On 11/30/2015 07:56 PM, Armando M. wrote:
>> I would like to suggest that we evolve the structure of the Neutron
>> governance, so that most of the deliverables that are now part of the
>> Neutron stadium become standalone projects that are entirely
>> self-governed (they have their own core/release teams, etc). In order to
>> denote the initiatives that are related to Neutron I would like to
>> present two new tags that projects can choose to label themselves with:
>> 
>>  * 'is-neutron-subsystem': this means that the project provides
>>networking services by implementing an integral part (or parts) of
>>an end-to-end neutron system. Examples are: a service plugin, an ML2
>>mech driver, a monolithic plugin, an agent etc. It's something an
>>admin has to use in order to deploy Neutron in a certain configuration.
>>  * 'use-neutron-system': this means that the project provides
>>networking services by using a pre-deployed end-to-end neutron
>>system as is. No modifications whatsoever.
> 
> I just want to clarify the proposal.  IIUC, you propose splitting most
> of what is currently separately deliverables of the Neutron team and
> making them separate projects in terms of OpenStack governance.  When I
> originally proposed including networking-ovn under Neutron (and more
> generally, making room for all drivers to be included), making them
> separate projects was one of the options on the table, but it didn't
> seem best at the time.  For reference, that thread was here:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062310.html
> 
> When I was originally proposing this, I was only thinking about Neutron
> drivers, the stuff that connects Neutron to some other system to make
> Neutron do something.  The list has grown to include other things, as well.
> 
> I'm not sure where you propose the line to be, but for the sake of
> discussion, let's assume every deliverable in the governance definition
> for Neutron is under consideration for being split out with the
> exception of neutron, neutron-specs, and python-neutronclient.  The
> remaining deliverables are:
> 
>dragonflow:
>kuryr:
>networking-ale-omniswitch:
>networking-arista:
>networking-bgpvpn:
>networking-calico:
>networking-cisco:
>networking-fortinet:
>networking-hpe:
>networking-hyperv:
>networking-infoblox:
>networking-fujitsu:
>networking-l2gw:
>networking-lenovo:
>networking-midonet:
>networking-odl:
>networking-ofagent:
>networking-onos:
>networking-ovn:
>networking-plumgrid:
>networking-powervm:
>networking-sfc:
>networking-vsphere:
>octavia:
>python-neutron-pd-driver:
>vmware-nsx:
> 
> I think it's helpful to break these into categories, because the answer
> may be different for each group.  Here's my attempt at breaking this
> list into some categories:
> 
> 1) A consumer of Neutron
> 
>kuryr
> 
> IIUC, kuryr is a consumer of Neutron.  Its interaction with Neutron is
> via using Neutron's REST APIs.  You could think of kuryr's use of
> Neutron as architecturally similar to how Nova uses Neutron.
> 
> I think this project makes a ton of sense to become independent.
> 
> 2) Implementation of a networking technology
> 
>dragonflow
> 
> The dragonflow repo includes a couple of things.  It includes dragonflow
> itself, and the Neutron driver to connect to it.  Using Astara as an
> example to follow, dragonflow itself could be an independent project.
> 
> Following that, the built-in ML2/ovs or ML2/lb control plane could be
> separate, too, though that's much more painful and complex in practice.
> 
>octavia
> 
> Octavia also seems to fall into this category, just for LBaaS.  It's not
> just a driver, it's a LBaaS service VM orchestrator (which is in part
> what Astara is, too).

From the perspective of our users, I tend to consider neutron-lbaas and octavia 
as a unit, technical distinctions aside.

> 
> It seems reasonable to propose these as independent projects.
> 
> 3) New APIs
> 
> There are some repos that are implementing new REST APIs for Neutron.
> They're independent enough to need their own driver layer, but coupled
> with Neutron enough to still need to run inside of Neutron as they can't
> do everything they need to do by only interfacing with Neutron REST APIs
> (today, at least).
> 
>networking-l2gw:
>networking-sfc:
> 
> Here things start to get less clear to me.  Unless the only interaction
> with Neutron is via its REST API, then it seems like i

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Brandon Logan
On Mon, 2015-11-30 at 23:11 -0500, Russell Bryant wrote:
> Some additional context: there are a few proposals for additional git
> repositories for Neutron that have been put on hold while we sort this out.
> 
> Add networking-bagpipe:
>   https://review.openstack.org/#/c/244736/
> 
> Add the Astara driver:
>   https://review.openstack.org/#/c/230699/
> 
> Add tap-as-a-service:
>   https://review.openstack.org/#/c/229869/
> 
> On 11/30/2015 07:56 PM, Armando M. wrote:
> > I would like to suggest that we evolve the structure of the Neutron
> > governance, so that most of the deliverables that are now part of the
> > Neutron stadium become standalone projects that are entirely
> > self-governed (they have their own core/release teams, etc). In order to
> > denote the initiatives that are related to Neutron I would like to
> > present two new tags that projects can choose to label themselves with:
> > 
> >   * 'is-neutron-subsystem': this means that the project provides
> > networking services by implementing an integral part (or parts) of
> > an end-to-end neutron system. Examples are: a service plugin, an ML2
> > mech driver, a monolithic plugin, an agent etc. It's something an
> > admin has to use in order to deploy Neutron in a certain configuration.
> >   * 'use-neutron-system': this means that the project provides
> > networking services by using a pre-deployed end-to-end neutron
> > system as is. No modifications whatsoever.
> 
> I just want to clarify the proposal.  IIUC, you propose splitting most
> of what is currently separately deliverables of the Neutron team and
> making them separate projects in terms of OpenStack governance.  When I
> originally proposed including networking-ovn under Neutron (and more
> generally, making room for all drivers to be included), making them
> separate projects was one of the options on the table, but it didn't
> seem best at the time.  For reference, that thread was here:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062310.html
> 
> When I was originally proposing this, I was only thinking about Neutron
> drivers, the stuff that connects Neutron to some other system to make
> Neutron do something.  The list has grown to include other things, as well.
> 
> I'm not sure where you propose the line to be, but for the sake of
> discussion, let's assume every deliverable in the governance definition
> for Neutron is under consideration for being split out with the
> exception of neutron, neutron-specs, and python-neutronclient.  The
> remaining deliverables are:
> 
> dragonflow:
> kuryr:
> networking-ale-omniswitch:
> networking-arista:
> networking-bgpvpn:
> networking-calico:
> networking-cisco:
> networking-fortinet:
> networking-hpe:
> networking-hyperv:
> networking-infoblox:
> networking-fujitsu:
> networking-l2gw:
> networking-lenovo:
> networking-midonet:
> networking-odl:
> networking-ofagent:
> networking-onos:
> networking-ovn:
> networking-plumgrid:
> networking-powervm:
> networking-sfc:
> networking-vsphere:
> octavia:
> python-neutron-pd-driver:
> vmware-nsx:
> 
> I think it's helpful to break these into categories, because the answer
> may be different for each group.  Here's my attempt at breaking this
> list into some categories:
> 
> 1) A consumer of Neutron
> 
> kuryr
> 
> IIUC, kuryr is a consumer of Neutron.  Its interaction with Neutron is
> via using Neutron's REST APIs.  You could think of kuryr's use of
> Neutron as architecturally similar to how Nova uses Neutron.
> 
> I think this project makes a ton of sense to become independent.
> 
> 2) Implementation of a networking technology
> 
> dragonflow
> 
> The dragonflow repo includes a couple of things.  It includes dragonflow
> itself, and the Neutron driver to connect to it.  Using Astara as an
> example to follow, dragonflow itself could be an independent project.
> 
> Following that, the built-in ML2/ovs or ML2/lb control plane could be
> separate, too, though that's much more painful and complex in practice.
> 
> octavia
> 
> Octavia also seems to fall into this category, just for LBaaS.  It's not
> just a driver, it's a LBaaS service VM orchestrator (which is in part
> what Astara is, too).

Actually I would put Octavia in #1 as it only interacts with neutron
through its REST API.  There is a neutron-lbaas octavia driver that
simply calls the Octavia REST API, but it lives in the
neutron-lbaas tree.  Octavia is standalone and consumes all openstack
services through their REST APIs.

> 
> It seems reasonable to propose these as independent projects.
> 
> 3) New APIs
> 
> There are some repos that are implementing new REST APIs for Neutron.
> They're independent enough to need their own driver layer, but coupled
> with Neutron enough to still need to run inside of Neutron as they can't
> do everything they need to do by only interfacing with Neutron REST APIs
> (today, at least).

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Doug Wiegley

> On Nov 30, 2015, at 5:56 PM, Armando M.  wrote:
> 
> Hi folks,
> 
> The stadium concept was introduced more or less formally in April of this 
> year. At the time it was introduced (see [1]), the list of deliverables 
> included neutron, client, specs and *-aas services. As you may be well aware, 
> 6+ months is a long time in the OpenStack world, and lots of things happened 
> since then. The list has grown to [2]. 
> 
> When I think about what 'deliverables' are, I am inclined to think that all 
> of the projects that are part of the list will have to behave and follow the 
> same rules, provided that there is flexibility given by the tags. However, 
> reality has shown us that rules are somewhat difficult to follow and 
> enforce, and some boundaries may be too strict for some initiatives to comply 
> with. This is especially true if we go from a handful of projects that we had 
> when this started to the nearly two dozen we have now.
> As a result, there is quite an effort imposed on the PTL, the various 
> liaisons (release, infra, docs, testing, etc) and the core team to help 
> manage the existing relationships and to ensure that the picture stays 
> coherent over time. Sometimes the decision to be part of this list is even 
> presented before one can see any code, and that defeats the whole point of 
> the deliverable association. I have experienced first hand that this has 
> become a burden, and I fear that the stadium might be an extra layer of 
> governance/complexity that could even interfere with the existing 
> responsibilities of the TC and of OpenStack infra.
> 
> So my question is: would revisiting/clarifying the concept be due now that 
> we have seen it in action for some time? I would like to think so. To be 
> fair, I am not sure what the right answer is, but I know for a fact that some 
> iterations are in order, and I'd like to make a proposal:
> 
> I would like to suggest that we evolve the structure of the Neutron 
> governance, so that most of the deliverables that are now part of the Neutron 
> stadium become standalone projects that are entirely self-governed (they have 
> their own core/release teams, etc). In order to denote the initiatives that 
> are related to Neutron I would like to present two new tags that projects can 
> choose to label themselves with:
> 
Interesting proposal, and I’m just thinking out loud here. I’m generally in 
favor of separating the governance as we separate the dependencies, just 
because at some point what we’re doing doesn’t scale. To provide a little 
context, there are a few points worth keeping in mind:

- The neutron stadium actually slightly pre-dates the big tent, and works 
around some earlier governance friction. So it may make less sense now in light 
of those changes.

- Many of the neutron subprojects *MUST RELEASE IN LOCKSTEP WITH NEUTRON* to be 
useful. These items are less useful when considered standalone, as they need 
general oversight, co-gating, and such, to stay sane. As we break the massive 
coupling that exists, this point will get less and less relevant. 

- I think that part of the initial intent was that these small subprojects 
would have their own core teams, but be able to take advantage of the 
infrastructure that exists around neutron as a whole (specs, release team, 
stable team, co-gates, mentors).

For your proposal, are you suggesting:

1. That these projects are fully separate, with their own PTLs and everything, 
and just have tags that imply their neutron dependency?  OR,
2. That they stay stadium projects, but we use tags to differentiate them? Many 
already have different core teams and their own specs process.

Are there particular projects that add more overhead? Does your proposal make 
it easier to get code to our user base? Does it add a bunch of make-work to fit 
into a new model (same question, really)? Is the PTL overhead too high 
currently? Is the shed pink or blue?  :-)

Thanks,
doug




> 
> 'is-neutron-subsystem': this means that the project provides networking 
> services by implementing an integral part (or parts) of an end-to-end neutron 
> system. Examples are: a service plugin, an ML2 mech driver, a monolithic 
> plugin, an agent etc. It's something an admin has to use in order to deploy 
> Neutron in a certain configuration.
> 'use-neutron-system': this means that the project provides networking 
> services by using a pre-deployed end-to-end neutron system as is. No 
> modifications whatsoever.
> 
> As a result, there is no oversight by the Neutron core team, the PTL or 
> liaisons, but that should not stop people from being involved if they choose 
> to. We would not lose the important piece of information which is the 
> association to Neutron, and at the same time that would relieve some of us 
> from the onus of dealing with initiatives for which we lack enough context 
> and the ability to provide effective guidance.
> 
> In the process, the core team should stay focused on breaking the coupling 
> that still affects Neutron so that projects that depend on it can do so more 
> reliably, and yet innovate more independently.

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Russell Bryant
Some additional context: there are a few proposals for additional git
repositories for Neutron that have been put on hold while we sort this out.

Add networking-bagpipe:
  https://review.openstack.org/#/c/244736/

Add the Astara driver:
  https://review.openstack.org/#/c/230699/

Add tap-as-a-service:
  https://review.openstack.org/#/c/229869/

On 11/30/2015 07:56 PM, Armando M. wrote:
> I would like to suggest that we evolve the structure of the Neutron
> governance, so that most of the deliverables that are now part of the
> Neutron stadium become standalone projects that are entirely
> self-governed (they have their own core/release teams, etc). In order to
> denote the initiatives that are related to Neutron I would like to
> present two new tags that projects can choose to label themselves with:
> 
>   * 'is-neutron-subsystem': this means that the project provides
> networking services by implementing an integral part (or parts) of
> an end-to-end neutron system. Examples are: a service plugin, an ML2
> mech driver, a monolithic plugin, an agent etc. It's something an
> admin has to use in order to deploy Neutron in a certain configuration.
>   * 'use-neutron-system': this means that the project provides
> networking services by using a pre-deployed end-to-end neutron
> system as is. No modifications whatsoever.

I just want to clarify the proposal.  IIUC, you propose splitting most
of what is currently separately deliverables of the Neutron team and
making them separate projects in terms of OpenStack governance.  When I
originally proposed including networking-ovn under Neutron (and more
generally, making room for all drivers to be included), making them
separate projects was one of the options on the table, but it didn't
seem best at the time.  For reference, that thread was here:

http://lists.openstack.org/pipermail/openstack-dev/2015-April/062310.html

When I was originally proposing this, I was only thinking about Neutron
drivers, the stuff that connects Neutron to some other system to make
Neutron do something.  The list has grown to include other things, as well.

I'm not sure where you propose the line to be, but for the sake of
discussion, let's assume every deliverable in the governance definition
for Neutron is under consideration for being split out with the
exception of neutron, neutron-specs, and python-neutronclient.  The
remaining deliverables are:

dragonflow:
kuryr:
networking-ale-omniswitch:
networking-arista:
networking-bgpvpn:
networking-calico:
networking-cisco:
networking-fortinet:
networking-hpe:
networking-hyperv:
networking-infoblox:
networking-fujitsu:
networking-l2gw:
networking-lenovo:
networking-midonet:
networking-odl:
networking-ofagent:
networking-onos:
networking-ovn:
networking-plumgrid:
networking-powervm:
networking-sfc:
networking-vsphere:
octavia:
python-neutron-pd-driver:
vmware-nsx:

I think it's helpful to break these into categories, because the answer
may be different for each group.  Here's my attempt at breaking this
list into some categories:

1) A consumer of Neutron

kuryr

IIUC, kuryr is a consumer of Neutron.  Its interaction with Neutron is
via using Neutron's REST APIs.  You could think of kuryr's use of
Neutron as architecturally similar to how Nova uses Neutron.

I think this project makes a ton of sense to become independent.

2) Implementation of a networking technology

dragonflow

The dragonflow repo includes a couple of things.  It includes dragonflow
itself, and the Neutron driver to connect to it.  Using Astara as an
example to follow, dragonflow itself could be an independent project.

Following that, the built-in ML2/ovs or ML2/lb control plane could be
separate, too, though that's much more painful and complex in practice.

octavia

Octavia also seems to fall into this category, just for LBaaS.  It's not
just a driver, it's a LBaaS service VM orchestrator (which is in part
what Astara is, too).

It seems reasonable to propose these as independent projects.

3) New APIs

There are some repos that are implementing new REST APIs for Neutron.
They're independent enough to need their own driver layer, but coupled
with Neutron enough to still need to run inside of Neutron as they can't
do everything they need to do by only interfacing with Neutron REST APIs
(today, at least).

networking-l2gw:
networking-sfc:

Here things start to get less clear to me.  Unless the only interaction
with Neutron is via its REST API, it seems like it should be part of
Neutron.  Put another way, if the API runs as a part of the
neutron-server process, it should be considered part of Neutron if it
exists at all.

4) Neutron plugins/drivers

This is the biggest category.  It's all the glue code for connecting
Neutron to other pieces of software/hardware that implement some piece
of networking.

networking-ale-omniswitch:

[openstack-dev] [midonet] midonet weekly irc meeting

2015-11-30 Thread Ryu Ishimoto
Hi All!

We would like to start holding a weekly IRC meeting for midonet, and
we want to propose the meeting time of 9:00 UTC on Tuesdays.

Please let me know if anyone prefers a different time.

Best,
Ryu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steve Baker

On 01/12/15 15:39, Steve Baker wrote:

On 01/12/15 10:28, Steven Hardy wrote:

On Tue, Dec 01, 2015 at 08:47:20AM +1300, Steve Baker wrote:

On 30/11/15 23:21, Steven Hardy wrote:

On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:

I'm tasked to implement a command that shows error messages when a
deployment has failed. I have a vague memory of having seen scripts
that do something like this, if that exists, can somebody point me in
the right direction?

I wrote a super simple script and put it in a blog post a while back:

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html

All it does is find the failed SoftwareDeployment resources, then do heat
deployment-show on the resource, so you can see the stderr associated with
the failure.

Having tripleoclient do that by default would be useful.

Any opinions on what that should do, specifically? Traverse failed
resources to find error messages, I assume. Anything else?

Yeah, but I think for this to be useful, we need to go a bit deeper than
just showing the resource error - there are a number of typical failure
modes, and I end up repeating the same steps to debug every time.

1. SoftwareDeployment failed (mentioned above).  Every time, you need to
see the name of the SoftwareDeployment which failed, figure out if it
failed on one or all of the servers, then look at the stderr for clues.

2. A server failed to build (OS::Nova::Server resource is FAILED), here we
need to check both nova and ironic, looking first to see if ironic has the
node(s) in the wrong state for scheduling (e.g. nova gave us a no valid
host error), and then if they are OK in ironic, do nova show on the failed
host to see the reason nova gives us for it failing to go ACTIVE.

3. A stack timeout happened.  IIRC when this happens, we currently fail
with an obscure keystone related backtrace due to the token expiring.  We
should instead catch this error and show the heat stack status_reason,
which should say clearly the stack timed out.

If we could just make these three cases really clear and easy to debug, I
think things would be much better (IME the above are a high proportion of
all failures), but I'm sure folks can come up with other ideas to add to
the list.

I'm actually drafting a spec which includes a command which does this. I
hope to submit it soon, but here is the current state of that command's
description:

Diagnosing resources in a FAILED state
--------------------------------------

One command will be implemented:
- openstack overcloud failed list

This will print a yaml tree showing the hierarchy of nested stacks until it
gets to the actual failed resource, then it will show information regarding the
failure. For most resource types this information will be the status_reason,
but for software-deployment resources the deploy_stdout, deploy_stderr and
deploy_status code will be printed.

In addition to this stand-alone command, this output will also be printed when
an ``openstack overcloud deploy`` or ``openstack overcloud update`` command
results in a stack in a FAILED state.

This sounds great!

The spec is here.

I mean _here_

https://review.openstack.org/#/c/251587/
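
For anyone who wants to experiment before the spec lands, here is a rough
sketch of the kind of traversal the command description above implies. It is
not the spec's implementation; it assumes python-heatclient (whose resource
listing accepts a nested_depth argument in recent releases), and the function
name and depth are illustrative only:

def list_failed(heat, stack_name, depth=5):
    # Walk the stack and its nested stacks, printing the status_reason
    # of every resource that ended up in a FAILED state. `heat` is a
    # ready-made heatclient Client; auth plumbing is omitted.
    for res in heat.resources.list(stack_name, nested_depth=depth):
        if res.resource_status.endswith('FAILED'):
            print('%s (%s): %s' % (res.resource_name,
                                   res.resource_type,
                                   res.resource_status_reason))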

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steve Baker

On 01/12/15 10:28, Steven Hardy wrote:

On Tue, Dec 01, 2015 at 08:47:20AM +1300, Steve Baker wrote:

On 30/11/15 23:21, Steven Hardy wrote:

On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:

I'm tasked to implement a command that shows error messages when a
deployment has failed. I have a vague memory of having seen scripts
that do something like this, if that exists, can somebody point me in
the right direction?

I wrote a super simple script and put it in a blog post a while back:

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html

All it does is find the failed SoftwareDeployment resources, then do heat
deployment-show on the resource, so you can see the stderr associated with
the failure.

Having tripleoclient do that by default would be useful.


Any opinions on what that should do, specifically? Traverse failed
resources to find error messages, I assume. Anything else?

Yeah, but I think for this to be useful, we need to go a bit deeper than
just showing the resource error - there are a number of typical failure
modes, and I end up repeating the same steps to debug every time.

1. SoftwareDeployment failed (mentioned above).  Every time, you need to
see the name of the SoftwareDeployment which failed, figure out if it
failed on one or all of the servers, then look at the stderr for clues.

2. A server failed to build (OS::Nova::Server resource is FAILED), here we
need to check both nova and ironic, looking first to see if ironic has the
node(s) in the wrong state for scheduling (e.g. nova gave us a no valid
host error), and then if they are OK in ironic, do nova show on the failed
host to see the reason nova gives us for it failing to go ACTIVE.

3. A stack timeout happened.  IIRC when this happens, we currently fail
with an obscure keystone related backtrace due to the token expiring.  We
should instead catch this error and show the heat stack status_reason,
which should say clearly the stack timed out.

If we could just make these three cases really clear and easy to debug, I
think things would be much better (IME the above are a high proportion of
all failures), but I'm sure folks can come up with other ideas to add to
the list.


I'm actually drafting a spec which includes a command which does this. I
hope to submit it soon, but here is the current state of that command's
description:

Diagnosing resources in a FAILED state
--------------------------------------

One command will be implemented:
- openstack overcloud failed list

This will print a yaml tree showing the hierarchy of nested stacks until it
gets to the actual failed resource, then it will show information regarding the
failure. For most resource types this information will be the status_reason,
but for software-deployment resources the deploy_stdout, deploy_stderr and
deploy_status code will be printed.

In addition to this stand-alone command, this output will also be printed when
an ``openstack overcloud deploy`` or ``openstack overcloud update`` command
results in a stack in a FAILED state.

This sounds great!

The spec is here.

Another piece of low-hanging-fruit in the meantime is we should actually
print the stack_status_reason on failure:

https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/overcloud_deploy.py#L280

The DeploymentError raised could include the stack_status_reason vs the
unqualified "Heat Stack create failed".

I guess your event listing partially overlaps with this, as you can now
derive the stack_status_reason from the last event, but it'd still be good
to loudly output it so folks can see more quickly when things such as
timeouts happen that are clearly displayed in the top-level stack status.


Yes, this would be a trivially implemented quick win.
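
For illustration, the quick win could look roughly like the sketch below. The
exception and attribute names follow the message above and the linked
tripleoclient code, but treat the details as assumptions rather than the
actual patch:

# Hedged sketch: include the heat stack's status_reason in the error
# instead of the unqualified "Heat Stack create failed".
stack = heat.stacks.get(stack_id)
if stack.stack_status == 'CREATE_FAILED':
    raise DeploymentError('Heat Stack create failed: %s'
                          % stack.stack_status_reason)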

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][aodh] The purpose of notification about alarm updating

2015-11-30 Thread liusheng

Hi folks,

Currently, a notification message is emitted when an alarm is updated 
(state transition, attribute update, creation). This functionality was added 
by change [1], but the change didn't describe its purpose. So I wonder whether 
there is any usage for this type of notification, since we can get the whole 
details of an alarm change from the alarm-history API. The notification is 
implicitly ignored by default, because the "notification_driver" config option 
won't be configured by default. If we enable this option in aodh.conf and 
enable "store_events" in ceilometer.conf, this type of notification will be 
stored as events, so maybe some users want to aggregate these with events? 
What's your opinion?


I have made a change to try to deprecate this notification; see [2].

[1] https://review.openstack.org/#/c/48949/
[2] https://review.openstack.org/#/c/246727/

BR
Liu sheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-Project meeting, Tue Dec 1st, 21:00 UTC & *New Location*

2015-11-30 Thread Mike Perez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting December 1st at 21:00 UTC in the NEW
#openstack-meeting-cp IRC channel, with the following agenda: 

* Review past action items
* Team announcements (horizontal, vertical, diagonal)
* backwards compat of libraries and clients [1]
* Cross-project Liaisons [2]
* Open discussion

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

[1] - https://review.openstack.org/226157
[2] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/080869.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread hao wang
Thank you Walter for that information. Yes, I know Smaug; actually I
was one of the speakers in that session :).
This project was started by my colleague, Eran Gampel.
AFAIK, resources in Cinder (volumes, etc.) are considered to be among the
protection objects in it.

So we can see there is more and more attention to DR. As discussed
here, it's a complex issue, so I feel Cinder will
provide the basic ability to support DR, and Smaug or DRagon will use
those APIs (and Nova's APIs, Heat's APIs, etc.) to
implement the DR goal for OpenStack resources.

2015-12-01 0:50 GMT+08:00 Walter A. Boring IV :
> As a side note to the DR discussion here, there was a session in Tokyo that
> talked about a new
> DR project called Smaug.   You can see their mission statement here:
> https://launchpad.net/smaug
>
> https://github.com/openstack/smaug
>
> There is another service in the making called DRagon:
> https://www.youtube.com/watch?v=upCzuFnswtw
> http://www.slideshare.net/AlonMarx/dragon-and-cinder-v-brownbag-54639869
>
> Yes, that's two DR-like services starting in OpenStack that are related to
> dragons.
>
> Walt
>
>
>> Sean and Michal,
>>
>> In fact, there is a reason that I ask this question. Recently I have a
>> confusion about if cinder should provide the ability of Disaster
>> Recovery to storage resources, like volume. I mean we have volume
>> replication v1&v2, but for DR, especially DR between two independent
>> OpenStack sites (production and DR site), I feel we still need more
>> features to support it, for example consistency group for replication,
>> etc. I'm not sure if those features belong in Cinder or some new
>> project for DR.
>>
>> BR
>> WangHao
>>
>> 2015-11-30 3:02 GMT+08:00 Sean McGinnis :
>>>
>>> On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:

 On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:
>
> Hi guys,
>
> I notice nova has a clarification of project scope:
> http://docs.openstack.org/developer/nova/project_scope.html
>
> I want to find cinder's, but failed,  do you know where to find it?
>
> It's important to let developers know what feature should be
> introduced into cinder and what shouldn't.
>
> BR
> Wang Hao

 I believe the Nova team needed to formalize the scope to have an explanation
 for all the "this doesn't belong in Nova" comments on feature requests.
 Does Cinder suffer from similar problems? From my perspective it's not
 critically needed.
>>>
>>> I agree. I haven't seen a need for something like that with Cinder. Wang
>>> Hao, is there a reason you feel you need that?
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> .
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] Cross-project Liaisons

2015-11-30 Thread Mike Perez
Hello all,

Currently for cross-project specs, the author of the spec spends the time to
explain why a certain feature makes sense to be across multiple projects. This
also includes giving technical solutions for it working with a variety of
services and making sure everyone is happy.

Today we have the following problems:

* Authors of specs can't progress forward with specs because of lack of
  attention, eventually getting frustrated and giving up.
* Some projects could miss a cross-project spec being approved by the TC.

It has been expressed to me at the previous Cross-Project Communication Tokyo
summit session that PTLs don't have time for cross-project issues. I agree; as
a previous PTL I know your time is thin. However, I do think someone from each
project needs to be aware and involved with cross-project initiatives.

I would like to propose cross-project liaisons which would have the following
duties:

* Watching the cross-project spec repo [1].
  - Comment on specs that involve your project. +1 to carry forward for TC
approval.
-- If you're not able to provide technical guidance on certain specs for
   your project, it's up to you to get the right people involved.
-- Assuming you get someone else involved, it's up to you to make sure they
   keep up with communication.
  - Communicate back to your project's meeting on certain cross-project specs
when necessary. This is also good for the previous bullet point of sourcing
who would have technical knowledge for certain specs.
* Attend the cross-project meeting when it's called for [2].


[1] - 
https://review.openstack.org/#/q/project:+openstack/openstack-specs+status:+open,n,z
[2] - https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Third Party CI Deadlines for Mitaka and N

2015-11-30 Thread Anita Kuno
On 11/30/2015 08:00 PM, Mike Perez wrote:
> On October 28th 2015 at the Ironic Third Party CI summit session [1], there was
> consensus by the Ironic core and participating vendors that the set of
> deadlines will be:
> 
> * Mitaka-2: Driver teams will have registered their intent to run CI by creating
> system accounts and identifying a point of contact for their CI team in the
> Third party CI wiki [2].
> * Mitaka Feature Freeze: All driver systems show the ability to receive events
> and post comments in the sandbox.
> * N release feature freeze: Per patch testing and posting comments.
> 
> There are requirements set for OpenStack Third Party CI's [3]. In addition
> Ironic third party CI's must:
> 
> 1) Test all drivers your company has integrated in Ironic.
> 
> For example, if your company has two drivers in Ironic, you would need to have
> a CI that tests against the two and reports the results for each, for every
> Ironic upstream patch. The tests come from a Devstack Gate job template [4], in
> which you just need to switch the "deploy_driver" to your driver.
> 
> To get started, read OpenStack's third party testing documentation [5]. There
> are efforts by OpenStack Infra to allow others to run third party CI similar to
> the OpenStack upstream CI using Puppet [6], and instructions are available [7].
> Don't forget to register your CI in the wiki [2], there is no need to announce
> about it on any mailing list.
> 
> OpenStack Infra also provides third party CI help via meetings [8], and the
> Ironic team has designated people to answer questions with setting up a third
> party CI in the #openstack-ironic room [9].
> 
> If a solution does not have a CI watching for events and posting comments to
> the sandbox [10] by the Mitaka feature freeze, it'll be assumed the driver is
> not active, and can be removed from the Ironic repository as of the Mitaka
> release.

Thanks Mike, great post.

One point of clarification, the sandbox repo for third-party ci systems
is called ci-sandbox:
https://review.openstack.org/#/q/project:+openstack-dev/ci-sandbox,n,z
also found here: http://git.openstack.org/cgit/openstack-dev/ci-sandbox/

The sandbox linked in the original post is for developers to experiment
with Gerrit not for ci systems.

Thank you,
Anita.

> 
> If a solution is not being tested in a CI system and reporting to OpenStack
> gerrit Ironic patches by the deadline of the N release feature freeze, an
> Ironic driver could be removed from the Ironic repository. Without a CI system,
> Ironic core is unable to verify your driver works in the N release of Ironic.
> 
> If there is something not clear about this email, please email me *directly*
> with your question. You can also reach me as thingee on Freenode IRC in the
> #openstack-ironic channel. Again I want you all to be successful in this, and
> take advantage of this testing you will have with your product. Please
> communicate with me and reach out to the team for help.
> 
> [1] - https://etherpad.openstack.org/p/summit-mitaka-ironic-third-party-ci
> [2] - https://wiki.openstack.org/wiki/ThirdPartySystems
> [3] - 
> http://docs.openstack.org/infra/system-config/third_party.html#requirements
> [4] - 
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L961
> [5] - http://docs.openstack.org/infra/system-config/third_party.html
> [6] - https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/
> [7] - 
> https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
> [8] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
> [9] - https://wiki.openstack.org/wiki/Ironic/Testing#Questions
> [10] - https://review.openstack.org/#/q/project:+openstack-dev/sandbox,n,z
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Third Party CI Deadlines for Mitaka and N

2015-11-30 Thread Mike Perez
On October 28th 2015 at the Ironic Third Party CI summit session [1], there was
consensus by the Ironic core and participating vendors that the set of
deadlines will be:

* Mitaka-2: Driver teams will have registered their intent to run CI by creating
system accounts and identifying a point of contact for their CI team in the
Third party CI wiki [2].
* Mitaka Feature Freeze: All driver systems show the ability to receive events
and post comments in the sandbox.
* N release feature freeze: Per patch testing and posting comments.

There are requirements set for OpenStack Third Party CI's [3]. In addition
Ironic third party CI's must:

1) Test all drivers your company has integrated in Ironic.

For example, if your company has two drivers in Ironic, you would need to have
a CI that tests against the two and reports the results for each, for every
Ironic upstream patch. The tests come from a Devstack Gate job template [4], in
which you just need to switch the "deploy_driver" to your driver.

To get started, read OpenStack's third party testing documentation [5]. There
are efforts by OpenStack Infra to allow others to run third party CI similar to
the OpenStack upstream CI using Puppet [6], and instructions are available [7].
Don't forget to register your CI in the wiki [2], there is no need to announce
about it on any mailing list.

OpenStack Infra also provides third party CI help via meetings [8], and the
Ironic team has designated people to answer questions with setting up a third
party CI in the #openstack-ironic room [9].

If a solution does not have a CI watching for events and posting comments to
the sandbox [10] by the Mitaka feature freeze, it'll be assumed the driver is
not active, and can be removed from the Ironic repository as of the Mitaka
release.

If a solution is not being tested in a CI system and reporting to OpenStack
gerrit Ironic patches by the deadline of the N release feature freeze, an
Ironic driver could be removed from the Ironic repository. Without a CI system,
Ironic core is unable to verify your driver works in the N release of Ironic.

If there is something not clear about this email, please email me *directly*
with your question. You can also reach me as thingee on Freenode IRC in the
#openstack-ironic channel. Again I want you all to be successful in this, and
take advantage of this testing you will have with your product. Please
communicate with me and reach out to the team for help.

[1] - https://etherpad.openstack.org/p/summit-mitaka-ironic-third-party-ci
[2] - https://wiki.openstack.org/wiki/ThirdPartySystems
[3] - 
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[4] - 
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L961
[5] - http://docs.openstack.org/infra/system-config/third_party.html
[6] - https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/
[7] - 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[8] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
[9] - https://wiki.openstack.org/wiki/Ironic/Testing#Questions
[10] - https://review.openstack.org/#/q/project:+openstack-dev/sandbox,n,z

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Evolving the stadium concept

2015-11-30 Thread Armando M.
Hi folks,

The stadium concept was introduced more or less formally in April of
this year. At the time it was introduced (see [1]), the list of
deliverables included neutron, client, specs and *-aas services. As you may
be well aware, 6+ months is a long time in the OpenStack world, and lots of
things happened since then. The list has grown to [2].

When I think about what 'deliverables' are, I am inclined to think that all
of the projects that are part of the list will have to behave and follow
the same rules, provided that there is flexibility given by the tags. However,
reality has shown us that rules are somewhat difficult to follow and
enforce, and some boundaries may be too strict for some initiatives to
comply with. This is especially true if we go from a handful of projects
that we had when this started to the nearly two dozen we have now.

As a result, there is quite an effort imposed on the PTL, the various
liaisons (release, infra, docs, testing, etc) and the core team to help
manage the existing relationships and to ensure that the picture stays
coherent over time. Sometimes the decision to be part of this list is
even presented before one can see any code, and that defeats the whole
point of the deliverable association. I have experienced first hand that
this has become a burden, and I fear that the stadium might be an extra
layer of governance/complexity that could even interfere with the existing
responsibilities of the TC and of OpenStack infra.

So my question is: would revisiting/clarifying the concept be due now that
we have seen it in action for some time? I would like to think so. To be fair,
I am not sure what the right answer is, but I know for a fact that some
iterations are in order, and I'd like to make a proposal:

I would like to suggest that we evolve the structure of the Neutron
governance, so that most of the deliverables that are now part of the
Neutron stadium become standalone projects that are entirely self-governed
(they have their own core/release teams, etc). In order to denote the
initiatives that are related to Neutron I would like to present two new
tags that projects can choose to label themselves with:


   - 'is-neutron-subsystem': this means that the project provides
   networking services by implementing an integral part (or parts) of an
   end-to-end neutron system. Examples are: a service plugin, an ML2 mech
   driver, a monolithic plugin, an agent etc. It's something an admin has to
   use in order to deploy Neutron in a certain configuration.
   - 'use-neutron-system': this means that the project provides networking
   services by using a pre-deployed end-to-end neutron system as is. No
   modifications whatsoever.

As a result, there is no oversight by the Neutron core team, the PTL or
liaisons, but that should not stop people from being involved if they choose
to. We would not lose the important piece of information which is the
association to Neutron, and at the same time that would relieve some of us
from the onus of dealing with initiatives for which we lack enough context
and the ability to provide effective guidance.

In the process, the core team should stay focused on breaking the coupling
that still affects Neutron so that projects that depend on it can do so
more reliably, and yet innovate more independently. If that means
revisiting the Neutron's mission statement, we can discuss that too.

I am sure this hardly covers all the questions you may have at this point,
but I would like to take the opportunity to start the conversation to see
where people stand. Whichever the outcome, I think that we should strive
for decentralizing responsibilities as much as we can in order to be
scalable as a project and I think that the current arrangement prevents us
from doing that.

Thanks for reading.

Armando

[1]
https://github.com/openstack/governance/blob/april-2015-elections/reference/projects.yaml#L141
[2]
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2000
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2015-11-30 Thread Tan, Lin
Hi guys
I recently played around with the 'x-openstack-request-id' header but have a 
dumb question about how it works. At the beginning, I thought an action across 
different services would use the same request-id, but it looks like this is not 
true. 

First I read the spec: 
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id which said 
"This ID and the request ID of the other service will be logged at service 
boundaries". and I see cinder/neutron/glance will attach its context's 
request-id as the value of "x-openstack-request-id" header to its response 
while nova use X-Compute-Request-Id. This is easy to understand. So It looks 
like each service should generate its own request-id and attach to its 
response, that's all.
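
To make that concrete, here is a toy example of what a caller sees; the
endpoints and token are placeholders, not real values:

import requests

token = 'REPLACE_WITH_A_KEYSTONE_TOKEN'

# glance (like cinder/neutron) returns x-openstack-request-id
resp = requests.get('http://cloud:9292/v2/images',
                    headers={'X-Auth-Token': token})
print('glance request-id: %s' % resp.headers.get('x-openstack-request-id'))

# nova returns x-compute-request-id instead
resp = requests.get('http://cloud:8774/v2/servers',
                    headers={'X-Auth-Token': token})
print('nova request-id:   %s' % resp.headers.get('x-compute-request-id'))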

But then I see glance reads 'X-Openstack-Request-ID' to generate the request-id, 
while cinder/neutron/nova read 'openstack.request_id' when used with keystone. 
That is, they try to reuse the request-id from keystone.

This totally confused me. It would be great if you could correct me or point me 
to some references. Thanks a lot

Best Regards,

Tan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread hao wang
Thanks Xing Yang, I have noticed this spec; I'm glad to see you
start this work.

2015-11-30 23:10 GMT+08:00 yang, xing :
> Hi Wang Hao,
>
> Here's a Cinder spec in review on replicating a group of volumes:
>
> https://review.openstack.org/#/c/229722/
>
> It is not an easy problem to solve.  Not that we should rush on this
> problem, but we should start thinking about how to solve this as some
> backends can only replicate a CG or a pool of volumes.
>
> Thanks,
> Xing
>
>
>
> On 11/30/15, 4:51 AM, "hao wang"  wrote:
>
>>Hi, Duncan
>>
>>2015-11-30 15:54 GMT+08:00 Duncan Thomas :
>>> Hi WangHao
>>>
>>> This was quite thoroughly discussed during the early discussions on
>>> replication. The general statement was 'not yet'. Getting any kind of
>>> workable replication API has proven to be very, very difficult to get right
>>> - we won't know for another full cycle whether we've actually gotten it
>>> somewhere near right, as operators start to deploy it. Piling more features
>>> in the replication API before a) it has been used by operators and b)
>>> storage vendors have implemented what we already have would IMO be a
>>> mistake.
>>
>>I agree with you. In my mind, using the replication we have is the first
>>thing we should do; improving it is the second thing, and then we will add
>>new features one by one, stably.
>>
>>> None of this means that more DR interfaces don't belong in cinder, just that
>>> getting them right, getting them universal and getting them useful is quite
>>> a hard problem, and not one we should be in a rush to solve. Particularly as
>>> DR and replication is still a niche area of cinder, and we still have major
>>> issues in our basic functionality.
>>
>>Yes, this convinces me about DR in Cinder, very clearly, thanks.
>>>
>>> On 30 November 2015 at 03:45, hao wang  wrote:

 Sean and Michal,

 In fact, there is a reason that I ask this question. Recently I have a
 confusion about if cinder should provide the ability of Disaster
 Recovery to storage resources, like volume. I mean we have volume
 replication v1&v2, but for DR, specially DR between two independent
 OpenStack sites(production and DR site), I feel we still need more
 features to support it, for example consistency group for replication,
 etc. I'm not sure if those features belong in Cinder or some new
 project for DR.

 BR
 WangHao

 2015-11-30 3:02 GMT+08:00 Sean McGinnis :
 > On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:
 >> On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:
 >> > Hi guys,
 >> >
 >> > I notice nova has a clarification of project scope:
 >> > http://docs.openstack.org/developer/nova/project_scope.html
 >> >
 >> > I want to find cinder's, but failed,  do you know where to find it?
 >> >
 >> > It's important to let developers know what feature should be
 >> > introduced into cinder and what shouldn't.
 >> >
 >> > BR
 >> > Wang Hao
 >>
 >> I believe the Nova team needed to formalize the scope to have an
 >> explanation for all the "this doesn't belong in Nova" comments on feature
 >> requests. Does Cinder suffer from similar problems? From my perspective
 >> it's not critically needed.
 >
 > I agree. I haven't seen a need for something like that with Cinder. Wang
 > Hao, is there a reason you feel you need that?
 >
 >
 >
 >

__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> --
>>> --
>>> Duncan Thomas
>>>
>>>
>>>_
>>>_
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org

Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread hao wang
Thanks very much, Anita, very useful information; I will check it.

2015-11-30 23:36 GMT+08:00 Anita Kuno :
> On 11/29/2015 02:02 PM, Sean McGinnis wrote:
>> On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:
>>> On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:
 Hi guys,

 I notice nova has a clarification of project scope:
 http://docs.openstack.org/developer/nova/project_scope.html

 I want to find cinder's, but failed,  do you know where to find it?

 It's important to let developers know what feature should be
 introduced into cinder and what shouldn't.

 BR
 Wang Hao
>>>
>>> I believe the Nova team needed to formalize the scope to have an explanation
>>> for all the "this doesn't belong in Nova" comments on feature requests.
>>> Does Cinder suffer from similar problems? From my perspective it's not
>>> critically needed.
>>
>> I agree. I haven't seen a need for something like that with Cinder. Wang
>> Hao, is there a reason you feel you need that?
>>
>
> For reference here is the Cinder mission statement:
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n273
>
> All projects listed in the governance repository reference/projects.yaml
> have a mission statement, I do encourage folks thinking about starting a
> project to look at the mission statements here first as there may
> already be an effort ongoing with which you can align your work.
>
> Thanks Wang Hao,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-30 Thread David_Paterson
 I don’t really understand the friction, it’s incredibly useful, especially for 
new users.

Use it or don’t use it; it’s up to you. Nowhere is it implied that the user is 
forced to configure tempest with the tool.

Without a tool like this it is complex for new users to get up and going with 
tempest.  It is also more difficult for other parties to integrate with 
Tempest; Rally had to write its own configuration tooling, for instance.

Thanks,
dp



From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: Monday, November 30, 2015 8:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [QA][Tempest] Use tempest-config for 
tempest-cli-improvements

Looking at the tool, it seems to me that it serves a combination of functions:
- provision test resources
- support for distribution specific / cloud specific overrides to default
- support for configuration override via CLI
- discovery of configuration

Test resource provisioning is something that I agree is useful to have.
In Mitaka we plan to separate the configuration of test resources out of 
tempest.conf, and to have a CLI tool to provision them [0]. We could re-use code from 
this tool to achieve that.

Support for distribution specific / cloud specific overrides is also something 
that is useful. In clouds where I control the deployment process I inject extra 
configs in tempest.conf based on the deployment options. In clouds where I 
don't, I maintain a partial tempest.conf with the list of options which I 
expect I will have to modify to match the target cloud.

That's pretty easy to achieve though - simply append the extra configs to the 
bottom of tempest.conf and it's done. Duplicate configuration options are not 
an issue, the last one wins. Still we could support specifying a number of 
configuration files to the not-yet-implemented "tempest run" command.
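
As a tiny illustration of that append-style override (the path, section and
value here are made up; the point is only that a later occurrence of an option
wins when the file is parsed):

# Minimal sketch: append a partial, cloud-specific override to an
# existing tempest.conf rather than merging the two files.
OVERRIDES = """
[compute]
image_ref = 11111111-2222-3333-4444-555555555555
"""

with open('etc/tempest.conf', 'a') as conf:
    conf.write(OVERRIDES)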

Support for configuration override via the CLI is something that I can see 
being useful during development or troubleshooting; we could support that as 
options of the not-yet-implemented "tempest run" command.

The last point is discovery - I believe we should use that only as we use it 
today in the gate - i.e. fail fast if the generated configuration does not 
match what can be discovered from the cloud.

So I would not take the script as-is into tempest, but I think many of the 
functions it implements can fit into tempest - and some are already there.

andrea

[0] https://review.openstack.org/#/c/173334/

On Mon, Nov 30, 2015 at 7:39 AM Masayuki Igawa
<masayuki.ig...@gmail.com> wrote:
Hi,

I agree with Ken'ichi's opinion, basically. Tempest users should know "what do 
we test for?" and we shouldn't discover values that we test for automatically.
If a user thinks "My current cloud is good. This is what I expect.", the 
discovering function could work. But I suppose many users would use 
tempest-config-generator for a new cloud of theirs. So I feel the tool's users 
could easily misunderstand.

But I also think that tempest users don't need to know all of the 
configurations.
So, how about something like introducing "a configuration wizard" for tempest 
configuration?
This is just an idea, though...


Anyway, if you have the motivation to introduce tempest-config, how about 
writing a spec for the feature for a concrete discussion?
(I think we should have agreement on the target issues, users, solutions, 
etc.)

Best Regards,
-- Masayuki Igawa

On Sun, Nov 29, 2015 at 22:07, Yair Fried <yfr...@redhat.com> wrote:
Hi,
I agree with Jordan.
We don't have to use the tool as part of the gate. Its target audience is 
people, not CI systems - more specifically, new users.
However, we could add a gate (or a few) for the tool that makes sure a proper 
conf file is generated. It doesn't have to run the tests, just compare the 
output of the script to the conf generated by devstack.

Re Rally - I believe the best place for a tempest configuration script is within 
tempest. That said, if the Tempest community doesn't want this tool, we'll have 
to settle for the Rally solution.

Regards
Yair

On Fri, Nov 27, 2015 at 11:31 AM, Jordan Pittier 
<jordan.pitt...@scality.com> wrote:
Hi,
I think this script is valuable to some users: Rally and Red Hat expressed 
their needs, and they seem clear.

This tool is far from bulletproof and, if used blindly or in case of bugs, 
Tempest could be misconfigured. So, we could have this tool inside the Tempest 
repository (in the tools/) but not use it at all for the Gate.

I am not sure I fully understand the resistance to this; if we don't use this 
config generator for the gate, what's the risk?

Jordan

On Fri, Nov 27, 2015 at 8:05 AM, Ken'ichi Ohmichi 
<ken1ohmi...@gmail.com> wrote:
2015-11-27 15:40 GMT+09:00 Daniel Mellado 
<daniel.mellado...@ieee.org>:
> I still do think that even if there are some issues addressed to the
> feature, such as skipping tests 

Re: [openstack-dev] Role in Congress

2015-11-30 Thread Adam Young

On 11/28/2015 05:03 AM, zhangyali (D) wrote:


Hi Tim and All,

I remember there was a topic named “role assignment for service users” 
at the Tokyo Summit, but I have not heard any news about this topic since. 
Could anyone share some information with me? I think it is 
vital for my design of the Congress UI in Horizon. Thanks a lot!




There were a few places where service users needed "admin", which we need 
to root out.



What else are you thinking about?



Best Regards,

Yali



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Devananda van der Veen
On Mon, Nov 30, 2015 at 3:07 PM, Zane Bitter  wrote:

> On 30/11/15 12:51, Ruby Loo wrote:
>
>>
>>
>> On 30 November 2015 at 10:19, Derek Higgins <der...@redhat.com> wrote:
>>
>> Hi All,
>>
>>  A few months ago tripleo switched from its devtest-based CI to one
>> that was based on instack. Before doing this we anticipated
>> disruption in the ci jobs and removed them from non tripleo projects.
>>
>>  We'd like to investigate adding it back to heat and ironic as
>> these are the two projects where we find our ci provides the most
>> value. But we can only do this if the results from the job are
>> treated as voting.
>>
>>
>> What does this mean? That the tripleo job could vote and do a -1 and
>> block ironic's gate?
>>
>>
>>  In the past most of the non tripleo projects tended to ignore
>> the results from the tripleo job as it wasn't unusual for the job to be
>> broken for days at a time. The thing is, ignoring the results of the
>> job is the reason (the majority of the time) it was broken in the
>> first place.
>>  To decrease the number of breakages we are now no longer
>> running master code for everything (for the non tripleo projects we
>> bump the versions we use periodically if they are working). I
>> believe with this model the CI jobs we run have become a lot more
>> reliable, there are still breakages but far less frequently.
>>
>> What I proposing is we add at least one of our tripleo jobs back to
>> both heat and ironic (and other projects associated with them e.g.
>> clients, ironicinspector etc..), tripleo will switch to running
>> latest master of those repositories and the cores approving on those
>> projects should wait for a passing CI jobs before hitting approve.
>> So how do people feel about doing this? can we give it a go? A
>> couple of people have already expressed an interest in doing this
>> but I'd like to make sure we're all in agreement before switching it
>> on.
>>
>> This seems to indicate that the tripleo jobs are non-voting, or at least
>> won't block the gate -- so I'm fine with adding tripleo jobs to ironic.
>> But if you want cores to wait/make sure they pass, then shouldn't they
>> be voting? (Guess I'm a bit confused.)
>>
>
> +1
>
> I don't think it hurts to turn it on, but tbh I'm uncomfortable with the
> mental overhead of a non-voting job that I have to manually treat as a
> voting job. If it's stable enough to make it a voting job, I'd prefer we
> just make it voting. And if it's not then I'd like to see it be made stable
> enough to be a voting job and then make it voting.


This is roughly where I sit as well -- if it's non-voting, experience tells
me that it will largely be ignored, and as such, isn't a good use of
resources.

I haven't looked at tripleo or tripleoci in a while, so I wont assume that
my recollection of the CI jobs bears any resemblance to what exists today.
Could you explain what areas of ironic (or its subprojects) will be covered
by these tests?  If they are already covered by existing tests, then I
don't see the benefit of adding another job; conversely, if this is testing
areas we don't cover today, then there's probably value in running
tripleoci in a voting fashion for now and then moving that coverage into
ironic's project testing.

-Deva
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Zane Bitter

On 30/11/15 12:51, Ruby Loo wrote:



On 30 November 2015 at 10:19, Derek Higgins <der...@redhat.com> wrote:

Hi All,

 A few months ago tripleo switched from its devtest-based CI to one
that was based on instack. Before doing this we anticipated
disruption in the ci jobs and removed them from non tripleo projects.

 We'd like to investigate adding it back to heat and ironic as
these are the two projects where we find our ci provides the most
value. But we can only do this if the results from the job are
treated as voting.


What does this mean? That the tripleo job could vote and do a -1 and
block ironic's gate?


 In the past most of the non tripleo projects tended to ignore
the results from the tripleo job as it wasn't unusual for the job to be
broken for days at a time. The thing is, ignoring the results of the
job is the reason (the majority of the time) it was broken in the
first place.
 To decrease the number of breakages we are now no longer
running master code for everything (for the non tripleo projects we
bump the versions we use periodically if they are working). I
believe with this model the CI jobs we run have become a lot more
reliable, there are still breakages but far less frequently.

What I'm proposing is we add at least one of our tripleo jobs back to
both heat and ironic (and other projects associated with them e.g.
clients, ironic-inspector etc.), tripleo will switch to running the
latest master of those repositories, and the cores approving on those
projects should wait for a passing CI job before hitting approve.
So how do people feel about doing this? Can we give it a go? A
couple of people have already expressed an interest in doing this,
but I'd like to make sure we're all in agreement before switching it on.

This seems to indicate that the tripleo jobs are non-voting, or at least
won't block the gate -- so I'm fine with adding tripleo jobs to ironic.
But if you want cores to wait/make sure they pass, then shouldn't they
be voting? (Guess I'm a bit confused.)


+1

I don't think it hurts to turn it on, but tbh I'm uncomfortable with the 
mental overhead of a non-voting job that I have to manually treat as a 
voting job. If it's stable enough to make it a voting job, I'd prefer we 
just make it voting. And if it's not then I'd like to see it be made 
stable enough to be a voting job and then make it voting.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] CentOS7 Merging Plan

2015-11-30 Thread Dmitry Teselkin
Hello,

We've almost got a green BVT on the custom CentOS7 ISO, and it seems it's
time to discuss the plan for how this feature could be merged.

This is not the only feature in the queue. Unfortunately, almost any
other feature will be broken if merged after CentOS7, so it was decided
to merge our changes last.

This is not an official announcement, rather a notification letter to
start a discussion and find any objections.

So the plan is:

* merge all features that are going to be merged before Thursday, Dec 3
* call for a merge freeze starting at Dec 3, due Dec 7
* rebase all CentOS7-related patchsets and resolve any conflicts with
  merged code (Dec 3)
* build a custom ISO, pass BVT (and other tests) (Dec 3)
* merge all CentOS7-related patchsets at once (Dec 4)
* build an ISO and pass BVT again (Dec 4)
* run additional tests during the weekend (Dec 5, 6) to be sure that the
  ISO is good enough

According to this plan, on Monday, Dec 7 we should either get a
CentOS7-based ISO, or revert all incompatible changes.

-- 
Thanks,
Dmitry Teselkin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2015-11-30 13:22:23 -0800:
> On 11/30/2015 02:15 PM, Sean Dague wrote:
> > On 11/30/2015 03:01 PM, Robert Collins wrote:
> >> On 1 December 2015 at 08:37, Ben Nemec  wrote:
> >>> On 11/30/2015 12:42 PM, Joshua Harlow wrote:
>  Hi all,
> 
>  I just wanted to bring up an issue, possible solution and get feedback
>  on it from folks because it seems to be an on-going problem that shows
>  up not when an application is initially deployed but as on-going
>  operation and running of that application proceeds (ie after running for
>  a period of time).
> 
>  The gist of the problem is the following:
> 
>  A <<your favorite openstack project>> has a need to ensure that no
>  application on the same machine can manipulate a given resource on that
>  same machine, so it uses the lock file pattern (acquire a *local* lock
>  file for that resource, manipulate that resource, release that lock
>  file) to do actions on that resource in a safe manner (note this does
>  not ensure safety outside of that machine, lock files are *not*
>  distributed locks).
> 
>  The api that we expose from oslo is typically accessed via the following:
> 
> oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
>  external=False, lock_path=None, semaphores=None, delay=0.01)
> 
>  or via its underlying library (that I extracted from oslo.concurrency
>  and have improved to add more usefulness) @
>  http://fasteners.readthedocs.org/
> 
>  The issue though for <<your favorite openstack project>> is that each of
>  these projects now typically has a large amount of lock files that exist
>  or have existed and no easy way to determine when those lock files can
>  be deleted (afaik no? periodic task exists in said projects to clean up
>  lock files, or to delete them when they are no longer in use...) so what
>  happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
>  appear and there is no simple solution to clean lock files up (since
>  oslo.concurrency is really not the right layer to know when a lock can
>  or can not be deleted, only the application knows that...)
> 
>  So then we get a few creative solutions like the following:
> 
>  - https://review.openstack.org/#/c/241663/
>  - https://review.openstack.org/#/c/239678/
>  - (and others?)
> 
>  So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
> 
>  Another idea that I have been proposing also is to use offset locks.
> 
>  This would allow for not creating X lock files, but create a *single*
>  lock file per project and use offsets into it as the way to lock. For
>  example nova could/would create a 1MB (or larger/smaller) *empty* file
>  for locks, that would allow for 1,048,576 locks to be used at the same
>  time, which honestly should be way more than enough, and then there
>  would not need to be any lock cleanup at all... Is there any reason this
>  wasn't initially done back when this lock file code was created?
>  (https://github.com/harlowja/fasteners/pull/10 adds this functionality
>  to the underlying library if people want to look it over)
> >>>
> >>> I think the main reason was that even with a million locks available,
> >>> you'd have to find a way to hash the lock names to offsets in the file,
> >>> and a million isn't a very large collision space for that.  Having two
> >>> differently named locks that hashed to the same offset would lead to
> >>> incredibly confusing bugs.
> >>>
> >>> We could switch to requiring the projects to provide the offsets instead
> >>> of hashing a string value, but that's just pushing the collision problem
> >>> off onto every project that uses us.
> >>>
> >>> So that's the problem as I understand it, but where does that leave us
> >>> for solutions?  First, there's
> >>> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
> >>> which allows consumers to delete lock files when they're done with them.
> >>>  Of course, in that case the onus is on the caller to make sure the lock
> >>> couldn't possibly be in use anymore.
> >>>
> >>> Second, is this actually a problem?  Modern filesystems have absurdly
> >>> large limits on the number of files in a directory, so it's highly
> >>> unlikely we would ever exhaust that, and we're creating all zero byte
> >>> files so there shouldn't be a significant space impact either.  In the
> >>> past I believe our recommendation has been to simply create a cleanup
> >>> job that runs on boot, before any of the OpenStack services start, that
> >>> deletes all of the lock files.  At that point you know it's safe to
> >>> delete them, and it prevents your lock file directory from growing 
> >>> forever.
> >>
> >> Not that high - ext3 (still the default for nova ephemeral
> >> partitions!) has a limit of 64k in one directory.

Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Steven Hardy
On Mon, Nov 30, 2015 at 12:51:53PM -0500, Ruby Loo wrote:
>On 30 November 2015 at 10:19, Derek Higgins  wrote:
> 
>  Hi All,
> 
>  A few months ago tripleo switched from its devtest-based CI to one that
>  was based on instack. Before doing this we anticipated disruption in the
>  ci jobs and removed them from non tripleo projects.
> 
>  We'd like to investigate adding it back to heat and ironic as
>  these are the two projects where we find our ci provides the most value.
>  But we can only do this if the results from the job are treated as
>  voting.
> 
>What does this mean? That the tripleo job could vote and do a -1 and block
>ironic's gate?

I believe it means they would be non-voting, but cores should be careful
not to ignore them, e.g. if a patch isn't passing tripleo CI it should be
investigated before merging said patch.

>  In the past most of the non tripleo projects tended to ignore the
>  results from the tripleo job as it wasn't unusual for the job to be broken
>  for days at a time. The thing is, ignoring the results of the job is the
>  reason (the majority of the time) it was broken in the first place.
>  To decrease the number of breakages we are now no longer running
>  master code for everything (for the non tripleo projects we bump the
>  versions we use periodically if they are working). I believe with this
>  model the CI jobs we run have become a lot more reliable, there are
>  still breakages but far less frequently.
> 
>  What I'm proposing is we add at least one of our tripleo jobs back to both
>  heat and ironic (and other projects associated with them e.g. clients,
>  ironic-inspector etc.), tripleo will switch to running the latest master of
>  those repositories, and the cores approving on those projects should wait
>  for a passing CI job before hitting approve. So how do people feel
>  about doing this? Can we give it a go? A couple of people have already
>  expressed an interest in doing this, but I'd like to make sure we're all
>  in agreement before switching it on.
> 
>This seems to indicate that the tripleo jobs are non-voting, or at least
>won't block the gate -- so I'm fine with adding tripleo jobs to ironic.
>But if you want cores to wait/make sure they pass, then shouldn't they be
>voting? (Guess I'm a bit confused.)

The subtext here is that automated testing of OpenStack deployments is
hard, and TripleO CI sometimes experiences breakage for various reasons
including regressions in any one of the OpenStack projects it uses.

For example, TripleO CI has been broken for the last day or two due to a
nodepool regression - in this scenario it's probably best for Ironic and
Heat cores to maintain the ability to land patches, even if we may decide
it's unwise to land larger and/or more risky changes until they can be
validated against TripleO CI.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Steven Hardy
On Mon, Nov 30, 2015 at 03:19:18PM +, Derek Higgins wrote:
> Hi All,
> 
> A few months ago tripleo switched from its devtest-based CI to one that was
> based on instack. Before doing this we anticipated disruption in the ci jobs
> and removed them from non tripleo projects.
> 
> We'd like to investigate adding it back to heat and ironic as these are
> the two projects where we find our ci provides the most value. But we can
> only do this if the results from the job are treated as voting.
> 
> In the past most of the non tripleo projects tended to ignore the
> results from the tripleo job as it wasn't unusual for the job to be broken for
> days at a time. The thing is, ignoring the results of the job is the reason
> (the majority of the time) it was broken in the first place.
> To decrease the number of breakages we are now no longer running master
> code for everything (for the non tripleo projects we bump the versions we
> use periodically if they are working). I believe with this model the CI jobs
> we run have become a lot more reliable, there are still breakages but far
> less frequently.
> 
> What I'm proposing is we add at least one of our tripleo jobs back to both
> heat and ironic (and other projects associated with them e.g. clients,
> ironic-inspector etc.), tripleo will switch to running the latest master of
> those repositories, and the cores approving on those projects should wait for
> a passing CI job before hitting approve. So how do people feel about doing
> this? Can we give it a go? A couple of people have already expressed an
> interest in doing this, but I'd like to make sure we're all in agreement
> before switching it on.

+1 - TripleO has quite frequently encountered heat related bugs in the
past, and it'd be good to catch those earlier if at all possible.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Ben Nemec wrote:

On 11/30/2015 02:15 PM, Sean Dague wrote:

On 11/30/2015 03:01 PM, Robert Collins wrote:

On 1 December 2015 at 08:37, Ben Nemec  wrote:

On 11/30/2015 12:42 PM, Joshua Harlow wrote:

Hi all,

I just wanted to bring up an issue, possible solution and get feedback
on it from folks because it seems to be an on-going problem that shows
up not when an application is initially deployed but as on-going
operation and running of that application proceeds (ie after running for
a period of time).

The gist of the problem is the following:

A <<your favorite openstack project>> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine, lock files are *not*
distributed locks).

The api that we expose from oslo is typically accessed via the following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
external=False, lock_path=None, semaphores=None, delay=0.01)

or via its underlying library (that I extracted from oslo.concurrency
and have improved to add more usefulness) @
http://fasteners.readthedocs.org/
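
(For concreteness, a minimal sketch of that pattern, using only the
synchronized() signature quoted above; the lock name, prefix and
lock_path here are made up for illustration, not taken from any real
project:)

from oslo_concurrency import lockutils

@lockutils.synchronized('volume-12345', lock_file_prefix='cinder-',
                        external=True, lock_path='/var/lib/cinder/tmp')
def manipulate_volume():
    # Only one process on this host can run this body at a time. The
    # lock file backing it is created on first use and, as discussed
    # below, is left behind after release.
    pass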

The issue though for <<your favorite openstack project>> is that each of
these projects now typically has a large amount of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (afaik no? periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or can not be deleted, only the application knows that...)

So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)

Another idea that I have been proposing also is to use offset locks.

This would allow for not creating X lock files, but create a *single*
lock file per project and use offsets into it as the way to lock. For
example nova could/would create a 1MB (or larger/smaller) *empty* file
for locks, that would allow for 1,048,576 locks to be used at the same
time, which honestly should be way more than enough, and then there
would not need to be any lock cleanup at all... Is there any reason this
wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality
to the underlying library if people want to look it over)
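
(A rough sketch of the offset idea with POSIX byte-range locks; the file
path is made up, and the name-to-offset mapping is left out because it is
exactly the collision question raised below:)

import fcntl
import os

LOCK_FILE = '/var/lib/nova/locks.img'  # hypothetical single per-project file

def run_with_offset_lock(offset, func):
    fd = os.open(LOCK_FILE, os.O_RDWR | os.O_CREAT)
    try:
        # Lock a single byte at the given offset; blocks until it is free.
        fcntl.lockf(fd, fcntl.LOCK_EX, 1, offset)
        try:
            return func()
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, 1, offset)
    finally:
        os.close(fd)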

I think the main reason was that even with a million locks available,
you'd have to find a way to hash the lock names to offsets in the file,
and a million isn't a very large collision space for that.  Having two
differently named locks that hashed to the same offset would lead to
incredibly confusing bugs.

We could switch to requiring the projects to provide the offsets instead
of hashing a string value, but that's just pushing the collision problem
off onto every project that uses us.
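
(To put a number on that: with 2**20 slots, the usual birthday estimate
gives roughly even odds of two distinct names colliding once you have on
the order of 1,200 lock names. An illustrative, not proposed, mapping:)

import hashlib

def name_to_offset(name, slots=2**20):
    # Stable hash of the lock name; distinct names can silently map to
    # the same offset, which is the confusing-bug case described above.
    return int(hashlib.sha256(name.encode('utf-8')).hexdigest(), 16) % slots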

So that's the problem as I understand it, but where does that leave us
for solutions?  First, there's
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
which allows consumers to delete lock files when they're done with them.
  Of course, in that case the onus is on the caller to make sure the lock
couldn't possibly be in use anymore.
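
(That helper, sketched; the argument names mirror synchronized() above
and may not match oslo.concurrency's current signature exactly:)

from oslo_concurrency import lockutils

# Only safe if no process can still want this lock.
lockutils.remove_external_lock_file('volume-12345',
                                    lock_file_prefix='cinder-',
                                    lock_path='/var/lib/cinder/tmp')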

Second, is this actually a problem?  Modern filesystems have absurdly
large limits on the number of files in a directory, so it's highly
unlikely we would ever exhaust that, and we're creating all zero byte
files so there shouldn't be a significant space impact either.  In the
past I believe our recommendation has been to simply create a cleanup
job that runs on boot, before any of the OpenStack services start, that
deletes all of the lock files.  At that point you know it's safe to
delete them, and it prevents your lock file directory from growing forever.
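
(Such a boot-time sweep is only a few lines; the directory is made up,
and it must run before any OpenStack service starts:)

import glob
import os

LOCK_DIR = '/var/lib/openstack/locks'  # hypothetical shared lock_path

for path in glob.glob(os.path.join(LOCK_DIR, '*')):
    try:
        os.unlink(path)
    except OSError:
        pass  # already gone or not a plain file; fine to skip at boot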

Not that high - ext3 (still the default for nova ephemeral
partitions!) has a limit of 64k in one directory.

That said, I don't disagree - my thinking is that we should advise
putting such files on a tmpfs.

So, I think the issue really is that the named external locks were
originally thought to be handling some pretty sensitive critical
sections. Both cinder / nova have less than 20 such named locks.

Cinder uses a parametrized version for all volume operations -
https://github.com/openstack/cinder/blob/7fb767f2d652f070a20fd70d92585d61e56f3a50/cinder/volume/manager.py#L143


Nova also does something similar in image cache
https://github.com/openstack/nova/blob/1734ce7101982d

Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steven Hardy
On Tue, Dec 01, 2015 at 08:47:20AM +1300, Steve Baker wrote:
> On 30/11/15 23:21, Steven Hardy wrote:
> >On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:
> >>I'm tasked to implement a command that shows error messages when a
> >>deployment has failed. I have a vague memory of having seen scripts
> >>that do something like this, if that exists, can somebody point me in
> >>the right direction?
> >I wrote a super simple script and put it in a blog post a while back:
> >
> >http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html
> >
> >All it does is find the failed SoftwareDeployment resources, then do heat
> >deployment-show on the resource, so you can see the stderr associated with
> >the failure.
> >
> >Having tripleoclient do that by default would be useful.
> >
> >>Any opinions on what that should do, specifically? Traverse failed
> >>resources to find error messages, I assume. Anything else?
> >Yeah, but I think for this to be useful, we need to go a bit deeper than
> >just showing the resource error - there are a number of typical failure
> >modes, and I end up repeating the same steps to debug every time.
> >
> >1. SoftwareDeployment failed (mentioned above).  Every time, you need to
> >see the name of the SoftwareDeployment which failed, figure out if it
> >failed on one or all of the servers, then look at the stderr for clues.
> >
> >2. A server failed to build (OS::Nova::Server resource is FAILED), here we
> >need to check both nova and ironic, looking first to see if ironic has the
> >node(s) in the wrong state for scheduling (e.g. nova gave us a no valid
> >host error), and then if they are OK in ironic, do nova show on the failed
> >host to see the reason nova gives us for it failing to go ACTIVE.
> >
> >3. A stack timeout happened.  IIRC when this happens, we currently fail
> >with an obscure keystone related backtrace due to the token expiring.  We
> >should instead catch this error and show the heat stack status_reason,
> >which should say clearly the stack timed out.
> >
> >If we could just make these three cases really clear and easy to debug, I
> >think things would be much better (IME the above are a high proportion of
> >all failures), but I'm sure folks can come up with other ideas to add to
> >the list.
> >
> I'm actually drafting a spec which includes a command which does this. I
> hope to submit it soon, but here is the current state of that command's
> description:
> 
> Diagnosing resources in a FAILED state
> --------------------------------------
> 
> One command will be implemented:
> - openstack overcloud failed list
> 
> This will print a yaml tree showing the hierarchy of nested stacks until
> it gets to the actual failed resource, then it will show information
> regarding the failure. For most resource types this information will be
> the status_reason, but for software-deployment resources the
> deploy_stdout, deploy_stderr and deploy_status_code will be printed.
> 
> In addition to this stand-alone command, this output will also be
> printed when an ``openstack overcloud deploy`` or ``openstack overcloud
> update`` command results in a stack in a FAILED state.
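
(To make the quoted proposal concrete, a rough sketch of the nested-stack
walk it implies, assuming python-heatclient's resource API; the real spec
and its yaml output format may differ:)

def print_failed(heat, stack_id, depth=0):
    # Recurse through nested stacks until the failed leaf is reached,
    # then show why it failed.
    for res in heat.resources.list(stack_id):
        if not res.resource_status.endswith('FAILED'):
            continue
        print('%s%s: %s' % ('  ' * depth, res.resource_name,
                            res.resource_status_reason))
        if (res.resource_type == 'OS::Heat::Stack' or
                res.resource_type.endswith('.yaml')):
            print_failed(heat, res.physical_resource_id, depth + 1)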

This sounds great!

Another piece of low-hanging-fruit in the meantime is we should actually
print the stack_status_reason on failure:

https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/overcloud_deploy.py#L280

The DeploymentError raised could include the stack_status_reason vs the
unqualified "Heat Stack create failed".
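
(Roughly, at the point linked above; the names follow the surrounding
tripleoclient code but this is a sketch, not the actual patch:)

stack = clients.orchestration.stacks.get(stack_id)
if stack.stack_status in ('CREATE_FAILED', 'UPDATE_FAILED'):
    # Surface heat's own explanation instead of a bare failure message.
    raise exceptions.DeploymentError(
        'Heat Stack create failed: %s' % stack.stack_status_reason)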

I guess your event listing partially overlaps with this, as you can now
derive the stack_status_reason from the last event, but it'd still be good
to output it loudly so folks can see more quickly when things such as
timeouts happen that are clearly displayed in the top-level stack status.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Steve Baker

On 01/12/15 04:19, Derek Higgins wrote:

Hi All,

A few months ago tripleo switched from its devtest-based CI to one that 
was based on instack. Before doing this we anticipated disruption in 
the ci jobs and removed them from non tripleo projects.


We'd like to investigate adding it back to heat and ironic as 
these are the two projects where we find our ci provides the most 
value. But we can only do this if the results from the job are treated 
as voting.


In the past most of the non tripleo projects tended to ignore the 
results from the tripleo job as it wasn't unusual for the job to be 
broken for days at a time. The thing is, ignoring the results of the 
job is the reason (the majority of the time) it was broken in the 
first place.
To decrease the number of breakages we are now no longer running 
master code for everything (for the non tripleo projects we bump the 
versions we use periodically if they are working). I believe with this 
model the CI jobs we run have become a lot more reliable, there are 
still breakages but far less frequently.


What I'm proposing is we add at least one of our tripleo jobs back to 
both heat and ironic (and other projects associated with them e.g. 
clients, ironic-inspector etc.), tripleo will switch to running the latest 
master of those repositories, and the cores approving on those projects 
should wait for a passing CI job before hitting approve. So how do 
people feel about doing this? Can we give it a go? A couple of people 
have already expressed an interest in doing this, but I'd like to make 
sure we're all in agreement before switching it on.


+1 for heat from me. It sounds like the job won't be voting, but heat 
cores should be strongly encouraged to treat it as such.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Ben Nemec
On 11/30/2015 02:15 PM, Sean Dague wrote:
> On 11/30/2015 03:01 PM, Robert Collins wrote:
>> On 1 December 2015 at 08:37, Ben Nemec  wrote:
>>> On 11/30/2015 12:42 PM, Joshua Harlow wrote:
 Hi all,

 I just wanted to bring up an issue, possible solution and get feedback
 on it from folks because it seems to be an on-going problem that shows
 up not when an application is initially deployed but as on-going
 operation and running of that application proceeds (ie after running for
 a period of time).

 The gist of the problem is the following:

 A <<your favorite openstack project>> has a need to ensure that no
 application on the same machine can manipulate a given resource on that
 same machine, so it uses the lock file pattern (acquire a *local* lock
 file for that resource, manipulate that resource, release that lock
 file) to do actions on that resource in a safe manner (note this does
 not ensure safety outside of that machine, lock files are *not*
 distributed locks).

 The api that we expose from oslo is typically accessed via the following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
 external=False, lock_path=None, semaphores=None, delay=0.01)

 or via its underlying library (that I extracted from oslo.concurrency
 and have improved to add more usefulness) @
 http://fasteners.readthedocs.org/

 The issue though for <<your favorite openstack project>> is that each of
 these projects now typically has a large amount of lock files that exist
 or have existed and no easy way to determine when those lock files can
 be deleted (afaik no? periodic task exists in said projects to clean up
 lock files, or to delete them when they are no longer in use...) so what
 happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
 appear and there is no simple solution to clean lock files up (since
 oslo.concurrency is really not the right layer to know when a lock can
 or can not be deleted, only the application knows that...)

 So then we get a few creative solutions like the following:

 - https://review.openstack.org/#/c/241663/
 - https://review.openstack.org/#/c/239678/
 - (and others?)

 So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)

 Another idea that I have been proposing also is to use offset locks.

 This would allow for not creating X lock files, but create a *single*
 lock file per project and use offsets into it as the way to lock. For
 example nova could/would create a 1MB (or larger/smaller) *empty* file
 for locks, that would allow for 1,048,576 locks to be used at the same
 time, which honestly should be way more than enough, and then there
 would not need to be any lock cleanup at all... Is there any reason this
 wasn't initially done back when this lock file code was created?
 (https://github.com/harlowja/fasteners/pull/10 adds this functionality
 to the underlying library if people want to look it over)
>>>
>>> I think the main reason was that even with a million locks available,
>>> you'd have to find a way to hash the lock names to offsets in the file,
>>> and a million isn't a very large collision space for that.  Having two
>>> differently named locks that hashed to the same offset would lead to
>>> incredibly confusing bugs.
>>>
>>> We could switch to requiring the projects to provide the offsets instead
>>> of hashing a string value, but that's just pushing the collision problem
>>> off onto every project that uses us.
>>>
>>> So that's the problem as I understand it, but where does that leave us
>>> for solutions?  First, there's
>>> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
>>> which allows consumers to delete lock files when they're done with them.
>>>  Of course, in that case the onus is on the caller to make sure the lock
>>> couldn't possibly be in use anymore.
>>>
>>> Second, is this actually a problem?  Modern filesystems have absurdly
>>> large limits on the number of files in a directory, so it's highly
>>> unlikely we would ever exhaust that, and we're creating all zero byte
>>> files so there shouldn't be a significant space impact either.  In the
>>> past I believe our recommendation has been to simply create a cleanup
>>> job that runs on boot, before any of the OpenStack services start, that
>>> deletes all of the lock files.  At that point you know it's safe to
>>> delete them, and it prevents your lock file directory from growing forever.
>>
>> Not that high - ext3 (still the default for nova ephemeral
>> partitions!) has a limit of 64k in one directory.
>>
>> That said, I don't disagree - my thinking is that we should advise
>> putting such files on a tmpfs.
> 
> So, I think the issue really is that the named external locks were
> originally thought to be handling some pretty sensitive critical sections.

Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues

2015-11-30 Thread Jeremy Stanley
On 2015-11-30 19:54:33 + (+), Paul Michali wrote:
> Check out https://freenode.net/irc_servers.shtml which lists the servers. I
> was using irc.freenode.net. Switched to weber.freenode.net and able to
> connect.
> 
> (now everyone will hop on that one and I'll have to pick another :)

Thanks--I (perhaps incorrectly?) assumed that only those they
maintain in the chat.freenode.net DNS round-robin are considered
active.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Ben Nemec
On 11/30/2015 01:57 PM, Joshua Harlow wrote:
> Ben Nemec wrote:
>> On 11/30/2015 12:42 PM, Joshua Harlow wrote:
>>> Hi all,
>>>
>>> I just wanted to bring up an issue, possible solution and get feedback
>>> on it from folks because it seems to be an on-going problem that shows
>>> up not when an application is initially deployed but as on-going
>>> operation and running of that application proceeds (ie after running for
>>> a period of time).
>>>
>>> The gist of the problem is the following:
>>>
>>> A <<your favorite openstack project>> has a need to ensure that no
>>> application on the same machine can manipulate a given resource on that
>>> same machine, so it uses the lock file pattern (acquire a *local* lock
>>> file for that resource, manipulate that resource, release that lock
>>> file) to do actions on that resource in a safe manner (note this does
>>> not ensure safety outside of that machine, lock files are *not*
>>> distributed locks).
>>>
>>> The api that we expose from oslo is typically accessed via the following:
>>>
>>> oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
>>> external=False, lock_path=None, semaphores=None, delay=0.01)
>>>
>>> or via its underlying library (that I extracted from oslo.concurrency
>>> and have improved to add more usefulness) @
>>> http://fasteners.readthedocs.org/
>>>
>>> The issue though for <<your favorite openstack project>> is that each of
>>> these projects now typically has a large amount of lock files that exist
>>> or have existed and no easy way to determine when those lock files can
>>> be deleted (afaik no? periodic task exists in said projects to clean up
>>> lock files, or to delete them when they are no longer in use...) so what
>>> happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
>>> appear and there is no simple solution to clean lock files up (since
>>> oslo.concurrency is really not the right layer to know when a lock can
>>> or can not be deleted, only the application knows that...)
>>>
>>> So then we get a few creative solutions like the following:
>>>
>>> - https://review.openstack.org/#/c/241663/
>>> - https://review.openstack.org/#/c/239678/
>>> - (and others?)
>>>
>>> So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
>>>
>>> Another idea that I have been proposing also is to use offset locks.
>>>
>>> This would allow for not creating X lock files, but create a *single*
>>> lock file per project and use offsets into it as the way to lock. For
>>> example nova could/would create a 1MB (or larger/smaller) *empty* file
>>> for locks, that would allow for 1,048,576 locks to be used at the same
>>> time, which honestly should be way more than enough, and then there
>>> would not need to be any lock cleanup at all... Is there any reason this
>>> wasn't initially done back when this lock file code was created?
>>> (https://github.com/harlowja/fasteners/pull/10 adds this functionality
>>> to the underlying library if people want to look it over)
>>
>> I think the main reason was that even with a million locks available,
>> you'd have to find a way to hash the lock names to offsets in the file,
>> and a million isn't a very large collision space for that.  Having two
>> differently named locks that hashed to the same offset would lead to
>> incredibly confusing bugs.
>>
>> We could switch to requiring the projects to provide the offsets instead
>> of hashing a string value, but that's just pushing the collision problem
>> off onto every project that uses us.
>>
>> So that's the problem as I understand it, but where does that leave us
>> for solutions?  First, there's
>> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
>> which allows consumers to delete lock files when they're done with them.
>>   Of course, in that case the onus is on the caller to make sure the lock
>> couldn't possibly be in use anymore.
> 
> Ya, I wonder how many folks are actually doing this, because the exposed 
> API of @synchronized doesn't seem to tell you what file to even delete in 
> the first place :-/ perhaps we should make that more accessible so that 
> people/consumers of that code could know what to delete...

I'm not opposed to allowing users to clean up lock files, although I
think the docstrings for the methods should be very clear that it isn't
strictly necessary and it must be done carefully to avoid deleting
in-use files (the existing docstring is actually insufficient IMHO, but
I'm pretty sure I reviewed it when it went in so I have no one else to
blame ;-).

> 
>>
>> Second, is this actually a problem?  Modern filesystems have absurdly
>> large limits on the number of files in a directory, so it's highly
>> unlikely we would ever exhaust that, and we're creating all zero byte
>> files so there shouldn't be a significant space impact either.  In the
>> past I believe our recommendation has been to simply create a cleanup
>> job that runs on boot, before any of the OpenStack services start,
>> that deletes all of the lock files.

Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Sean Dague wrote:

On 11/30/2015 03:01 PM, Robert Collins wrote:

On 1 December 2015 at 08:37, Ben Nemec  wrote:

On 11/30/2015 12:42 PM, Joshua Harlow wrote:

Hi all,

I just wanted to bring up an issue, possible solution and get feedback
on it from folks because it seems to be an on-going problem that shows
up not when an application is initially deployed but as on-going
operation and running of that application proceeds (ie after running for
a period of time).

The gist of the problem is the following:

A <<your favorite openstack project>> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine, lock files are *not*
distributed locks).

The api that we expose from oslo is typically accessed via the following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
external=False, lock_path=None, semaphores=None, delay=0.01)

or via its underlying library (that I extracted from oslo.concurrency
and have improved to add more usefulness) @
http://fasteners.readthedocs.org/

The issue though for <<your favorite openstack project>> is that each of
these projects now typically has a large amount of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (afaik no? periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or can not be deleted, only the application knows that...)

So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)

Another idea that I have been proposing also is to use offset locks.

This would allow for not creating X lock files, but create a *single*
lock file per project and use offsets into it as the way to lock. For
example nova could/would create a 1MB (or larger/smaller) *empty* file
for locks, that would allow for 1,048,576 locks to be used at the same
time, which honestly should be way more than enough, and then there
would not need to be any lock cleanup at all... Is there any reason this
wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality
to the underlying library if people want to look it over)

I think the main reason was that even with a million locks available,
you'd have to find a way to hash the lock names to offsets in the file,
and a million isn't a very large collision space for that.  Having two
differently named locks that hashed to the same offset would lead to
incredibly confusing bugs.

We could switch to requiring the projects to provide the offsets instead
of hashing a string value, but that's just pushing the collision problem
off onto every project that uses us.

So that's the problem as I understand it, but where does that leave us
for solutions?  First, there's
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
which allows consumers to delete lock files when they're done with them.
  Of course, in that case the onus is on the caller to make sure the lock
couldn't possibly be in use anymore.

Second, is this actually a problem?  Modern filesystems have absurdly
large limits on the number of files in a directory, so it's highly
unlikely we would ever exhaust that, and we're creating all zero byte
files so there shouldn't be a significant space impact either.  In the
past I believe our recommendation has been to simply create a cleanup
job that runs on boot, before any of the OpenStack services start, that
deletes all of the lock files.  At that point you know it's safe to
delete them, and it prevents your lock file directory from growing forever.

Not that high - ext3 (still the default for nova ephemeral
partitions!) has a limit of 64k in one directory.

That said, I don't disagree - my thinking is that we should advise
putting such files on a tmpfs.


So, I think the issue really is that the named external locks were
originally thought to be handling some pretty sensitive critical
sections. Both cinder / nova have less than 20 such named locks.

Cinder uses a parametrized version for all volume operations -
https://github.com/openstack/cinder/blob/7fb767f2d652f070a20fd70d92585d61e56f3a50/cinder/volume/manager.py#L143


Nova also does something similar in image cache
https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/virt/libv

Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2015-11-30 10:42:53 -0800:

Hi all,

I just wanted to bring up an issue, possible solution and get feedback
on it from folks because it seems to be an on-going problem that shows
up not when an application is initially deployed but as on-going
operation and running of that application proceeds (ie after running for
a period of time).

The gist of the problem is the following:

A <<your favorite openstack project>> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine, lock files are *not*
distributed locks).

The api that we expose from oslo is typically accessed via the following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
external=False, lock_path=None, semaphores=None, delay=0.01)

or via its underlying library (that I extracted from oslo.concurrency
and have improved to add more usefulness) @
http://fasteners.readthedocs.org/

The issue though for <<your favorite openstack project>> is that each of
these projects now typically has a large amount of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (afaik no? periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or can not be deleted, only the application knows that...)

So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)

Another idea that I have been proposing also is to use offset locks.

This would allow for not creating X lock files, but create a *single*
lock file per project and use offsets into it as the way to lock. For
example nova could/would create a 1MB (or larger/smaller) *empty* file
for locks, that would allow for 1,048,576 locks to be used at the same
time, which honestly should be way more than enough, and then there
would not need to be any lock cleanup at all... Is there any reason this
wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality
to the underlying library if people want to look it over)


This is really complicated, and basically just makes the directory of
lock files _look_ clean. But it still leaves each offset stale, and has
to be cleaned anyway.


What do you mean here (out of curiosity), each offset being stale? The file 
would basically never change size after startup (pick a large enough 
number, 10 million, 1 trillion billion...) and use it appropriately from 
there on out...




Fasteners already has process locks that use fcntl/flock.

These locks provide enough to allow you to infer things about the owner
of the lock file. If there's no process still holding the exclusive lock
when you try to lock it, then YOU own it, and thus control the resource.


Well, not really: python doesn't expose the ability to introspect who has 
the handle, afaik. I tried to look into that, and it looks like fcntl 
(the C api) might have a way to get it, but you can't really introspect 
that without, as you stated, acquiring the lock yourself... I can try to 
recall more of this investigation from when I was trying to add an 
@owner_pid property onto fasteners' interprocess lock class, but from 
memory the exposed API isn't there in python.




A cron job which tries to flock anything older than ${REASONABLE_TIME}
and deletes them seems fine. Whatever process was trying to interact
with the resource is gone at that point.


Yes, or a periodic thread in the application that can do this in a safe 
manner (using its ability to know exactly what its own app's internals 
are doing...)
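
(Sketching that cleanup, since it is short; the directory and threshold
are made up, and the non-blocking flock is the liveness test Clint
describes:)

import fcntl
import os
import time

LOCK_DIR = '/var/lib/openstack/locks'  # hypothetical lock_path
REASONABLE_TIME = 24 * 60 * 60         # one day, in seconds

for name in os.listdir(LOCK_DIR):
    path = os.path.join(LOCK_DIR, name)
    if time.time() - os.path.getmtime(path) < REASONABLE_TIME:
        continue
    fd = os.open(path, os.O_RDWR)
    try:
        # Succeeds only if no live process holds the lock.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except (IOError, OSError):
        pass  # still held; leave it alone
    else:
        os.unlink(path)
    finally:
        os.close(fd)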




Now, anything that needs to safely manage a resource beyond the life of a
single process will need to keep track of its own state and be idempotent
anyway. IMO this isn't something lock files alone solve well. I believe
you're familiar with a library named taskflow that is supposed to help
write code that does this better ;). Even without taskflow, if you are
trying to do something exclusive without a single process that stays
alive, you need to do _something_ to keep track of state and restart
or revert that flow. That is a state management problem, not a locking
problem.



Agreed. ;)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Joshua Harlow wrote:

Ben Nemec wrote:

On 11/30/2015 12:42 PM, Joshua Harlow wrote:

Hi all,

I just wanted to bring up an issue, possible solution and get feedback
on it from folks because it seems to be an on-going problem that shows
up not when an application is initially deployed but as on-going
operation and running of that application proceeds (ie after running for
a period of time).

The gist of the problem is the following:

A <<your favorite openstack project>> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine, lock files are *not*
distributed locks).

The api that we expose from oslo is typically accessed via the
following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
external=False, lock_path=None, semaphores=None, delay=0.01)

or via its underlying library (that I extracted from oslo.concurrency
and have improved to add more usefulness) @
http://fasteners.readthedocs.org/

The issue though for <<your favorite openstack project>> is that each of
these projects now typically has a large amount of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (afaik no? periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or can not be deleted, only the application knows that...)

So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)


From some simple greps using:

$ echo "Removal usage in" $(basename `pwd`); grep -R 
remove_external_lock_file *


Removal usage in cinder


Removal usage in nova
nova/virt/libvirt/imagecache.py: 
lockutils.remove_external_lock_file(lock_file,


Removal usage in glance


Removal usage in neutron


So me thinks people aren't cleaning any of these up :-/



Another idea that I have been proposing also is to use offset locks.

This would allow for not creating X lock files, but create a *single*
lock file per project and use offsets into it as the way to lock. For
example nova could/would create a 1MB (or larger/smaller) *empty* file
for locks, that would allow for 1,048,576 locks to be used at the same
time, which honestly should be way more than enough, and then there
would not need to be any lock cleanup at all... Is there any reason this
wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality
to the underlying library if people want to look it over)


I think the main reason was that even with a million locks available,
you'd have to find a way to hash the lock names to offsets in the file,
and a million isn't a very large collision space for that. Having two
differently named locks that hashed to the same offset would lead to
incredibly confusing bugs.

We could switch to requiring the projects to provide the offsets instead
of hashing a string value, but that's just pushing the collision problem
off onto every project that uses us.

So that's the problem as I understand it, but where does that leave us
for solutions? First, there's
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151

which allows consumers to delete lock files when they're done with them.
Of course, in that case the onus is on the caller to make sure the lock
couldn't possibly be in use anymore.


Ya, I wonder how many folks are actually doing this, because the exposed
API of @synchronized doesn't seem to tell you what file to even delete in
the first place :-/ perhaps we should make that more accessible so that
people/consumers of that code could know what to delete...



Second, is this actually a problem? Modern filesystems have absurdly
large limits on the number of files in a directory, so it's highly
unlikely we would ever exhaust that, and we're creating all zero byte
files so there shouldn't be a significant space impact either. In the
past I believe our recommendation has been to simply create a cleanup
job that runs on boot, before any of the OpenStack services start, that
deletes all of the lock files. At that point you know it's safe to
delete them, and it prevents your lock file directory from growing
forever.


Except as we move to never shutting an app down (always online and live
upgrades and all that jazz), it will have to run more than just on boot,
but point taken.



I know 

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-30 Thread Fox, Kevin M
Is that because of trying to shoehorn docker containers into RPMs, though? I've 
never seen anyone else try to use them that way. Maybe they belong in a docker 
repo like the hub, or something openstack.org-hosted, instead?

Thanks,
Kevin

From: Igor Kalnitsky [ikalnit...@mirantis.com]
Sent: Friday, November 27, 2015 1:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the 
Fuel master node

Hey Vladimir,

Thanks for your effort on doing this job. Unfortunately we don't have much
time left and FF is coming, so I'm afraid it has become unrealistic to
make it before FF, especially if it takes 2-3 days to fix the system
tests.


Andrew,

I had the same opinion some time ago, but it changed because
nobody has put effort into fixing our Docker experience. Moreover, Docker is
still buggy and we have plenty of issues, such as stale mount points.
Besides, I don't like our upgrade procedure -

1. Install fuel-docker-images.rpm
2. Load images from installed tarball to Docker
3. Re-create containers from new images

Steps (2) and (3) are manual and break the idea of the "yum update"
delivery approach.

Thanks,
Igor

On Wed, Nov 25, 2015 at 9:43 PM, Andrew Woodward  wrote:
> 
> IMO, removing the docker containers is a mistake vs. fixing them and using
> them properly. They provide an isolation that is necessary (and that we
> mangle) to make services portable and scalable. We really should sit down
> and document how we really want all of the services to interact before we
> rip the containers out.
>
> I agree, the way we use containers now is still quite wrong and brings us
> some negative value, but I'm not sold on stripping them out now just because
> they no longer bring the same upgrade value as before.
> 
>
> My opinion aside, we are rushing into this far too late in the feature cycle.
> Prior to moving forward with this, we need a good QA plan; the spec is quite
> light on that and must receive review and approval from QA. This needs to
> include an actual testing plan.
>
> From the implementation side, we are pushing up against the FF deadline. We
> need to document what our time objectives are for this and when we will no
> longer consider this for 8.0.
>
> Lastly, for those that are +1 on the thread here, please review and comment
> on the spec; it's received almost no attention for something with such a
> large impact.
>
> On Tue, Nov 24, 2015 at 4:58 PM Vladimir Kozhukalov
>  wrote:
>>
>> The status is as follows:
>>
>> 1) Fuel-main [1] and fuel-library [2] patches can deploy the master node
>> w/o docker containers
>> 2) I've not built an experimental ISO yet (I have been testing and debugging
>> manually)
>> 3) There are still some flaws (need better formatting, etc.)
>> 4) The plan for tomorrow is to build an experimental ISO and to begin fixing
>> the system tests and the spec.
>>
>> [1] https://review.openstack.org/#/c/248649
>> [2] https://review.openstack.org/#/c/248650
>>
>> Vladimir Kozhukalov
>>
>> On Mon, Nov 23, 2015 at 7:51 PM, Vladimir Kozhukalov
>>  wrote:
>>>
>>> Colleagues,
>>>
>>> I've started working on the change. Here are two patches (fuel-main [1]
>>> and fuel-library [2]). They are not ready for review (they still do not work
>>> and are under active development). Changes are not going to be huge. Here is a spec
>>> [3]. Will keep the status up to date in this ML thread.
>>>
>>>
>>> [1] https://review.openstack.org/#/c/248649
>>> [2] https://review.openstack.org/#/c/248650
>>> [3] https://review.openstack.org/#/c/248814
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Mon, Nov 23, 2015 at 3:35 PM, Aleksandr Maretskiy
>>>  wrote:



 On Mon, Nov 23, 2015 at 2:27 PM, Bogdan Dobrelya
  wrote:
>
> On 23.11.2015 12:47, Aleksandr Maretskiy wrote:
> > Hi all,
> >
> > as you know, Rally runs inside docker on the Fuel master node, so docker
> > removal (good improvement) is a problem for Rally users.
> >
> > To solve this, I'm planning to make a native Rally installation on the
> > Fuel master node running CentOS 7, and then write a step-by-step
> > instruction for this installation.
> >
> > So I hope docker removal will not cause issues for Rally users.
>
> I believe the most backwards-compatible scenario is to keep docker
> installed while moving the fuel-* docker things back to the host OS.
> So nothing would prevent a user from pulling and running whichever docker
> containers he wants on the Fuel master node. Makes sense?
>

 Sounds good


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>
>> _

Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Sean Dague
On 11/30/2015 03:01 PM, Robert Collins wrote:
> On 1 December 2015 at 08:37, Ben Nemec  wrote:
>> On 11/30/2015 12:42 PM, Joshua Harlow wrote:
>>> Hi all,
>>>
>>> I just wanted to bring up an issue, possible solution and get feedback
>>> on it from folks because it seems to be an on-going problem that shows
>>> up not when an application is initially deployed but as on-going
>>> operation and running of that application proceeds (ie after running for
>>> a period of time).
>>>
>>> The gist of the problem is the following:
>>>
>>> A <<your favorite openstack project>> has a need to ensure that no
>>> application on the same machine can manipulate a given resource on that
>>> same machine, so it uses the lock file pattern (acquire a *local* lock
>>> file for that resource, manipulate that resource, release that lock
>>> file) to do actions on that resource in a safe manner (note this does
>>> not ensure safety outside of that machine, lock files are *not*
>>> distributed locks).
>>>
>>> The api that we expose from oslo is typically accessed via the following:
>>>
>>>oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
>>> external=False, lock_path=None, semaphores=None, delay=0.01)
>>>
>>> or via its underlying library (that I extracted from oslo.concurrency
>>> and have improved to add more usefulness) @
>>> http://fasteners.readthedocs.org/
>>>
>>> The issue though for <<your favorite openstack project>> is that each of
>>> these projects now typically has a large amount of lock files that exist
>>> or have existed and no easy way to determine when those lock files can
>>> be deleted (afaik no? periodic task exists in said projects to clean up
>>> lock files, or to delete them when they are no longer in use...) so what
>>> happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
>>> appear and there is no simple solution to clean lock files up (since
>>> oslo.concurrency is really not the right layer to know when a lock can
>>> or can not be deleted, only the application knows that...)
>>>
>>> So then we get a few creative solutions like the following:
>>>
>>> - https://review.openstack.org/#/c/241663/
>>> - https://review.openstack.org/#/c/239678/
>>> - (and others?)
>>>
>>> So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
>>>
>>> Another idea that I have been proposing also is to use offset locks.
>>>
>>> This would allow for not creating X lock files, but create a *single*
>>> lock file per project and use offsets into it as the way to lock. For
>>> example nova could/would create a 1MB (or larger/smaller) *empty* file
>>> for locks, that would allow for 1,048,576 locks to be used at the same
>>> time, which honestly should be way more than enough, and then there
>>> would not need to be any lock cleanup at all... Is there any reason this
>>> wasn't initially done back when this lock file code was created?
>>> (https://github.com/harlowja/fasteners/pull/10 adds this functionality
>>> to the underlying library if people want to look it over)
>>
>> I think the main reason was that even with a million locks available,
>> you'd have to find a way to hash the lock names to offsets in the file,
>> and a million isn't a very large collision space for that.  Having two
>> differently named locks that hashed to the same offset would lead to
>> incredibly confusing bugs.
>>
>> We could switch to requiring the projects to provide the offsets instead
>> of hashing a string value, but that's just pushing the collision problem
>> off onto every project that uses us.
>>
>> So that's the problem as I understand it, but where does that leave us
>> for solutions?  First, there's
>> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
>> which allows consumers to delete lock files when they're done with them.
>>  Of course, in that case the onus is on the caller to make sure the lock
>> couldn't possibly be in use anymore.
>>
>> Second, is this actually a problem?  Modern filesystems have absurdly
>> large limits on the number of files in a directory, so it's highly
>> unlikely we would ever exhaust that, and we're creating all zero byte
>> files so there shouldn't be a significant space impact either.  In the
>> past I believe our recommendation has been to simply create a cleanup
>> job that runs on boot, before any of the OpenStack services start, that
>> deletes all of the lock files.  At that point you know it's safe to
>> delete them, and it prevents your lock file directory from growing forever.
> 
> Not that high - ext3 (still the default for nova ephemeral
> partitions!) has a limit of 64k in one directory.
> 
> That said, I don't disagree - my thinking is that we should advise
> putting such files on a tmpfs.

So, I think the issue really is that the named external locks were
originally thought to be handling some pretty sensitive critical
sections. Both cinder / nova have less than 20 such named locks.

Cinder uses 

[openstack-dev] [ironic] weekly subteam status report

2015-11-30 Thread Ruby Loo
Hi,


We are elated to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.


Bugs (dtantsur)

===

(diff with Nov 23)

- Open: 174 (+3). 11 new, 60 in progress, 0 critical, 16 high and 11
incomplete

- Nova bugs with Ironic tag: 24. 0 new, 0 critical, 0 high

- Inspector bugs: 13 (-1). 0 new, 0 critical, 6 high

- the open bug count has been slowly growing since 28.09.2015 (135 open bugs)

- bugs to be aware of before the release this week:

- https://bugs.launchpad.net/ironic/+bug/1507738 - gives headaches to people
with CentOS 7; might be considered a regression (status: fix in review)

- https://bugs.launchpad.net/ironic/+bug/1512544 - "grenade jobs are
failing" does not sound promising (status: in progress, no patch attached)

- https://bugs.launchpad.net/ironic/+bug/1408067 - do we still experience
these?



Network isolation (Neutron/Ironic work) (jroll)

===

- nova spec has landed; 2/3 patches are updated and passing unit tests

- ironic reviews still ongoing...



Live upgrades (lucasagomes, lintan)

===

- Submitted a patch to add a test enforcing that object versions are bumped correctly

- https://review.openstack.org/#/c/249624/ MERGED



Boot interface refactor (jroll)

===

- done!

- let's remove this one for next meeting \o/



Parallel tasks with futurist (dtantsur)

===

- still refactoring manager.py a bit:
https://review.openstack.org/#/c/249938/ is WIP



Node filter API and claims endpoint (jroll, devananda)

==

- spec still in review - https://review.openstack.org/#/c/204641



Multiple compute hosts (jroll, devananda)

=

- dependent on node filter API


ironic-lib adoption (rloo)

==

- ironic-lib 0.4.0 has been released and updated in requirements, patch is
almost ready (needs reno) but holding off until after the ironic release:
https://review.openstack.org/#/c/184443/



Nova Liaisons (jlvillal & mrda)

===

- No meeting/updates



Testing/Quality (jlvillal/lekha/krtaylor)

=

- No meeting/updates



Inspector (dtantsur)

===

- Fully switched to Reno for both inspector projects

- e.g
http://docs.openstack.org/releasenotes/python-ironic-inspector-client/unreleased.html

- Documentation is now using Sphinx, will appear on
docs.openstack.org/developer/ironic-inspector this week

- python-ironic-inspector-client to be released this week, ironic-inspector
does not seem to have enough changes

- ironic-inspector-specs addition is proposed to the TC



Bifrost (TheJulia)

==

- Gate job is working again, presently looking at refactoring non-voting
test job.



webclient (krotscheck / betherly)

=

- Panel currently being wrapped in a plugin. Actions on the nodes are
working. Details page to do, then start the port



Drivers:



DRAC (ifarkas/lucas)



- patches refactoring the driver to use python-dracclient instead of
pywsman are on gerrit


iLO (wanyen)



- 3rd party CI:
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080806.html


iRMC (naohirot)

---

https://review.openstack.org//#/q/owner:+naohirot+status:+open,n,z

- Status: Reactive (soliciting the core team's review)

- iRMC out of band inspection (bp/ironic-node-properties-discovery)

- Status: Active

- Enhance Power Interface for Soft Reboot and NMI
(bp/enhance-power-interface-for-soft-reboot-and-nmi)

- Add 'abort' support for Soft Power Off and Inject NMI
(bp/task-control-functions-for-long-running-tasks)

- iRMC OOB rescue mode support (bp/irmc-oob-rescue-mode-support)

- Status: Done, thanks!

- New boot driver interface for iRMC drivers (bp/new-boot-interface)




Until next week,

--ruby


[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-11-30 10:42:53 -0800:
> Hi all,
> 
> I just wanted to bring up an issue, possible solution and get feedback 
> on it from folks because it seems to be an on-going problem that shows 
> up not when an application is initially deployed but as on-going 
> operation and running of that application proceeds (ie after running for 
> a period of time).
> 
> The gist of the problem is the following:
> 
> A <<your favorite openstack project>> has a need to ensure that no
> application on the same machine can manipulate a given resource on that 
> same machine, so it uses the lock file pattern (acquire a *local* lock 
> file for that resource, manipulate that resource, release that lock 
> file) to do actions on that resource in a safe manner (note this does 
> not ensure safety outside of that machine, lock files are *not* 
> distributed locks).
> 
> The api that we expose from oslo is typically accessed via the following:
> 
>oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None, 
> external=False, lock_path=None, semaphores=None, delay=0.01)
> 
> or via its underlying library (that I extracted from oslo.concurrency 
> and have improved to add more usefulness) @ 
> http://fasteners.readthedocs.org/
> 
> The issue though for <<your favorite openstack project>> is that each of
> these projects now typically has a large amount of lock files that exist 
> or have existed and no easy way to determine when those lock files can 
> be deleted (afaik no? periodic task exists in said projects to clean up 
> lock files, or to delete them when they are no longer in use...) so what 
> happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387 
> appear and there is no simple solution to clean lock files up (since
> oslo.concurrency is really not the right layer to know when a lock can 
> or can not be deleted, only the application knows that...)
> 
> So then we get a few creative solutions like the following:
> 
> - https://review.openstack.org/#/c/241663/
> - https://review.openstack.org/#/c/239678/
> - (and others?)
> 
> So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
> 
> Another idea that I have been proposing also is to use offset locks.
> 
> This would allow for not creating X lock files, but create a *single* 
> lock file per project and use offsets into it as the way to lock. For 
> example nova could/would create a 1MB (or larger/smaller) *empty* file 
> for locks, that would allow for 1,048,576 locks to be used at the same 
> time, which honestly should be way more than enough, and then there 
> would not need to be any lock cleanup at all... Is there any reason this 
> wasn't initially done back when this lock file code was created?
> (https://github.com/harlowja/fasteners/pull/10 adds this functionality 
> to the underlying library if people want to look it over)

This is really complicated, and basically just makes the directory of
lock files _look_ clean. But it still leaves each offset stale, and has
to be cleaned anyway.

Fasteners already has process locks that use fcntl/flock.

These locks provide enough to allow you to infer things about the owner
of the lock file. If there's no process still holding the exclusive lock
when you try to lock it, then YOU own it, and thus control the resource.

A cron job which tries to flock anything older than ${REASONABLE_TIME}
and deletes them seems fine. Whatever process was trying to interact
with the resource is gone at that point.
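A sketch of that sweeper follows; the path and age threshold are assumptions,
and note the usual caveat that unlinking a lock file races with a process that
opened the same path just before removal, so this is best-effort cleanup for
flock-style locks, not a correctness mechanism:

    # Sketch: if a non-blocking flock succeeds on a sufficiently old lock
    # file, no live process holds it, so we own it and may remove it.
    import fcntl
    import glob
    import os
    import time

    LOCK_PATH = '/var/lib/myproject/locks'  # hypothetical
    REASONABLE_TIME = 24 * 3600             # e.g. one day

    now = time.time()
    for path in glob.glob(os.path.join(LOCK_PATH, '*')):
        if now - os.path.getmtime(path) < REASONABLE_TIME:
            continue
        try:
            fd = os.open(path, os.O_RDWR)
        except OSError:
            continue  # already gone
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except (IOError, OSError):
            os.close(fd)
            continue  # still held by a live process; leave it alone
        os.unlink(path)  # we hold the lock, so nobody live is using it
        os.close(fd)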

Now, anything that needs to safely manage a resource beyond the lifetime of a
live process will need to keep track of its own state and be idempotent
anyway. IMO this isn't something lock files alone solve well. I believe
you're familiar with a library named taskflow that is supposed to help
write code that does this better ;). Even without taskflow, if you are
trying to do something exclusive without a single process that stays
alive, you need to do _something_ to keep track of state and restart
or revert that flow. That is a state management problem, not a locking
problem.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [ceph] Puppet Ceph CI

2015-11-30 Thread David Moreau Simard
Hey Adam,

A bit late here, sorry.
Ceph works fine with OpenStack Kilo, but when we developed the
integration tests for puppet-ceph with Kilo, there were some issues
specific to our test implementation and we chose to settle on Juno
at the time.

On the topic of CI, I can no longer sponsor the third party CI
(through my former employer, iWeb) as I am with Red Hat now.
I see this as an opportunity to drop the custom system tests with
vagrant and instead improve the acceptance tests.

What do you think ?


David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Mon, Nov 23, 2015 at 6:45 PM, Adam Lawson  wrote:
> I'm confused, what is the context here? We use Ceph with OpenStack Kilo
> without issue.
>
> On Nov 23, 2015 2:28 PM, "David Moreau Simard"  wrote:
>>
>> Last I remember, David Gurtner tried to use Kilo instead of Juno but
>> he bumped into some problems and we settled for Juno at the time [1].
>> At this point we should already be testing against both Liberty and
>> Infernalis, we're overdue for an upgrade in that regard.
>>
>> But, yes, +1 to split acceptance tests:
>> 1) Ceph
>> 2) Ceph + Openstack
>>
>> Actually learning what failed is indeed challenging sometimes; I don't
>> have enough experience with the acceptance testing to suggest anything
>> better.
>> We have the flexibility of creating different logfiles, maybe we can
>> find a way to split out the relevant bits into another file.
>>
>> [1]: https://review.openstack.org/#/c/153783/
>>
>> David Moreau Simard
>> Senior Software Engineer | Openstack RDO
>>
>> dmsimard = [irc, github, twitter]
>>
>>
>> On Mon, Nov 23, 2015 at 2:45 PM, Andrew Woodward  wrote:
>> > I think I have a good lead on the recent failures in openstack / swift /
>> > radosgw integration component that we have since disabled. It looks like
>> > there is an oslo.config version upgrade conflict in the Juno repo we were
>> > using for CentOS. I think moving to Kilo will help sort this out, but at
>> > the same time I think it would be prudent to separate the Ceph vs.
>> > OpenStack integration into separate jobs so that we have a better idea of
>> > which is a problem. If there is consensus for this, I'd need some
>> > direction / help, as
>> > well as set them up as non-voting for now.
>> >
>> > Looking into this I also found that the only place that we integrate
>> > any of the cephx logic was in the same test, so we will need to create a
>> > component for it in the ceph integration as well as use it in the
>> > OpenStack
>> > side.
>> >
>> > Lastly, unwinding the integration failure seemed overly complex. Is
>> > there a
>> > way that we can correlate the test status inside the job at a high level
>> > besides the entire job passed / failed without breaking them into
>> > separate
>> > jobs?
>> > --
>> >
>> > --
>> >
>> > Andrew Woodward
>> >
>> > Mirantis
>> >
>> > Fuel Community Ambassador
>> >
>> > Ceph Community
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Robert Collins
On 1 December 2015 at 08:37, Ben Nemec  wrote:
> On 11/30/2015 12:42 PM, Joshua Harlow wrote:
>> Hi all,
>>
>> I just wanted to bring up an issue, possible solution and get feedback
>> on it from folks because it seems to be an on-going problem that shows
>> up not when an application is initially deployed but as on-going
>> operation and running of that application proceeds (ie after running for
>> a period of time).
>>
>> The gist of the problem is the following:
>>
>> A <<your favorite openstack project>> has a need to ensure that no
>> application on the same machine can manipulate a given resource on that
>> same machine, so it uses the lock file pattern (acquire a *local* lock
>> file for that resource, manipulate that resource, release that lock
>> file) to do actions on that resource in a safe manner (note this does
>> not ensure safety outside of that machine, lock files are *not*
>> distributed locks).
>>
>> The api that we expose from oslo is typically accessed via the following:
>>
>>oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
>> external=False, lock_path=None, semaphores=None, delay=0.01)
>>
>> or via its underlying library (that I extracted from oslo.concurrency
>> and have improved to add more usefulness) @
>> http://fasteners.readthedocs.org/
>>
>> The issue though for <<your favorite openstack project>> is that each of
>> these projects now typically has a large amount of lock files that exist
>> or have existed and no easy way to determine when those lock files can
>> be deleted (afaik no? periodic task exists in said projects to clean up
>> lock files, or to delete them when they are no longer in use...) so what
>> happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
>> appear and there is no simple solution to clean lock files up (since
>> oslo.concurrency is really not the right layer to know when a lock can
>> or can not be deleted, only the application knows that...)
>>
>> So then we get a few creative solutions like the following:
>>
>> - https://review.openstack.org/#/c/241663/
>> - https://review.openstack.org/#/c/239678/
>> - (and others?)
>>
>> So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
>>
>> Another idea that I have been proposing also is to use offset locks.
>>
>> This would allow for not creating X lock files, but create a *single*
>> lock file per project and use offsets into it as the way to lock. For
>> example nova could/would create a 1MB (or larger/smaller) *empty* file
>> for locks, that would allow for 1,048,576 locks to be used at the same
>> time, which honestly should be way more than enough, and then there
>> would not need to be any lock cleanup at all... Is there any reason this
>> wasn't initially done back when this lock file code was created?
>> (https://github.com/harlowja/fasteners/pull/10 adds this functionality
>> to the underlying library if people want to look it over)
>
> I think the main reason was that even with a million locks available,
> you'd have to find a way to hash the lock names to offsets in the file,
> and a million isn't a very large collision space for that.  Having two
> differently named locks that hashed to the same offset would lead to
> incredibly confusing bugs.
>
> We could switch to requiring the projects to provide the offsets instead
> of hashing a string value, but that's just pushing the collision problem
> off onto every project that uses us.
>
> So that's the problem as I understand it, but where does that leave us
> for solutions?  First, there's
> https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
> which allows consumers to delete lock files when they're done with them.
>  Of course, in that case the onus is on the caller to make sure the lock
> couldn't possibly be in use anymore.
>
> Second, is this actually a problem?  Modern filesystems have absurdly
> large limits on the number of files in a directory, so it's highly
> unlikely we would ever exhaust that, and we're creating all zero byte
> files so there shouldn't be a significant space impact either.  In the
> past I believe our recommendation has been to simply create a cleanup
> job that runs on boot, before any of the OpenStack services start, that
> deletes all of the lock files.  At that point you know it's safe to
> delete them, and it prevents your lock file directory from growing forever.

Not that high - ext3 (still the default for nova ephemeral
partitions!) has a limit of 64k in one directory.

That said, I don't disagree - my thinking is that we should advise
putting such files on a tmpfs.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Ben Nemec wrote:

On 11/30/2015 12:42 PM, Joshua Harlow wrote:

Hi all,

I just wanted to bring up an issue, possible solution and get feedback
on it from folks because it seems to be an on-going problem that shows
up not when an application is initially deployed but as on-going
operation and running of that application proceeds (ie after running for
a period of time).

The gist of the problem is the following:

A <<your favorite openstack project>> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine, lock files are *not*
distributed locks).

The api that we expose from oslo is typically accessed via the following:

oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None,
external=False, lock_path=None, semaphores=None, delay=0.01)

or via its underlying library (that I extracted from oslo.concurrency
and have improved to add more usefulness) @
http://fasteners.readthedocs.org/

The issue though for <<your favorite openstack project>> is that each of
these projects now typically has a large amount of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (afaik no? periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or can not be deleted, only the application knows that...)

So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)

Another idea that I have been proposing also is to use offset locks.

This would allow for not creating X lock files, but create a *single*
lock file per project and use offsets into it as the way to lock. For
example nova could/would create a 1MB (or larger/smaller) *empty* file
for locks, that would allow for 1,048,576 locks to be used at the same
time, which honestly should be way more than enough, and then there
would not need to be any lock cleanup at all... Is there any reason this
wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality
to the underlying library if people want to look it over)


I think the main reason was that even with a million locks available,
you'd have to find a way to hash the lock names to offsets in the file,
and a million isn't a very large collision space for that.  Having two
differently named locks that hashed to the same offset would lead to
incredibly confusing bugs.

We could switch to requiring the projects to provide the offsets instead
of hashing a string value, but that's just pushing the collision problem
off onto every project that uses us.

So that's the problem as I understand it, but where does that leave us
for solutions?  First, there's
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
which allows consumers to delete lock files when they're done with them.
  Of course, in that case the onus is on the caller to make sure the lock
couldn't possibly be in use anymore.


Ya, I wonder how many folks are actually doing this, because the exposed 
API of @synchronized doesn't seem to tell you what file to even delete in 
the first place :-/ perhaps we should make that more accessible so that 
people/consumers of that code could know what to delete...




Second, is this actually a problem?  Modern filesystems have absurdly
large limits on the number of files in a directory, so it's highly
unlikely we would ever exhaust that, and we're creating all zero byte
files so there shouldn't be a significant space impact either.  In the
past I believe our recommendation has been to simply create a cleanup
job that runs on boot, before any of the OpenStack services start, that
deletes all of the lock files.  At that point you know it's safe to
delete them, and it prevents your lock file directory from growing forever.


Except as we move to never shutting an app down (always online and live 
upgrades and all that jazz), it will have to run more than just on boot, 
but point taken.




I know we've had this discussion in the past, but I don't think anyone
has ever told me that having lock files hang around was a functional
problem for them.  It seems to be largely cosmetic complaints about not
cleaning up the old files (which, as you noted, Oslo can't really solve
because we have no idea when consumers are finished with locks) and
given the amount

Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues

2015-11-30 Thread Paul Michali
Check out https://freenode.net/irc_servers.shtml which lists the servers. I
was using irc.freenode.net. Switched to weber.freenode.net and was able to
connect.

(now everyone will hop on that one and I'll have to pick another :)



On Mon, Nov 30, 2015 at 2:46 PM Clark, Jay  wrote:

> Can't connect either. Dead in the water
>
> Regards,
> Jay Clark
> Sr. OpenStack Deployment Engineer
> E: jason.t.cl...@hpe.com
> H: 919.341.4670
> M: 919.345.1127
> IRC (freenode): jasondotstar
>
> 
> From: lichen.hangzhou [lichen.hangz...@gmail.com]
> Sent: Monday, November 30, 2015 9:17 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: openstack-in...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues
>
> Can't connect +1, and the web client does not work :(
>
> -chen
>
> At 2015-11-30 22:08:12, "Hinds, Luke (Nokia - GB/Bristol)" <
> luke.hi...@nokia.com> wrote:
> Me too.  It is possible to get on using the web client though;
> https://webchat.freenode.net/ .
>
> On Mon, 2015-11-30 at 14:00 +, EXT Dugger, Donald D wrote:
> I can’t even connect to the IRC server at all; can others get to it?
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> From: Joshua Hesketh [mailto:joshua.hesk...@gmail.com]
> Sent: Monday, November 30, 2015 2:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>;
> openstack-infra <openstack-in...@lists.openstack.org>
> Subject: [openstack-dev] IRC Bot issues
>
> Hi all,
> Freenode is currently experiencing a severe DDoS attack that is having
> an effect on our bots. As such the meetbot, IRC logging and gerrit watcher
> are intermittently available.
>
> We expect the bots to resume their normal function once Freenode has
> recovered. For now, meetings may have to be postponed or minuted by hand.
> Cheers,
> Josh
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-30 Thread sean roberts
For most people, being successful at their first patch means that the first
effort is different from a regular patch.

Identifying abandoned work more quickly is good. It doesn't help the first
timer.

I like tagging low-hanging fruit for first timers. I'm recommending we add
a mentor as part of the idea, so the project mentor is responsible for both
the work and the first timer's learning.

On Monday, November 30, 2015, Doug Hellmann  wrote:

> Excerpts from sean roberts's message of 2015-11-30 07:57:54 -0800:
> > How about:
> > First timers assign a bug to a mentor and the mentor takes responsibility
> > for the first timer learning from the bug to completion.
>
> That would mean the learning process is different from what we want the
> regular process to be.
>
> If the problem is identifying "In Progress" bugs that are actually not
> being worked on, then let's figure out a way to make that easier.
> sdague's point about the auto-abandon process may help. We could query
> gerrit for "stale" reviews that would have met the old abandon
> requirements and that refer to bugs, for example. Using that
> information, someone could follow-up with the patch owner to see if it
> is actually abandoned, before changing the bug status or encouraging the
> owner to abandon the patch.
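For illustration, that gerrit query could look like the following rough
sketch against Gerrit's standard REST API; the project, the 8-week age and
the Closes-Bug message filter are assumptions about what counts as "stale":

    # Sketch: find open, bug-referencing changes untouched for a while.
    import json
    import requests

    GERRIT = 'https://review.openstack.org'

    def stale_bug_reviews(project, age='8w'):
        q = 'project:%s status:open age:%s message:"Closes-Bug"' % (project, age)
        resp = requests.get(GERRIT + '/changes/', params={'q': q, 'n': 100})
        # Gerrit prepends ")]}'" to JSON responses to prevent XSSI.
        return json.loads(resp.text[4:])

    for change in stale_bug_reviews('openstack/nova'):
        print('%s %s' % (change['_number'], change['subject']))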
>
> >
> > Per project, a few people volunteer themselves as mentors. As easy as
> > responding to [project][mentor] emails.
> >
> > On Monday, November 30, 2015, Sean Dague
> wrote:
> >
> > > On 11/25/2015 03:22 PM, Shamail wrote:
> > > > Hi,
> > > >
> > > >> On Nov 25, 2015, at 11:05 PM, Doug Hellmann  
> > > > wrote:
> > > >>
> > > >> Excerpts from Shamail Tahir's message of 2015-11-25 09:15:54 -0500:
> > > >>> Hi everyone,
> > > >>>
> > > >>> Andrew Mitry recently shared a medium post[1] by Kent C. Dobbs
> which
> > > >>> discusses how one open-source project is encouraging contributions
> by
> > > new
> > > >>> open-source contributors through a combination of a special tag
> (which
> > > is
> > > >>> associated with work that is needed but can only be completed by
> > > someone
> > > >>> who is a first-time contributor) and helpful comments in the review
> > > phase
> > > >>> to ensure the contribution(s) eventually get merged.
> > > >>>
> > > >>> While reading the article, I immediately thought about our
> > > >>> low-hanging-fruit bug tag which is used for a very similar purpose
> in
> > > "bug
> > > >>> fixing" section of  the "how to contribute" page[2].  The
> > > low-hanging-fruit
> > > >>> tag is used to identify items that are generally suitable for
> > > first-time or
> > > >>> beginner contributors but, in reality, anyone can pick them up.
> > > >>>
> > > >>> I wanted to propose a new tag (or even changing the, existing,
> > > low-hanging
> > > >>> fruit tag) that would identify items that we are reserving for
> > > first-time
> > > >>> OpenStack contributors (e.g. a patch-set for the item submitted by
> > > someone
> > > >>> who is not a first time contributor would be rejected)... The same
> > > article
> > > >>> that Andrew shared mentions using an "up-for-grabs" tag which also
> > > >>> populates the items at up-for-grabs[3] (a site where people
> looking to
> > > >>> start working on open-source projects see entry-level items from
> > > multiple
> > > >>> projects).  If we move forward with an exclusive tag for
> first-timers
> > > then
> > > >>> it would be nice if we could use the up-for-grabs tag so that
> OpenStack
> > > >>> also shows up on the list too.  Please let me know if this change
> > > should be
> > > >>> proposed elsewhere, the tags are maintained in launchpad and the
> wiki I
> > > >>> found related to bug tags[4] didn't indicate a procedure for
> > > submitting a
> > > >>> change proposal.
> > > >>
> > > >> I like the idea of making bugs suitable for first-timers more
> > > >> discoverable. I'm not sure we need to *reserve* any bugs for any
> class
> > > >> of contributor. What benefit do you think that provides?
> > > > I would have to defer to additional feedback here...
> > > >
> > > > My own perspective from when I was doing my first contribution is
> that
> > > it was hard to find active "low-hanging-fruit" items.  Most were
> already
> > > work-in-progress or assigned.
> > >
> > > This was a direct consequence of us dropping the auto-abandoning of old
> > > code reviews in gerrit. When a review is abandoned the bug is flipped
> > > back to New instead of In Progress.
> > >
> > > I found quite often people go and gobble up bugs assigning them to
> > > themselves, but don't make real progress on them. Then new contributors
> > > show up, and don't work on any of those issues because our tools say
> > > someone is already on top of it.
> > >
> > > -Sean
> > >
> > > --
> > > Sean Dague
> > > http://dague.net
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-d

Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steve Baker

On 30/11/15 23:21, Steven Hardy wrote:

On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:

I'm tasked with implementing a command that shows error messages when a
deployment has failed. I have a vague memory of having seen scripts
that do something like this; if that exists, can somebody point me in
the right direction?

I wrote a super simple script and put it in a blog post a while back:

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html

All it does is find the failed SoftwareDeployment resources, then do heat
deployment-show on the resource, so you can see the stderr associated with
the failure.

Having tripleoclient do that by default would be useful.


Any opinions on what that should do, specifically? Traverse failed
resources to find error messages, I assume. Anything else?

Yeah, but I think for this to be useful, we need to go a bit deeper than
just showing the resource error - there are a number of typical failure
modes, and I end up repeating the same steps to debug every time.

1. SoftwareDeployment failed (mentioned above).  Every time, you need to
see the name of the SoftwareDeployment which failed, figure out if it
failed on one or all of the servers, then look at the stderr for clues.

2. A server failed to build (OS::Nova::Server resource is FAILED), here we
need to check both nova and ironic, looking first to see if ironic has the
node(s) in the wrong state for scheduling (e.g. nova gave us a "no valid
host" error), and then if they are OK in ironic, do nova show on the failed
host to see the reason nova gives us for it failing to go ACTIVE.

3. A stack timeout happened.  IIRC when this happens, we currently fail
with an obscure keystone related backtrace due to the token expiring.  We
should instead catch this error and show the heat stack status_reason,
which should say clearly the stack timed out.

If we could just make these three cases really clear and easy to debug, I
think things would be much better (IME the above are a high proportion of
all failures), but I'm sure folks can come up with other ideas to add to
the list.

I'm actually drafting a spec which includes a command which does this. I 
hope to submit it soon, but here is the current state of that command's 
description:


Diagnosing resources in a FAILED state
--------------------------------------

One command will be implemented:
- openstack overcloud failed list

This will print a yaml tree showing the hierarchy of nested stacks until it
gets to the actual failed resource, then it will show information regarding
the failure. For most resource types this information will be the
status_reason, but for software-deployment resources the deploy_stdout,
deploy_stderr and deploy_status_code will be printed.

In addition to this stand-alone command, this output will also be printed
when an ``openstack overcloud deploy`` or ``openstack overcloud update``
command results in a stack in a FAILED state.
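For a flavour of the traversal such a command would do, a rough sketch with
python-heatclient follows; the attribute names come from heatclient's
resource listing, and treating any failed resource with a
physical_resource_id as a possible nested stack is a simplification, not the
spec's actual implementation:

    # Sketch: walk nested stacks, printing each FAILED resource's reason.
    def print_failed(heat, stack_id, depth=0):
        for res in heat.resources.list(stack_id):
            if not res.resource_status.endswith('FAILED'):
                continue
            print('%s%s (%s): %s' % ('  ' * depth, res.resource_name,
                                     res.resource_type,
                                     res.resource_status_reason))
            # Recurse if this resource is itself a nested stack; for a
            # failed software deployment, deploy_stdout/deploy_stderr
            # would be fetched and shown here instead.
            if res.physical_resource_id:
                try:
                    print_failed(heat, res.physical_resource_id, depth + 1)
                except Exception:
                    pass  # not a nested stack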


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Ben Nemec
On 11/30/2015 12:42 PM, Joshua Harlow wrote:
> Hi all,
> 
> I just wanted to bring up an issue, possible solution and get feedback 
> on it from folks because it seems to be an on-going problem that shows 
> up not when an application is initially deployed but as on-going 
> operation and running of that application proceeds (ie after running for 
> a period of time).
> 
> The gist of the problem is the following:
> 
> A <<your favorite openstack project>> has a need to ensure that no
> application on the same machine can manipulate a given resource on that 
> same machine, so it uses the lock file pattern (acquire a *local* lock 
> file for that resource, manipulate that resource, release that lock 
> file) to do actions on that resource in a safe manner (note this does 
> not ensure safety outside of that machine, lock files are *not* 
> distributed locks).
> 
> The api that we expose from oslo is typically accessed via the following:
> 
>oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None, 
> external=False, lock_path=None, semaphores=None, delay=0.01)
> 
> or via its underlying library (that I extracted from oslo.concurrency 
> and have improved to add more usefulness) @ 
> http://fasteners.readthedocs.org/
> 
> The issue though for <<your favorite openstack project>> is that each of
> these projects now typically has a large amount of lock files that exist 
> or have existed and no easy way to determine when those lock files can 
> be deleted (afaik no? periodic task exists in said projects to clean up 
> lock files, or to delete them when they are no longer in use...) so what 
> happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387 
> appear and there is no simple solution to clean lock files up (since
> oslo.concurrency is really not the right layer to know when a lock can 
> or can not be deleted, only the application knows that...)
> 
> So then we get a few creative solutions like the following:
> 
> - https://review.openstack.org/#/c/241663/
> - https://review.openstack.org/#/c/239678/
> - (and others?)
> 
> So I wanted to ask the question, how are people involved in <<your favorite openstack project>> cleaning up these files (are they at all?)
> 
> Another idea that I have been proposing also is to use offset locks.
> 
> This would allow for not creating X lock files, but create a *single* 
> lock file per project and use offsets into it as the way to lock. For 
> example nova could/would create a 1MB (or larger/smaller) *empty* file 
> for locks, that would allow for 1,048,576 locks to be used at the same 
> time, which honestly should be way more than enough, and then there 
> would not need to be any lock cleanup at all... Is there any reason this 
> wasn't initially done back when this lock file code was created?
> (https://github.com/harlowja/fasteners/pull/10 adds this functionality 
> to the underlying library if people want to look it over)

I think the main reason was that even with a million locks available,
you'd have to find a way to hash the lock names to offsets in the file,
and a million isn't a very large collision space for that.  Having two
differently named locks that hashed to the same offset would lead to
incredibly confusing bugs.

We could switch to requiring the projects to provide the offsets instead
of hashing a string value, but that's just pushing the collision problem
off onto every project that uses us.

So that's the problem as I understand it, but where does that leave us
for solutions?  First, there's
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L151
which allows consumers to delete lock files when they're done with them.
 Of course, in that case the onus is on the caller to make sure the lock
couldn't possibly be in use anymore.

Second, is this actually a problem?  Modern filesystems have absurdly
large limits on the number of files in a directory, so it's highly
unlikely we would ever exhaust that, and we're creating all zero byte
files so there shouldn't be a significant space impact either.  In the
past I believe our recommendation has been to simply create a cleanup
job that runs on boot, before any of the OpenStack services start, that
deletes all of the lock files.  At that point you know it's safe to
delete them, and it prevents your lock file directory from growing forever.

I know we've had this discussion in the past, but I don't think anyone
has ever told me that having lock files hang around was a functional
problem for them.  It seems to be largely cosmetic complaints about not
cleaning up the old files (which, as you noted, Oslo can't really solve
because we have no idea when consumers are finished with locks) and
given the amount of trouble we've had with interprocess locking in the
past I've never felt that a cosmetic issue was sufficient reason to
reopen that can of worms.  I'll just note again that every time we've
started messing with this stuff we run into a bunch of sticky problems
and edge cases, so it would take a pretty c

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-30 Thread Andrew Laski

On 11/30/15 at 07:32am, Sean Dague wrote:

On 11/24/2015 10:09 AM, John Garbutt wrote:

On 24 November 2015 at 15:00, Balázs Gibizer
 wrote:

From: Andrew Laski [mailto:and...@lascii.com]
Sent: November 24, 2015 15:35
On 11/24/15 at 10:26am, Balázs Gibizer wrote:

From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
Sent: November 23, 2015 22:33
On 11/23/2015 2:23 PM, Andrew Laski wrote:

On 11/23/15 at 04:43pm, Balázs Gibizer wrote:

From: Andrew Laski [mailto:and...@lascii.com]
Sent: November 23, 2015 17:03

On 11/23/15 at 08:54am, Ryan Rossiter wrote:



On 11/23/2015 5:33 AM, John Garbutt wrote:

On 20 November 2015 at 09:37, Balázs Gibizer
 wrote:






There is a bit I am conflicted/worried about, and that's when we
start including verbatim DB objects in the notifications. At
least you can now quickly detect if that blob is something
compatible with your current parsing code. My preference is
really to keep the Notifications as a totally separate object
tree, but I am sure there are many cases where that ends up
being seemingly stupid duplicate work. I am not expressing this
well in text form :(

Are you saying we don't want to be willy-nilly tossing DB
objects across the wire? Yeah that was part of the rug-pulling
of just having the payload contain an object. We're
automatically tossing everything with the object then, whether
or not some of that was supposed to be a secret. We could add
some sort of property to the field like
dont_put_me_on_the_wire=True (or I guess a
notification_ready() function that helps an object sanitize
itself?) that the notifications will look at to know if it puts
that on the wire-serialized dict, but that's adding a lot more
complexity and work to a pile that's already growing rapidly.


I don't want to be tossing db objects across the wire.  But I
also am not convinced that we should be tossing the current
objects over the wire either.
You make the point that there may be things in the object that
shouldn't be exposed, and I think object version bumps is another
thing to watch out for.
So far the only object that has been bumped is Instance but in
doing so no notifications needed to change.  I think if we just
put objects into notifications we're coupling the notification
versions to db or RPC changes unnecessarily.  Some times they'll
move together but other times, like moving flavor into
instance_extra, there's no reason to bump notifications.



Sanitizing existing versioned objects before putting them to the
wire is not hard to do.
You can see an example of doing it in
https://review.openstack.org/#/c/245678/8/nova/objects/service.py,
L382.
We don't need extra effort to take care of minor version bumps
because that does not break a well written consumer. We do have to
take care of the major version bumps but that is a rare event and
therefore can be handled one by one in a way John suggested, by
keep sending the previous major version for a while too.


That review is doing much of what I was suggesting.  There is a
separate notification and payload object.  The issue I have is that
within the ServiceStatusPayload the raw Service object and version
is being dumped, with the filter you point out.  But I don't think
that consumers really care about tracking Service object versions
and dealing with compatibility there, it would be easier for them
to track the ServiceStatusPayload version which can remain
relatively stable even if Service is changing to adapt to db/RPC changes.

Not only do they not really care about tracking the Service object
versions, they probably also don't care about what's in that filter list.

But I think you're getting on the right track as to where this needs
to go. We can integrate the filtering into the versioning of the payload.
But instead of a blacklist, we turn the filter into a white list. If
the underlying object adds a new field that we don't want/care if
people know about, the payload version doesn't have to change. But if
we add something (or if we're changing the existing fields) that we
want to expose, we then assert that we need to update the version of
the payload, so the consumer can look at the payload and say "oh, in
1.x, now I get ___" and can add the appropriate checks/compat.
Granted with this you can get into rebase nightmares ([1] still
haunts me in my sleep), but I don't see us frantically changing the
exposed fields all too often. This way gives us some form of
pseudo-pinning of the subobject. Heck, in this method, we could even
pass the whitelist on the wire right? That way we tell the consumer

explicitly what's available to them (kinda like a fake schema).
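A minimal sketch of that whitelist idea in plain Python (not actual nova
o.vo code; the field names are hypothetical):

    # Sketch: the payload exposes only whitelisted fields, so unrelated
    # changes to the backing object don't leak into the notification.
    SERVICE_PAYLOAD_FIELDS = ('host', 'binary', 'topic', 'disabled')

    class ServiceStatusPayload(object):
        VERSION = '1.0'  # bump only when the exposed (whitelisted) schema changes

        def __init__(self, service):
            for field in SERVICE_PAYLOAD_FIELDS:
                setattr(self, field, getattr(service, field))

        def to_dict(self):
            d = dict((f, getattr(self, f)) for f in SERVICE_PAYLOAD_FIELDS)
            d['payload_version'] = self.VERSION
            return d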


I think I see your point, and it seems like a good way forward. Let's
turn the blacklist into a whitelist. Now I'm thinking about creating a
new Field type, something like WhiteListedObjectField, which gets a type
name (as ObjectField does) but also gets a white_list that describes which
fields need to be used from the original type.

Then this new field

[openstack-dev] [keystone][all] keystone-spec proposal freeze date and using roll call instead of +2/+2/+A for specs

2015-11-30 Thread Steve Martinelli


As a reminder: the keystone spec freeze date is the end of mitaka-1 (this
friday); any spec being proposed for mitaka will have to go through an
exception process next week.

For the next keystone meeting (tomorrow) I'd like to propose we use a roll
call / vote mechanism on each open spec that is proposed against the
mitaka release. If a majority of the cores (50% or more) agree that a spec
aligns with project plans for mitaka, then we should get it merged by
friday (clean up any wording and proposed APIs) so the author can start
working on it on monday.

I added [all] because I wanted to know how other projects approach this
subject. I think a roll call feature for -specs would be great, similar to
how there is one for governance changes. Do others just use +2/+2/+A? In
keystone we've often let specs sit for a little while to give other cores a
chance to chime in. Thoughts?

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Debugging via PyCharm (pydevd)

2015-11-30 Thread Chad Roberts
I submitted a bug (https://bugs.launchpad.net/sahara/+bug/1521266) and have
asked in-channel, but I'll also post here just in case it rings a bell for
someone.

Recently, Sahara was changed to use oslo.service to launch our wsgi api
server.  Prior to that change, I was able to successfully run and debug the
sahara-api process using PyCharm.

After the change to use oslo.service as the launcher, I am still able to
run sahara-api using the same config that has worked for as long as I can
remember. When I run in debug mode, however, the api appears to start up
normally, but any request I send never receives a response and any
breakpoints I've tried never seem to get hit.

If I back out the changes to use oslo.service, I am able to debug
successfully again.

Any chance that sort of thing sounds familiar to anyone?

Thanks,
Chad
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Performance] Filling performance team working items list and taking part in their resolving

2015-11-30 Thread Dina Belova
Matt,

I guess it's good solution. Thanks for doing that!

Cheers,
Dina

On Mon, Nov 30, 2015 at 6:30 PM, Matt Riedemann 
wrote:

>
>
> On 11/27/2015 3:54 AM, Dina Belova wrote:
>
>> Hey OpenStack devs and operators!
>>
>> Folks, I would like to share list of working items Performance Team is
>> currently having in the backlog -
>> https://etherpad.openstack.org/p/perf-zoom-zoom [Work Items to grab]
>> section. I'm really encouraging you to fill it with concrete pieces of
>> work you think will be useful and take part in the
>> development/investigation by assigning some of them to yourself and
>> working on them :)
>>
>> Cheers,
>> Dina
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> One of the work items was to enable the nova metadata service in a large
> ops job. There are two voting large ops jobs in nova, one runs with
> nova-network and the other runs with neutron. Besides those differences,
> I'm not sure if there is any other difference in the jobs. So I guess we'd
> just need to pick which one runs the nova-api-metadata service rather than
> using config drive. The only other job I know of that runs that is the
> postgres job and that runs nova-network, so I'd say we turn on n-api-meta
> in the neutron large ops job.
>
> Are there any issues with that?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday December 1st at 19:00 UTC

2015-11-30 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday December 1st, at 19:00 UTC in #openstack-meeting.

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-24-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-24-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-24-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova mid cycle details

2015-11-30 Thread Murray, Paul (HP Cloud)
The rates are listed on the hotel information page on that site. All include 
tax and breakfast.

Paul

From: Michael Still [mailto:mi...@stillhq.com]
Sent: 28 November 2015 04:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Nova mid cycle details

Hey,

I filled in the first part of that page, but when it got to hotels I got 
confused. The web site doesn't seem to mention the nightly rate for the HP price.
Do you know what that is?

Thanks,
Michael

On Sat, Nov 28, 2015 at 3:06 AM, Murray, Paul (HP Cloud) 
wrote:
The Nova Mitaka mid cycle meetup is in Bristol, UK at the Hewlett Packard 
Enterprise office.

The mid cycle wiki page is here:
https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprint

Note that there is a web site for signing up for the event and booking hotel 
rooms at a reduced event rate here:
https://starcite.smarteventscloud.com/hpe/NovaMidCycleMeeting

If you want to book a room at the event rate you do need to register on that 
site.

There is also an Eventbrite event that was created before the above web site 
was available. Do not worry if you have registered using Eventbrite, we will 
recognize those registrations as well. But if you do want to book a room you 
will need to register again on the above site.

Paul

Paul Murray
Nova Technical Lead, HPE Cloud
Hewlett Packard Enterprise
+44 117 316 2527


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2015-11-30 17:58:45 +0100:
> On 11/30/2015 05:14 PM, Doug Hellmann wrote:
> > Excerpts from Dmitry Tantsur's message of 2015-11-30 10:06:25 +0100:
> >> On 11/28/2015 02:48 PM, Doug Hellmann wrote:
> >>> Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:
>  Liaisons,
> 
>  We're making good progress on adding reno to service projects as
>  we head to the Mitaka-1 milestone. Thank you!
> 
>  We also need to add reno to all of the other deliverables with
>  changes that might affect deployers. That means clients and other
>  libraries, SDKs, etc. with configuration options or where releases
>  can change deployment behavior in some way. Now that most teams
>  have been through this conversion once, it should be easy to replicate
>  for the other repositories in a similar way.
> 
>  Libraries have 2 audiences for release notes: developers consuming
>  the library and deployers pushing out new versions of the libraries.
>  To separate the notes for the two audiences, and avoid doing manually
>  something that we have been doing automatically, we can use reno
>  just for deployer release notes (changes in support for options,
>  drivers, etc.). That means the library repositories that need reno
>  should have it configured just like for the service projects, with
>  the separate jobs and a publishing location different from their
>  existing developer documentation. The developer docs can continue
>  to include notes for the developer audience.
> >>>
> >>> I've had a couple of questions about this split for release notes. The
> >>> intent is for developer-focused notes to continue to come from commit
> >>> messages and in-tree documentation, while using reno for new and
> >>> additional deployer-focused communication. Most commits to libraries
> >>> won't need reno release notes.
> >>
> >> This looks like unnecessary overcomplication. Why not use such a
> >> convenient tool for both kinds of release notes instead of having us
> >> invent and maintain one more place to put release notes, now for
> >
> > In the past we have had rudimentary release notes and changelogs
> > for developers to read based on the git commit messages. Since
> > deployers and developers care about different things, we don't want
> > to make either group sift through the notes meant for the other.
> > So, we publish notes in different ways.
> 
> Hmm, so maybe for small libraries with few changes it's still fine to 
> publish them together, what do you think?

I'm not sure why you would want to do that. Publishing the ChangeLog
contents in the developer documentation is (or can be) completely
automatic. It should only be possible to add reno notes for
deployer-facing changes, and those notes will need to be written
in a way the deployer can understand, which is not necessarily a
requirement for a commit message.

> > The thing that is new here is publishing release notes for changes
> > in libraries that deployers need to know about. While the Oslo code
> > was in the incubator, and being copied into applications, it was
> > possible to detect deployer-focused changes like new or deprecated
> > configuration options in the application and put the notes there.
> > Using shared libraries means those changes can happen without
> > application developers being aware of them, so the library maintainers
> > need to be publishing notes. Using reno for those notes is consistent
> > with the way they are handled in the applications, so we're extending
> > one tool to more repositories.
> >
> >> developers? It's already not so easy to explain reno to newcomers, this
> >> idea makes it even harder...
> >
> > Can you tell me more about the difficulty you've had? I would like to
> > improve the documentation for reno and for how we use it.
> 
> Usually people are stuck at the "how do I do this at all" stage :) We've
> even added it to the ironic developer FAQ. As for me, the official reno
> documentation is nice enough (but see below); maybe people are not aware
> of it.
> 
> Another "issue" (at least for our newcomers) with reno docs is that 
> http://docs.openstack.org/developer/reno/usage.html#generating-a-report 
> mentions the "reno report" command which is not something we all 
> actually use, we use these "tox -ereleasenotes" command. What is worse, 
> this command (I guess it's by design) does not catch release note files 
> that are just created locally. It took me time to figure out that I have 
> to commit release notes before "tox -ereleasenotes" would show them in 
> the rendered HTML.

The reno documentation is written for any user, not just OpenStack
developers. Those instructions should work if reno is installed,
even though we've wrapped reno in tox to make it simpler to run in
the gate. We can add some information about using tox to build
locally to the project team guide.

I'll look into 

Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-11-30 Thread melanie witt
On Nov 26, 2015, at 9:36, John Garbutt  wrote:

> A suggestion in the past, that I like, is creating a nova functional
> test that stress tests the quota code.
> 
> Hopefully that will be able to help reproduce the error.
> That should help prove if any proposed fix actually works.

+1, I think it's wise to get some data on the current state of quotas before 
choosing a redesign. IIRC, Joe Gordon described a test scenario he used to use 
to reproduce quota bugs locally, in one of the launchpad bugs. If we could 
automate something like that, we could use it to demonstrate how quotas 
currently behave during parallel requests and try things like disabling 
reservations. I also like the idea of being able to verify the effects of 
proposed fixes.
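
To make that concrete, such a functional test could take roughly this
shape (a sketch only; boot_one() is a hypothetical helper that issues a
single create-server request against a tenant whose instance quota is N):

    import concurrent.futures

    def stress_quota(boot_one, attempts=50):
        # Fire many create requests in parallel; with an instance quota
        # of N, more than N successes (or a wedged usage count afterward)
        # points at a quota race.
        with concurrent.futures.ThreadPoolExecutor(max_workers=attempts) as pool:
            results = list(pool.map(lambda _: boot_one(), range(attempts)))
        return sum(1 for ok in results if ok)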

-melanie (irc: melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-11-30 Thread Joshua Harlow

Hi all,

I just wanted to bring up an issue and a possible solution, and get
feedback on them from folks, because it seems to be an on-going problem
that shows up not when an application is initially deployed but as the
on-going operation and running of that application proceeds (i.e. after
running for a period of time).


The gist of the problem is the following:

A <some OpenStack project> has a need to ensure that no
application on the same machine can manipulate a given resource on that
same machine, so it uses the lock file pattern (acquire a *local* lock
file for that resource, manipulate that resource, release that lock
file) to do actions on that resource in a safe manner (note this does
not ensure safety outside of that machine; lock files are *not*
distributed locks).


The api that we expose from oslo is typically accessed via the following:

  oslo_concurrency.lockutils.synchronized(name, lock_file_prefix=None, 
external=False, lock_path=None, semaphores=None, delay=0.01)


or via its underlying library (that I extracted from oslo.concurrency 
and have improved to add more usefulness) @ 
http://fasteners.readthedocs.org/
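
For readers less familiar with it, usage looks roughly like this (the
lock name, prefix and path below are made up for illustration):

    from oslo_concurrency import lockutils

    # external=True serializes across processes on the same host by
    # creating a lock file (prefix + name) under lock_path; that file
    # is exactly what later needs cleaning up.
    @lockutils.synchronized('volume-42', lock_file_prefix='cinder-',
                            external=True, lock_path='/var/lock/cinder')
    def extend_volume():
        pass  # manipulate the resource while the lock is held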


The issue though for <some OpenStack project> is that each of
these projects now typically has a large number of lock files that exist
or have existed and no easy way to determine when those lock files can
be deleted (AFAIK no periodic task exists in said projects to clean up
lock files, or to delete them when they are no longer in use...) so what
happens is bugs like https://bugs.launchpad.net/cinder/+bug/1432387
appear and there is no simple solution to clean lock files up (since
oslo.concurrency is really not the right layer to know when a lock can
or cannot be deleted; only the application knows that...)


So then we get a few creative solutions like the following:

- https://review.openstack.org/#/c/241663/
- https://review.openstack.org/#/c/239678/
- (and others?)

So I wanted to ask the question: how are people involved in <some OpenStack project> cleaning up these files (are they at all?)


Another idea that I have also been proposing is to use offset locks.

This would allow for not creating X lock files, but instead creating a
*single* lock file per project and using offsets into it as the way to
lock. For example nova could/would create a 1MB (or larger/smaller)
*empty* file for locks; that would allow for 1,048,576 locks to be used
at the same time, which honestly should be way more than enough, and then
there would not need to be any lock cleanup at all... Is there any reason
this wasn't initially done back when this lock file code was created?
(https://github.com/harlowja/fasteners/pull/10 adds this functionality 
to the underlying library if people want to look it over)
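
To make the offset idea concrete, here is a minimal sketch using POSIX
byte-range locks via Python's fcntl module (the path and the
name-to-offset mapping are illustrative assumptions, not fasteners'
actual implementation):

    import fcntl
    import os

    LOCK_FILE = '/var/lock/myproject/offsets.lock'  # made-up path

    def lock_slot(fd, offset):
        # Lock a single byte at the given offset; each offset acts as an
        # independent lock, so one preallocated file serves many locks
        # and nothing per-lock is ever created or left behind.
        fcntl.lockf(fd, fcntl.LOCK_EX, 1, offset)

    def unlock_slot(fd, offset):
        fcntl.lockf(fd, fcntl.LOCK_UN, 1, offset)

    fd = os.open(LOCK_FILE, os.O_RDWR | os.O_CREAT, 0o600)
    lock_slot(fd, 42)
    try:
        pass  # manipulate the resource mapped to slot 42
    finally:
        unlock_slot(fd, 42)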


In general I would like to hear people's thoughts/ideas/complaints/other,

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Ruby Loo
On 30 November 2015 at 10:19, Derek Higgins  wrote:

> Hi All,
>
> A few months ago tripleo switched from its devtest-based CI to one that
> was based on instack. Before doing this we anticipated disruption in the
> ci jobs and removed them from non-tripleo projects.
>
> We'd like to investigate adding it back to heat and ironic as these
> are the two projects where we find our ci provides the most value. But we
> can only do this if the results from the job are treated as voting.
>

What does this mean? That the tripleo job could vote and do a -1 and block
ironic's gate?


>
> In the past most of the non tripleo projects tended to ignore the
> results from the tripleo job as it wasn't unusual for the job to be broken
> for days at a time. The thing is, ignoring the results of the job is the reason
> (the majority of the time) it was broken in the first place.
> To decrease the number of breakages we are now no longer running
> master code for everything (for the non tripleo projects we bump the
> versions we use periodically if they are working). I believe with this
> model the CI jobs we run have become a lot more reliable, there are still
> breakages but far less frequently.
>
> What I'm proposing is that we add at least one of our tripleo jobs back to
> both heat and ironic (and other projects associated with them, e.g. clients,
> ironic-inspector etc.), tripleo will switch to running latest master of
> those repositories, and the cores approving on those projects should wait
> for a passing CI job before hitting approve. So how do people feel about
> doing this? Can we give it a go? A couple of people have already expressed
> an interest in doing this but I'd like to make sure we're all in agreement
> before switching it on.
>
> This seems to indicate that the tripleo jobs are non-voting, or at least
won't block the gate -- so I'm fine with adding tripleo jobs to ironic. But
if you want cores to wait/make sure they pass, then shouldn't they be
voting? (Guess I'm a bit confused.)

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Anita Kuno
On 11/30/2015 12:33 PM, Dmitry Tantsur wrote:
> On 11/30/2015 06:24 PM, Anita Kuno wrote:
>> On 11/30/2015 12:17 PM, Dmitry Tantsur wrote:
>>> On 11/30/2015 05:34 PM, Anita Kuno wrote:
 On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:
> Hi,
>
>
>
>This is to announce that  we have  setup  a  Third Party CI
> environment
> for Proliant iLO Drivers. The results will be posted  under "HP
> Proliant CI
> check" section in Non-voting mode.   We will be  running the basic
> deploy
> tests for  iscsi_ilo and agent_ilo drivers  for the check queue.  We
> will
> first  pursue to make the results consistent and over a period of
> time we
> will try to promote it to voting mode.
>
>
>
>  For more information check the Wiki:
> https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
>
> ,
> for any issues please contact ilo_driv...@groups.ext.hpe.com
>
>
>
>
>
> Thanks & Regards,
>
> Gururaja Grandhi
>
> R&D Project Manager
>
> HPE Proliant  Ironic  Project
>
>
>
> __
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 Please do not post announcements to the mailing list about the
 existence
 of your third party ci system.
>>>
>>> Could you please explain why? As a developer I appreciated this post.
>>>

 Ensure your third party ci system is listed here:
 https://wiki.openstack.org/wiki/ThirdPartySystems (there are
 instructions on the bottom of the page) as well as fill out a template
 on your system so that folks can find your third party ci system the
 same as all other third party ci systems.
>>>
>>> Wiki is not an announcement FWIW.
>>
>> If Ironic wants to hear about announced drivers they have agreed to do
>> so as part of their weekly irc meeting:
>> 2015-11-30T17:19:55   I think it is reasonable for each
>> driver team, if they want to announce it in the meeting, to do so on the
>> whiteboard section for their driver. we'll all see that in the weekly
>> meeting
>> 2015-11-30T17:20:08   but it will avoid spamming the whole
>> openstack list
>>
>> http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-11-30.log
>>
> 
> I was there and I already said that I'm not buying into "spamming the
> list" argument. There are much less important things that I see here
> right now, even though I do actively use filters to only see potentially
> relevant things. We've been actively (and not very successfully)
> encouraging people to use ML instead of IRC conversations (or even
> private messages and video chats), and this thread does not seem in line
> with it.

Please discuss this with the leadership of your project.

All announcements about the existence of a third party ci will be
redirected to the third party systems wikipage.

Thank you,
Anita.

> 
>>
>> Thank you,
>> Anita.
>>
>>>

 Ensure you are familiar with the requirements for third party systems
 listed here:
 http://docs.openstack.org/infra/system-config/third_party.html#requirements



 Thank you,
 Anita.
>>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan

2015-11-30 Thread Ian Cordasco
On 11/23/15, 14:20, "Flavio Percoco"  wrote:

>Greetings,
>
>I'd like to propose adding Sabari Kumar Murugesan to the glance-core
>team. Sabari has been contributing quite a bit to the project with
>great reviews and he's also been providing great feedback in matters
>related to the design of the service, libraries and other areas of the
>team.
>
>I believe he'd be a great addition to the glance-core team as he has
>demonstrated a good knowledge of the code, service and project's
>priorities.
>
>If Sabari accepts the invitation and there are no objections from other
>members of the community, I'll proceed to add Sabari to the team a
>week from now.
>
>Thanks,
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco

I'm no longer a core, but I'm +1 on this.

Congrats Sabari!

--
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-30 Thread Carl Baldwin
++

On Wed, Nov 25, 2015 at 3:05 PM, Assaf Muller  wrote:
> On Mon, Nov 23, 2015 at 7:02 AM, Rossella Sblendido  
> wrote:
>>
>>
>> On 11/20/2015 03:54 AM, Armando M. wrote:
>>>
>>>
>>>
>>> On 19 November 2015 at 18:26, Assaf Muller wrote:
>>>
>>> On Wed, Nov 18, 2015 at 9:14 PM, Armando M. wrote:
>>> > Hi Neutrites,
>>> >
>>> > We are nearly two weeks away from the end of Mitaka 1.
>>> >
>>> > I am writing this email to invite you to be mindful to what you
>>> review,
>>> > especially in the next couple of weeks. Whenever you have the time
>>> to review
>>> > code, please consider giving priority to the following:
>>> >
>>> > Patches that target blueprints targeted for Mitaka;
>>> > Patches that target bugs that are either critical or high;
>>> > Patches that target rfe-approved 'bugs';
>>> > Patches that target specs that have followed the most current
>>> submission
>>> > process;
>>>
>>> Is it possible to create Gerrit dashboards for patches that answer
>>> these
>>> criteria, and then persist the links in Neutron's dashboards devref
>>> page?
>>> http://docs.openstack.org/developer/neutron/dashboards/index.html
>>> That'd be super useful.
>>>
>>>
>>> We should look into that, but to be perfectly honest I am not sure how
>>> easy it would be, since we'd need to cross-reference content that lives
>>> into gerrit as well as launchpad. Would that even be possible?
>>
>>
>> To cross-reference we can use the bug ID or the blueprint name.
>>
>> I created a script that queries launchpad to get:
>> 1) Bug number of the bugs tagged with approved-rfe
>> 2) Bug number of the critical/high bugs
>> 3) list of blueprints targeted for the current milestone (mitaka-1)
>>
>> With this info the script builds a .dash file that can be used by
>> gerrit-dash-creator [2] to produce a dashboard url .
>>
>> The script prints also the queries that can be used in gerrit UI directly,
>> e.g.:
>> Critical/High Bugs
>> (topic:bug/1399249 OR topic:bug/1399280 OR topic:bug/1443421 OR
>> topic:bug/1453350 OR topic:bug/1462154 OR topic:bug/1478100 OR
>> topic:bug/1490051 OR topic:bug/1491131 OR topic:bug/1498790 OR
>> topic:bug/1505575 OR topic:bug/1505843 OR topic:bug/1513678 OR
>> topic:bug/1513765 OR topic:bug/1514810)
>>
>>
>> This is the dashboard I get right now [3]
>>
>> I tried in many ways to get Gerrit to filter patches if the commit message
>> contains a bug ID. Something like:
>>
>> (message:"#1399249" OR message:"#1399280" OR message:"#1443421" OR
>> message:"#1453350" OR message:"#1462154" OR message:"#1478100" OR
>> message:"#1490051" OR message:"#1491131" OR message:"#1498790" OR
>> message:"#1505575" OR message:"#1505843" OR message:"#1513678" OR
>> message:"#1513765" OR message:"#1514810")
>>
>> but it doesn't work well, the result of the filter contains patches that
>> have nothing to do with the bugs queried.
>> That's why I had to filter using the topic.
>>
>> CAVEAT: To make the dashboard work, bug fixes must use the topic "bug/ID"
>> and patches implementing a blueprint the topic "bp/name". If a patch is not
>> following this convention it won't be showed in the dashboard, since the
>> topic is used as filter. Most of us use this convention already anyway so I
>> hope it's not too much of a burden.
>>
>> Feedback is appreciated :)
>
> Rossella this is exactly what I wanted :) Let's iterate on the patch
> and merge it.
> We could then consider running the script automatically on a daily
> basis and publishing the
> resulting URL in a nice bookmarkable place.
>
>>
>> [1] https://review.openstack.org/248645
>> [2] https://github.com/openstack/gerrit-dash-creator
>> [3] https://goo.gl/sglSbp
>>
>>>
>>> Btw, I was looking at the current blueprint assignments [1] for Mitaka:
>>> there are some blueprints that still need assignee, approver and
>>> drafter; we should close the gap. If there are volunteers, please reach
>>> out to me.
>>>
>>> Thanks,
>>> Armando
>>>
>>> [1] https://blueprints.launchpad.net/neutron/mitaka/+assignments
>>>
>>>
>>> >
>>> > Everything else should come later, no matter how easy or interesting
>>> it is
>>> > to review; remember that as a community we have the collective duty
>>> to work
>>> > towards a common (set of) target(s), as being planned in
>>> collaboration with
>>> > the Neutron Drivers team and the larger core team.
>>> >
>>> > I would invite submitters to ensure that the Launchpad resources
>>> > (blueprints, and bug report) capture the most updated view in terms
>>> of
>>> > patches etc. Work with your approver to help him/her be focussed
>>> where it
>>> > matters most.
>>> >
>>> > Finally, we had plenty of discussions at the design summit, and some
>>> of
>>> > those discussions will have to be followed up with actions (aka code
>>> in
>>> > OpenStack lingo). Even though, we no long

Re: [openstack-dev] [Neutron] Call for review focus

2015-11-30 Thread Carl Baldwin
On Tue, Nov 24, 2015 at 4:47 AM, Rossella Sblendido  wrote:
>> I looked for the address scopes blueprint [1] which is targeted for
>> Mitaka-1 [2] and there are 6 (or 5, one is in the gate) patches on the
>> bp/address-scopes topic [3].  It isn't obvious to me yet why it didn't
>> get picked up on the dashboard.  I've only started to look in to this
>> and may not have much time right now.  I wondered if you could easily
>> tell why it didn't get picked up.
>>
>> Isn't it missing bp/ ? From the URL I can only see topic:address-scope,
>> which isn't the right one.
>
>
> Yes Armando is right. I fixed that. Another reason is that I am filtering
> out patches that are WIP or that failed Jenkins tests. This can be changed
> anyway. This is what I get now (after fixing the missing 'bp/') [1]

I tend to think that eliminating patches that are failing tests may
not be a good thing to do in general.  I think it makes the dashboard
too dynamic in the face of all of the unreliable tests that we see
come and go.  It would go dark when we see those check queue bombs
that fail all of the patches.  This is just my opinion and I could
probably adjust the end result to suit my tastes.

I do filter patch sets with failing tests in some of my own personal queries.
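
For reference, assembling such a query programmatically is
straightforward; a sketch assuming launchpadlib, with sample bug numbers
as pulled live from Launchpad (the Verified-label term is the standard
Gerrit way to hide patches with failing CI):

    # Sketch: pull bug numbers from Launchpad, then build a Gerrit query.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('dash-sketch', 'production')
    tasks = lp.projects['neutron'].searchTasks(tags=['rfe-approved'])
    bugs = [task.bug.id for task in tasks]

    query = 'status:open (%s)' % ' OR '.join('topic:bug/%s' % b for b in bugs)
    # Append ' label:Verified>=1' to drop patches failing CI, or leave it
    # out (as preferred above) to keep them visible.
    print(query)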

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 06:24 PM, Anita Kuno wrote:

On 11/30/2015 12:17 PM, Dmitry Tantsur wrote:

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



   This is to announce that we have set up a Third Party CI environment
for Proliant iLO Drivers. The results will be posted under the "HP
Proliant CI check" section in non-voting mode. We will be running the
basic deploy tests for the iscsi_ilo and agent_ilo drivers for the check
queue. We will first work to make the results consistent, and over a
period of time we will try to promote the system to voting mode.



 For more information check the Wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci;
for any issues please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.


If Ironic wants to hear about announced drivers they have agreed to do
so as part of their weekly irc meeting:
2015-11-30T17:19:55   I think it is reasonable for each
driver team, if they want to announce it in the meeting, to do so on the
whiteboard section for their driver. we'll all see that in the weekly
meeting
2015-11-30T17:20:08   but it will avoid spamming the whole
openstack list

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-11-30.log


I was there and I already said that I'm not buying into the "spamming the
list" argument. There are much less important things that I see here
right now, even though I do actively use filters to only see potentially
relevant things. We've been actively (and not very successfully)
encouraging people to use the ML instead of IRC conversations (or even
private messages and video chats), and this thread does not seem in line
with that.




Thank you,
Anita.





Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements


Thank you,
Anita.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Anita Kuno
On 11/30/2015 12:17 PM, Dmitry Tantsur wrote:
> On 11/30/2015 05:34 PM, Anita Kuno wrote:
>> On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:
>>> Hi,
>>>
>>>
>>>
>>>   This is to announce that  we have  setup  a  Third Party CI
>>> environment
>>> for Proliant iLO Drivers. The results will be posted  under "HP
>>> Proliant CI
>>> check" section in Non-voting mode.   We will be  running the basic
>>> deploy
>>> tests for  iscsi_ilo and agent_ilo drivers  for the check queue.  We
>>> will
>>> first  pursue to make the results consistent and over a period of
>>> time we
>>> will try to promote it to voting mode.
>>>
>>>
>>>
>>> For more information check the Wiki:
>>> https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
>>> ,
>>> for any issues please contact ilo_driv...@groups.ext.hpe.com
>>>
>>>
>>>
>>>
>>>
>>> Thanks & Regards,
>>>
>>> Gururaja Grandhi
>>>
>>> R&D Project Manager
>>>
>>> HPE Proliant  Ironic  Project
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Please do not post announcements to the mailing list about the existence
>> of your third party ci system.
> 
> Could you please explain why? As a developer I appreciated this post.
> 
>>
>> Ensure your third party ci system is listed here:
>> https://wiki.openstack.org/wiki/ThirdPartySystems (there are
>> instructions on the bottom of the page) as well as fill out a template
>> on your system so that folks can find your third party ci system the
>> same as all other third party ci systems.
> 
> Wiki is not an announcement FWIW.

If Ironic wants to hear about announced drivers they have agreed to do
so as part of their weekly irc meeting:
2015-11-30T17:19:55   I think it is reasonable for each
driver team, if they want to announce it in the meeting, to do so on the
whiteboard section for their driver. we'll all see that in the weekly
meeting
2015-11-30T17:20:08   but it will avoid spamming the whole
openstack list

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2015-11-30.log

Thank you,
Anita.

> 
>>
>> Ensure you are familiar with the requirements for third party systems
>> listed here:
>> http://docs.openstack.org/infra/system-config/third_party.html#requirements
>>
>>
>> Thank you,
>> Anita.
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-11-30 Thread Steve Martinelli

In the Mitaka release, the keystone team will be removing functionality
that was marked for deprecation in Kilo, and marking certain functionality
as deprecated in Mitaka (to be removed after at least two more cycles).

removing deprecated functionality
=

This is not a full list, but these are by and large the most contentious
topics.

* Eventlet support: This was marked as deprecated back in Kilo and is
currently scheduled to be removed in Mitaka in favor of running keystone in
a WSGI server. This is currently how we test keystone in the gate, and
based on the feedback we received at the summit, a lot of folks have moved
to running keystone under Apache since we announced this change.
OpenStack's CI is configured to mainly test using this deployment model.
See [0] for when we started to issue warnings.

* Using LDAP to store assignment data: Like eventlet support, this feature
was also deprecated in Kilo and scheduled to be removed in Mitaka. To store
assignment data (role assignments) we suggest using an SQL based backend
rather than LDAP. See [1] for when we started to issue warnings.

* Using LDAP to store project and domain data: The same as above, see [2]
for when we started to issue warnings.

* for a complete list:
https://blueprints.launchpad.net/keystone/+spec/removed-as-of-mitaka

functions deprecated as of mitaka
=

The following will adhere to the TC’s new standard on deprecating
functionality [3].

* LDAP write support for identity: We suggest simply not writing to LDAP
for users and groups; this effectively makes create, delete and update of
LDAP users and groups no-ops. It will be removed in the O release.

* PKI tokens: We suggest using UUID or fernet tokens instead. The PKI token
format has had issues with security and causes problems with both horizon
and swift when the token contains an excessively large service catalog. It
will be removed in the O release.

* v2.0 of our API: Lastly, the keystone team recommends using v3 of our
Identity API. We have had the intention of deprecating v2.0 for a while
(since Juno, actually), and have finally decided to formally deprecate it.
OpenStack’s CI runs successful v3-only jobs, there is complete feature
parity with v2.0, and we feel the CLI exposed via openstackclient is mature
enough to say with certainty that we can deprecate v2.0. It will be around
for at least FOUR releases, with the authentication routes
(POST /auth/tokens) potentially sticking around for longer.

* for a complete list:
https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-mitaka
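
To make the v3 recommendation above concrete, the basic authentication
request looks roughly like this (a sketch; the endpoint and credentials
are placeholders):

    import json
    import requests

    body = {"auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "demo",
                                  "domain": {"id": "default"},
                                  "password": "secret"}}},
        "scope": {"project": {"name": "demo",
                              "domain": {"id": "default"}}}}}
    resp = requests.post('http://keystone.example.com:5000/v3/auth/tokens',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    # v3 returns the token itself in a response header.
    token = resp.headers['X-Subject-Token']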


If you have ANY concerns about the above, please speak up now and let us
know!


Thanks!

Steve Martinelli
OpenStack Keystone Project Team Lead


[0]
https://github.com/openstack/keystone/blob/b475040636ccc954949e6372a60dd86845644611/keystone/server/eventlet.py#L77-L80
[1]
https://github.com/openstack/keystone/blob/28a30f53a6c0d4e84d60795e08f137e8194abbe9/keystone/assignment/backends/ldap.py#L34
[2]
https://github.com/openstack/keystone/blob/28a30f53a6c0d4e84d60795e08f137e8194abbe9/keystone/resource/backends/ldap.py#L36-L39

[3]
http://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Lucas Alvares Gomes
I'm +1 for re-enabling it in Ironic. I think ironic plays a big role
in the tripleo puzzle so we should be aware of breakages in the
tripleo project.

On Mon, Nov 30, 2015 at 3:19 PM, Derek Higgins  wrote:
> Hi All,
>
> A few months ago tripleo switched from its devtest-based CI to one that
> was based on instack. Before doing this we anticipated disruption in the ci
> jobs and removed them from non-tripleo projects.
>
> We'd like to investigate adding it back to heat and ironic as these are
> the two projects where we find our ci provides the most value. But we can
> only do this if the results from the job are treated as voting.
>
> In the past most of the non tripleo projects tended to ignore the
> results from the tripleo job as it wasn't unusual for the job to be broken
> for days at a time. The thing is, ignoring the results of the job is the reason
> (the majority of the time) it was broken in the first place.
> To decrease the number of breakages we are now no longer running master
> code for everything (for the non tripleo projects we bump the versions we
> use periodically if they are working). I believe with this model the CI jobs
> we run have become a lot more reliable, there are still breakages but far
> less frequently.
>
> What I'm proposing is that we add at least one of our tripleo jobs back to
> both heat and ironic (and other projects associated with them, e.g. clients,
> ironic-inspector etc.), tripleo will switch to running latest master of
> those repositories, and the cores approving on those projects should wait for
> a passing CI job before hitting approve. So how do people feel about doing
> this? Can we give it a go? A couple of people have already expressed an
> interest in doing this but I'd like to make sure we're all in agreement
> before switching it on.
>
> thanks,
> Derek.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:34 PM, Anita Kuno wrote:

On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:

Hi,



  This is to announce that we have set up a Third Party CI environment
for Proliant iLO Drivers. The results will be posted under the "HP
Proliant CI check" section in non-voting mode. We will be running the
basic deploy tests for the iscsi_ilo and agent_ilo drivers for the check
queue. We will first work to make the results consistent, and over a
period of time we will try to promote the system to voting mode.



For more information check the Wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci;
for any issues please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please do not post announcements to the mailing list about the existence
of your third party ci system.


Could you please explain why? As a developer I appreciated this post.



Ensure your third party ci system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions on the bottom of the page) as well as fill out a template
on your system so that folks can find your third party ci system the
same as all other third party ci systems.


Wiki is not an announcement FWIW.



Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements

Thank you,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Derek Higgins



On 30/11/15 17:03, Dmitry Tantsur wrote:

On 11/30/2015 04:19 PM, Derek Higgins wrote:

Hi All,

 A few months ago tripleo switched from its devtest-based CI to one that
was based on instack. Before doing this we anticipated disruption in the
ci jobs and removed them from non-tripleo projects.

 We'd like to investigate adding it back to heat and ironic as these
are the two projects where we find our ci provides the most value. But
we can only do this if the results from the job are treated as voting.

 In the past most of the non tripleo projects tended to ignore the
results from the tripleo job as it wasn't unusual for the job to broken
for days at a time. The thing is, ignoring the results of the job is the
reason (the majority of the time) it was broken in the first place.
 To decrease the number of breakages we are now no longer running
master code for everything (for the non tripleo projects we bump the
versions we use periodically if they are working). I believe with this
model the CI jobs we run have become a lot more reliable, there are
still breakages but far less frequently.

What I'm proposing is that we add at least one of our tripleo jobs back to
both heat and ironic (and other projects associated with them, e.g. clients,
ironic-inspector etc.), tripleo will switch to running latest master of
those repositories, and the cores approving on those projects should wait
for a passing CI job before hitting approve. So how do people feel
about doing this? Can we give it a go? A couple of people have already
expressed an interest in doing this but I'd like to make sure we're all
in agreement before switching it on.


I'm one of these "people", so definitely +1 here.

By the way, is it possible to NOT run tripleo-ci on changes touching
only tests and docs? We do the same for our devstack jobs; it saves some
infra resources.
We don't do it currently, but I'm sure we could and it sounds like a 
good idea to me.






thanks,
Derek.

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 11/30/2015

2015-11-30 Thread Anastasia Kuznetsova
Thanks everyone for joining us!

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-11-30-16.01.html
Meeting log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-11-30-16.01.log.html

-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 04:19 PM, Derek Higgins wrote:

Hi All,

 A few months ago tripleo switched from its devtest-based CI to one that
was based on instack. Before doing this we anticipated disruption in the
ci jobs and removed them from non-tripleo projects.

 We'd like to investigate adding it back to heat and ironic as these
are the two projects where we find our ci provides the most value. But
we can only do this if the results from the job are treated as voting.

 In the past most of the non tripleo projects tended to ignore the
results from the tripleo job as it wasn't unusual for the job to be broken
for days at a time. The thing is, ignoring the results of the job is the
reason (the majority of the time) it was broken in the first place.
 To decrease the number of breakages we are now no longer running
master code for everything (for the non tripleo projects we bump the
versions we use periodically if they are working). I believe with this
model the CI jobs we run have become a lot more reliable, there are
still breakages but far less frequently.

What I'm proposing is that we add at least one of our tripleo jobs back to
both heat and ironic (and other projects associated with them, e.g. clients,
ironic-inspector etc.), tripleo will switch to running latest master of
those repositories, and the cores approving on those projects should wait
for a passing CI job before hitting approve. So how do people feel
about doing this? Can we give it a go? A couple of people have already
expressed an interest in doing this but I'd like to make sure we're all
in agreement before switching it on.


I'm one of these "people", so definitely +1 here.

By the way, is it possible to NOT run tripleo-ci on changes touching 
only tests and docs? We do the same for our devstack jobs; it saves some
infra resources.




thanks,
Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Dmitry Tantsur

On 11/30/2015 05:14 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2015-11-30 10:06:25 +0100:

On 11/28/2015 02:48 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:

Liaisons,

We're making good progress on adding reno to service projects as
we head to the Mitaka-1 milestone. Thank you!

We also need to add reno to all of the other deliverables with
changes that might affect deployers. That means clients and other
libraries, SDKs, etc. with configuration options or where releases
can change deployment behavior in some way. Now that most teams
have been through this conversion once, it should be easy to replicate
for the other repositories in a similar way.

Libraries have 2 audiences for release notes: developers consuming
the library and deployers pushing out new versions of the libraries.
To separate the notes for the two audiences, and avoid doing manually
something that we have been doing automatically, we can use reno
just for deployer release notes (changes in support for options,
drivers, etc.). That means the library repositories that need reno
should have it configured just like for the service projects, with
the separate jobs and a publishing location different from their
existing developer documentation. The developer docs can continue
to include notes for the developer audience.


I've had a couple of questions about this split for release notes. The
intent is for developer-focused notes to continue to come from commit
messages and in-tree documentation, while using reno for new and
additional deployer-focused communication. Most commits to libraries
won't need reno release notes.


This looks like unnecessary overcomplication. Why not use such a
convenient tool for both kinds of release notes instead of having us
invent and maintain one more place to put release notes, now for


In the past we have had rudimentary release notes and changelogs
for developers to read based on the git commit messages. Since
deployers and developers care about different things, we don't want
to make either group sift through the notes meant for the other.
So, we publish notes in different ways.


Hmm, so maybe for small libraries with few changes it's still fine to 
publish them together, what do you think?




The thing that is new here is publishing release notes for changes
in libraries that deployers need to know about. While the Oslo code
was in the incubator, and being copied into applications, it was
possible to detect deployer-focused changes like new or deprecated
configuration options in the application and put the notes there.
Using shared libraries means those changes can happen without
application developers being aware of them, so the library maintainers
need to be publishing notes. Using reno for those notes is consistent
with the way they are handled in the applications, so we're extending
one tool to more repositories.


developers? It's already not so easy to explain reno to newcomers, this
idea makes it even harder...


Can you tell me more about the difficulty you've had? I would like to
improve the documentation for reno and for how we use it.


Usually people are stuck at the "how do I do this at all" stage :) We've
even added it to the ironic developer FAQ. As for me, the official reno
documentation is nice enough (but see below); maybe people are not aware
of it.


Another "issue" (at least for our newcomers) with reno docs is that 
http://docs.openstack.org/developer/reno/usage.html#generating-a-report 
mentions the "reno report" command which is not something we all 
actually use, we use these "tox -ereleasenotes" command. What is worse, 
this command (I guess it's by design) does not catch release note files 
that are just created locally. It took me time to figure out that I have 
to commit release notes before "tox -ereleasenotes" would show them in 
the rendered HTML.


Finally, people are confused by how our release note jobs handle 
branches. E.g. ironic-inspector release notes [1] currently seem to show 
release notes from stable/liberty (judging by the version), so no 
current items [2] are shown.


[1] http://docs.openstack.org/releasenotes/ironic-inspector/unreleased.html
[2] for example 
http://docs-draft.openstack.org/18/250418/2/gate/gate-ironic-inspector-releasenotes/f0b9363//releasenotes/build/html/unreleased.html




Doug





Doug



After we start using reno for libraries, the release announcement
email tool will be updated to use those same notes to build the
message in addition to looking at the git change log. This will be
a big step toward unifying the release process for services and
libraries, and will allow us to make progress on completing the
automation work we have planned for this cycle.

It's not necessary to add reno to the liberty branch for library
projects, since we tend to backport far fewer changes to libraries.
If you maintain a library that does see a lot of backports, by all
means go

Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-30 Thread Anita Kuno
On 11/23/2015 01:00 PM, Jim Rollenhagen wrote:
> On Mon, Nov 16, 2015 at 06:05:54AM -0800, Jim Rollenhagen wrote:
>>
>> Another idea I floated last week was to do a virtual midcycle of sorts.
>> Treat it like a normal midcycle in that everyone tells their management
>> "I'm out for 3-4 days for the midcycle", but they don't travel anywhere.
>> We come up with an agenda, see if there's any planning/syncing work to
>> do, or if it's all just hacking on code/reviews.
>>
>> Then we can set up some hangouts (or similar) to get people in the same
>> "room" working on things. Time zones will get weird, but we tend to
>> split into smaller groups at the midcycle anyway; this is just more
>> timezone-aligned. We can also find windows where time zones overlap when
>> we want to go across those boundaries. Disclaimer: people may need to
>> work some weird hours to do this well.
>>
>> I think this might get a little bit bumpy, but if it goes relatively
>> well we can try to improve on it for the future. Worst case, it's a
>> total failure and is roughly equivalent to the "no midcycle" option.
> 
> Nobody has objected, so we're going to roll with this. See y'all there. :)
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Listing your virtual sprint on the virtual sprints wikipage is helpful,
so that folks who might not work on ironic daily can consider helping:
https://wiki.openstack.org/wiki/VirtualSprints

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread Walter A. Boring IV
As a side note to the DR discussion here, there was a session in Tokyo
that talked about a new DR project called Smaug. You can see their
mission statement here:
https://launchpad.net/smaug

https://github.com/openstack/smaug

There is another service in the making called DRagon:
https://www.youtube.com/watch?v=upCzuFnswtw
http://www.slideshare.net/AlonMarx/dragon-and-cinder-v-brownbag-54639869

Yes, that's two DR-like services starting in OpenStack that are related
to dragons.


Walt



Sean and Michal,

In fact, there is a reason that I ask this question. Recently I have been
confused about whether cinder should provide Disaster Recovery
capabilities for storage resources, like volumes. I mean we have volume
replication v1&v2, but for DR, especially DR between two independent
OpenStack sites (production and DR site), I feel we still need more
features to support it, for example consistency groups for replication,
etc. I'm not sure if those features belong in Cinder or in some new
project for DR.

BR
WangHao

2015-11-30 3:02 GMT+08:00 Sean McGinnis :

On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:

On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:

Hi guys,

I notice nova has a clarification of project scope:
http://docs.openstack.org/developer/nova/project_scope.html

I want to find cinder's, but failed. Do you know where to find it?

It's important to let developers know what features should be
introduced into cinder and what shouldn't.

BR
Wang Hao

I believe the Nova team needed to formalize the scope to have an explanation
for all the "this doesn't belong in Nova" comments on feature requests.
Does Cinder suffer from similar problems? From my perspective it's not
critically needed.

I agree. I haven't seen a need for something like that with Cinder. Wang
Hao, is there a reason you feel you need that?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] notification subteam meeting

2015-11-30 Thread Balázs Gibizer
Hi, 

The next meeting of the nova notification subteam will happen on Tuesday
2015-12-01 at 20:00 UTC [1] on #openstack-meeting-alt on freenode.

Agenda [2]:
- Status of the outstanding specs and code reviews
- Mid-cycle
- AOB

See you there.

Cheers,
Gibi

 [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20151201T20 
 [2] https://wiki.openstack.org/wiki/Meetings/NovaNotification


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Anita Kuno
On 11/30/2015 11:25 AM, Gururaj Grandhi wrote:
> Hi,
> 
> 
> 
> This is to announce that we have set up a Third Party CI environment
> for Proliant iLO Drivers. The results will be posted under the "HP
> Proliant CI check" section in non-voting mode. We will be running the
> basic deploy tests for the iscsi_ilo and agent_ilo drivers in the check
> queue. We will first work to make the results consistent, and over time
> we will try to promote the system to voting mode.
> 
> 
> 
>    For more information, check the wiki:
> https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci ;
> for any issues, please contact ilo_driv...@groups.ext.hpe.com
> 
> 
> 
> 
> 
> Thanks & Regards,
> 
> Gururaja Grandhi
> 
> R&D Project Manager
> 
> HPE Proliant  Ironic  Project
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Please do not post announcements to the mailing list about the existence
of your third-party CI system.

Ensure your third-party CI system is listed here:
https://wiki.openstack.org/wiki/ThirdPartySystems (there are
instructions at the bottom of the page), and fill out a template for
your system so that folks can find your third-party CI system the same
way as all other third-party CI systems.

Ensure you are familiar with the requirements for third party systems
listed here:
http://docs.openstack.org/infra/system-config/third_party.html#requirements

Thank you,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-11-30 Thread Gururaj Grandhi
Hi,



This is to announce that we have set up a Third Party CI environment
for Proliant iLO Drivers. The results will be posted under the "HP
Proliant CI check" section in non-voting mode. We will be running the
basic deploy tests for the iscsi_ilo and agent_ilo drivers in the check
queue. We will first work to make the results consistent, and over time
we will try to promote the system to voting mode.



   For more information, check the wiki:
https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci ;
for any issues, please contact ilo_driv...@groups.ext.hpe.com





Thanks & Regards,

Gururaja Grandhi

R&D Project Manager

HPE Proliant  Ironic  Project
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues

2015-11-30 Thread Jeremy Stanley
On 2015-11-30 20:49:46 +1100 (+1100), Joshua Hesketh wrote:
> Freenode is currently experiencing a severe DDoS attack that is
> having an effect on our bots. As such, the meetbot, IRC logging, and
> gerrit watcher are intermittently available.
> 
> We expect the bots to resume their normal function once Freenode
> has recovered. For now, meetings may have to be postponed or
> minuted by hand.

I'm not sure if the attacks on Freenode have subsided yet, but I was
finally able to get the "openstack" meetbot reconnected around 15:45
UTC, and it has seemed stable for the half hour since. That's not to
say I have much faith at the moment that it'll stick around, but it's
at least an improvement.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-30 Thread Doug Hellmann
Excerpts from sean roberts's message of 2015-11-30 07:57:54 -0800:
> How about:
> First-timers assign a bug to a mentor, and the mentor takes
> responsibility for guiding the first-timer through the bug to completion.

That would mean the learning process is different from what we want the
regular process to be.

If the problem is identifying "In Progress" bugs that are actually not
being worked on, then let's figure out a way to make that easier.
sdague's point about the auto-abandon process may help. We could query
gerrit for "stale" reviews that would have met the old abandon
requirements and that refer to bugs, for example. Using that
information, someone could follow up with the patch owner to see if the
work is actually abandoned, before changing the bug status or
encouraging the owner to abandon the patch.
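
To make that concrete, here is a minimal sketch of such a query, under
the assumption that only the public Gerrit REST API is used; the
four-week age threshold and the bug-footer pattern are illustrative
choices, not project policy:

    import json
    import re

    import requests

    GERRIT_URL = 'https://review.openstack.org'
    # Launchpad bug references appear as commit message footers like
    # "Closes-Bug: #1234567"; this pattern is an approximation.
    BUG_FOOTER = re.compile(r'(?:Closes|Partial|Related)-Bug:\s*#?(\d+)',
                            re.IGNORECASE)

    def stale_reviews_with_bugs(project, age='4w'):
        # "age:4w" matches changes not updated for four weeks, roughly
        # the old auto-abandon window.
        resp = requests.get(
            GERRIT_URL + '/changes/',
            params={'q': 'status:open project:%s age:%s' % (project, age),
                    'o': ['CURRENT_REVISION', 'CURRENT_COMMIT']})
        # Gerrit prefixes its JSON responses with ")]}'" to defeat
        # cross-site script inclusion; drop that first line.
        changes = json.loads(resp.text.split('\n', 1)[1])
        for change in changes:
            commit = change['revisions'][change['current_revision']]['commit']
            for bug in BUG_FOOTER.findall(commit['message']):
                yield change['_number'], bug, change['subject']

    if __name__ == '__main__':
        for number, bug, subject in stale_reviews_with_bugs('openstack/nova'):
            print('review %s looks stale; bug %s: %s' % (number, bug, subject))

The output would only be a starting point for the human follow-up
described above, not something to act on automatically. A companion
sketch for the discoverability side of the problem appears below, after
the quoted thread.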

> 
> Per project, a few people volunteer themselves as mentors. As easy as
> responding to [project][mentor] emails.
> 
> On Monday, November 30, 2015, Sean Dague  wrote:
> 
> > On 11/25/2015 03:22 PM, Shamail wrote:
> > > Hi,
> > >
> > >> On Nov 25, 2015, at 11:05 PM, Doug Hellmann wrote:
> > >>
> > >> Excerpts from Shamail Tahir's message of 2015-11-25 09:15:54 -0500:
> > >>> Hi everyone,
> > >>>
> > >>> Andrew Mitry recently shared a medium post[1] by Kent C. Dobbs which
> > >>> discusses how one open-source project is encouraging contributions by
> > new
> > >>> open-source contributors through a combination of a special tag (which
> > is
> > >>> associated with work that is needed but can only be completed by
> > someone
> > >>> who is a first-time contributor) and helpful comments in the review
> > phase
> > >>> to ensure the contribution(s) eventually get merged.
> > >>>
> > >>> While reading the article, I immediately thought about our
> > >>> low-hanging-fruit bug tag which is used for a very similar purpose in
> > "bug
> > >>> fixing" section of  the "how to contribute" page[2].  The
> > low-hanging-fruit
> > >>> tag is used to identify items that are generally suitable for
> > first-time or
> > >>> beginner contributors but, in reality, anyone can pick them up.
> > >>>
> > >>> I wanted to propose a new tag (or even changing the, existing,
> > low-hanging
> > >>> fruit tag) that would identify items that we are reserving for
> > first-time
> > >>> OpenStack contributors (e.g. a patch-set for the item submitted by
> > someone
> > >>> who is not a first time contributor would be rejected)... The same
> > article
> > >>> that Andrew shared mentions using an "up-for-grabs" tag which also
> > >>> populates the items at up-for-grabs[3] (a site where people looking to
> > >>> start working on open-source projects see entry-level items from
> > multiple
> > >>> projects).  If we move forward with an exclusive tag for first-timers
> > then
> > >>> it would be nice if we could use the up-for-grabs tag so that OpenStack
> > >>> also shows up on the list too.  Please let me know if this change
> > should be
> > >>> proposed elsewhere, the tags are maintained in launchpad and the wiki I
> > >>> found related to bug tags[4] didn't indicate a procedure for
> > submitting a
> > >>> change proposal.
> > >>
> > >> I like the idea of making bugs suitable for first-timers more
> > >> discoverable. I'm not sure we need to *reserve* any bugs for any class
> > >> of contributor. What benefit do you think that provides?
> > > I would have to defer to additional feedback here...
> > >
> > > My own perspective from when I was doing my first contribution is that
> > it was hard to find active "low-hanging-fruit" items.  Most were already
> > work-in-progress or assigned.
> >
> > This was a direct consequence of us dropping the auto-abandoning of old
> > code reviews in gerrit. When a review is abandoned the bug is flipped
> > back to New instead of In Progress.
> >
> > I found quite often people go and gobble up bugs assigning them to
> > themselves, but don't make real progress on them. Then new contributors
> > show up, and don't work on any of those issues because our tools say
> > someone is already on top of it.
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
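
For the discoverability side raised in the quoted thread, a similar
sketch using launchpadlib (the Python client library Launchpad
publishes) could list low-hanging-fruit bugs that nobody has claimed
yet; the project, tag name, and status filter below are illustrative:

    from launchpadlib.launchpad import Launchpad

    # Anonymous, read-only access is enough for a report like this.
    lp = Launchpad.login_anonymously('lhf-report', 'production')
    project = lp.projects['cinder']

    tasks = project.searchTasks(tags=['low-hanging-fruit'],
                                status=['New', 'Confirmed', 'Triaged'])
    for task in tasks:
        # Skip anything already claimed, per the "gobbled up" concern.
        if task.assignee is None:
            print('%s  %s' % (task.web_link, task.title))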

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] using reno for libraries

2015-11-30 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2015-11-30 10:06:25 +0100:
> On 11/28/2015 02:48 PM, Doug Hellmann wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-27 10:21:36 -0500:
> >> Liaisons,
> >>
> >> We're making good progress on adding reno to service projects as
> >> we head to the Mitaka-1 milestone. Thank you!
> >>
> >> We also need to add reno to all of the other deliverables with
> >> changes that might affect deployers. That means clients and other
> >> libraries, SDKs, etc. with configuration options or where releases
> >> can change deployment behavior in some way. Now that most teams
> >> have been through this conversion once, it should be easy to replicate
> >> for the other repositories in a similar way.
> >>
> >> Libraries have 2 audiences for release notes: developers consuming
> >> the library and deployers pushing out new versions of the libraries.
> >> To separate the notes for the two audiences, and avoid doing manually
> >> something that we have been doing automatically, we can use reno
> >> just for deployer release notes (changes in support for options,
> >> drivers, etc.). That means the library repositories that need reno
> >> should have it configured just like for the service projects, with
> >> the separate jobs and a publishing location different from their
> >> existing developer documentation. The developer docs can continue
> >> to include notes for the developer audience.
> >
> > I've had a couple of questions about this split for release notes. The
> > intent is for developer-focused notes to continue to come from commit
> > messages and in-tree documentation, while using reno for new and
> > additional deployer-focused communication. Most commits to libraries
> > won't need reno release notes.
> 
> This looks like unnecessary overcomplication. Why not use such a 
> convenient tool for both kinds of release notes instead of having us 
> invent and maintain one more place to put release notes, now for 

In the past we have had rudimentary release notes and changelogs
for developers to read based on the git commit messages. Since
deployers and developers care about different things, we don't want
to make either group sift through the notes meant for the other.
So, we publish notes in different ways.

The thing that is new here is publishing release notes for changes
in libraries that deployers need to know about. While the Oslo code
was in the incubator, and being copied into applications, it was
possible to detect deployer-focused changes like new or deprecated
configuration options in the application and put the notes there.
Using shared libraries means those changes can happen without
application developers being aware of them, so the library maintainers
need to be publishing notes. Using reno for those notes is consistent
with the way they are handled in the applications, so we're extending
one tool to more repositories.
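
As an illustration (this note is invented, not taken from any real
library), a deployer-focused note as reno stores it might look like
the following, using reno's standard section names; the option names
are made up for the example:

    ---
    # A hypothetical releasenotes/notes/*.yaml file, created with
    # "reno new"; only deployer-visible changes need one.
    upgrade:
      - The ``foo_timeout`` option moved from ``[DEFAULT]`` to the
        new ``[foo]`` section; deployments that set it will need to
        update their configuration files.
    deprecations:
      - The ``use_bar`` option is deprecated and will be removed in
        a future release.

A purely internal refactoring would ship no such note at all.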

> developers? It's already not so easy to explain reno to newcomers;
> this idea makes it even harder...

Can you tell me more about the difficulty you've had? I would like to
improve the documentation for reno and for how we use it.

Doug

> 
> >
> > Doug
> >
> >>
> >> After we start using reno for libraries, the release announcement
> >> email tool will be updated to use those same notes to build the
> >> message in addition to looking at the git change log. This will be
> >> a big step toward unifying the release process for services and
> >> libraries, and will allow us to make progress on completing the
> >> automation work we have planned for this cycle.
> >>
> >> It's not necessary to add reno to the liberty branch for library
> >> projects, since we tend to backport far fewer changes to libraries.
> >> If you maintain a library that does see a lot of backports, by all
> >> means go ahead and add reno, but it's not a requirement. If you do
> >> set up multiple branches, make sure you have one page that uses the
> >> release-notes directive without specifying a branch, as in the
> >> oslo.config example, to build notes for the "current" branch to get
> >> releases from master and to serve as a test for rendering notes
> >> added to stable branches.
> >>
> >> Thanks,
> >> Doug
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-30 Thread sean roberts
How about:
First-timers assign a bug to a mentor, and the mentor takes responsibility
for guiding the first-timer through the bug to completion.

Per project, a few people volunteer themselves as mentors. As easy as
responding to [project][mentor] emails.

On Monday, November 30, 2015, Sean Dague  wrote:

> On 11/25/2015 03:22 PM, Shamail wrote:
> > Hi,
> >
> >> On Nov 25, 2015, at 11:05 PM, Doug Hellmann wrote:
> >>
> >> Excerpts from Shamail Tahir's message of 2015-11-25 09:15:54 -0500:
> >>> Hi everyone,
> >>>
> >>> Andrew Mitry recently shared a medium post[1] by Kent C. Dobbs which
> >>> discusses how one open-source project is encouraging contributions by
> new
> >>> open-source contributors through a combination of a special tag (which
> is
> >>> associated with work that is needed but can only be completed by
> someone
> >>> who is a first-time contributor) and helpful comments in the review
> phase
> >>> to ensure the contribution(s) eventually get merged.
> >>>
> >>> While reading the article, I immediately thought about our
> >>> low-hanging-fruit bug tag which is used for a very similar purpose in
> "bug
> >>> fixing" section of  the "how to contribute" page[2].  The
> low-hanging-fruit
> >>> tag is used to identify items that are generally suitable for
> first-time or
> >>> beginner contributors but, in reality, anyone can pick them up.
> >>>
> >>> I wanted to propose a new tag (or even changing the, existing,
> low-hanging
> >>> fruit tag) that would identify items that we are reserving for
> first-time
> >>> OpenStack contributors (e.g. a patch-set for the item submitted by
> someone
> >>> who is not a first time contributor would be rejected)... The same
> article
> >>> that Andrew shared mentions using an "up-for-grabs" tag which also
> >>> populates the items at up-for-grabs[3] (a site where people looking to
> >>> start working on open-source projects see entry-level items from
> multiple
> >>> projects).  If we move forward with an exclusive tag for first-timers
> then
> >>> it would be nice if we could use the up-for-grabs tag so that OpenStack
> >>> also shows up on the list too.  Please let me know if this change
> should be
> >>> proposed elsewhere, the tags are maintained in launchpad and the wiki I
> >>> found related to bug tags[4] didn't indicate a procedure for
> submitting a
> >>> change proposal.
> >>
> >> I like the idea of making bugs suitable for first-timers more
> >> discoverable. I'm not sure we need to *reserve* any bugs for any class
> >> of contributor. What benefit do you think that provides?
> > I would have to defer to additional feedback here...
> >
> > My own perspective from when I was doing my first contribution is that
> it was hard to find active "low-hanging-fruit" items.  Most were already
> work-in-progress or assigned.
>
> This was a direct consequence of us dropping the auto-abandoning of old
> code reviews in gerrit. When a review is abandoned the bug is flipped
> back to New instead of In Progress.
>
> I found quite often people go and gobble up bugs assigning them to
> themselves, but don't make real progress on them. Then new contributors
> show up, and don't work on any of those issues because our tools say
> someone is already on top of it.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
~sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Supporting Django 1.9

2015-11-30 Thread Thomas Goirand
On 11/29/2015 08:59 PM, Rob Cresswell (rcresswe) wrote:
> https://blueprints.launchpad.net/horizon/+spec/drop-dj17
> 
> 
> This is where changes are currently being tracked. I don’t quite
> understand why these would be backported; they would break Liberty with
> 1.7. Perhaps we can discuss on IRC tomorrow?
> 
> Rob

Until Mitaka is out, Sid will carry Horizon from Liberty. I don't mind
breakage for Django 1.7 (Sid already has 1.8), but what I need is
support for Django 1.9. That's what I want to backport.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread Anita Kuno
On 11/29/2015 02:02 PM, Sean McGinnis wrote:
> On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:
>> On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:
>>> Hi guys,
>>>
>>> I notice nova have a clarification of project scope:
>>> http://docs.openstack.org/developer/nova/project_scope.html
>>>
>>> I want to find Cinder's, but failed; do you know where to find it?
>>>
>>> It's important to let developers know which features should be
>>> introduced into Cinder and which shouldn't.
>>>
>>> BR
>>> Wang Hao
>>
>> I believe the Nova team needed to formalize the scope to have an explanation
>> for all the "this doesn't belong in Nova" comments on feature requests.
>> Does Cinder suffer from similar problems? From my perspective it's not
>> critically needed.
> 
> I agree. I haven't seen a need for something like that with Cinder. Wang
> Hao, is there a reason you feel you need that?
> 

For reference here is the Cinder mission statement:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n273

All projects listed in the governance repository reference/projects.yaml
have a mission statement. I encourage folks thinking about starting a
project to look at the mission statements there first, as there may
already be an ongoing effort with which you can align your work.

Thanks Wang Hao,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues

2015-11-30 Thread Clark, Jay
Can't connect either. Dead in the water.

Regards,
Jay Clark
Sr. OpenStack Deployment Engineer
E: jason.t.cl...@hpe.com
H: 919.341.4670
M: 919.345.1127
IRC (freenode): jasondotstar


From: lichen.hangzhou [lichen.hangz...@gmail.com]
Sent: Monday, November 30, 2015 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-in...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Infra] IRC Bot issues

"Can't connect" +1, and the web client does not work :(

-chen

At 2015-11-30 22:08:12, "Hinds, Luke (Nokia - GB/Bristol)" 
 wrote:
Me too. It is possible to get on using the web client, though:
https://webchat.freenode.net/

On Mon, 2015-11-30 at 14:00 +, EXT Dugger, Donald D wrote:
I can’t even connect to the IRC server at all; can others get to it?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Joshua Hesketh [mailto:joshua.hesk...@gmail.com]
Sent: Monday, November 30, 2015 2:50 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>; openstack-infra <openstack-in...@lists.openstack.org>
Subject: [openstack-dev] IRC Bot issues

Hi all,
Freenode is currently experiencing a severe DDoS attack that is having an
effect on our bots. As such, the meetbot, IRC logging, and gerrit watcher
are intermittently available.

We expect the bots to resume their normal function once Freenode has recovered. 
For now, meetings may have to be postponed or minuted by hand.
Cheers,
Josh

___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

