Re: [openstack-dev] [Horizon] How do we move forward with xstatic releases?

2016-03-18 Thread Richard Jones
On 13 March 2016 at 07:11, Matthias Runge  wrote:

> On 10/03/16 11:48, Beth Elwell wrote:
> > If we will have potential breakage anyway, I don’t understand why the
> > better solution here would not be simply to use the bower and npm tools,
> > which are standardized for JavaScript and would move Horizon towards
> > widely recognised tooling, not just within OpenStack but in the wider
> > development community. Back versions always need to be supported for a
> > time, but I would add that long term this could end up saving time and
> > creating a stable, longer-term solution.
> >
> >
>
> I have a few issues with those "package managers":
> - downloads are not verified; there is a chance of getting a "bad"
> download.
> - they point to the outside world, like GitHub etc. While they appear to
> work "most of the time", that might not be good enough for the gate.
> - how often have we been blocked by releases of software not managed by
> OpenStack? Seriously, that happens quite a few times over a release
> cycle, not to mention breakages caused by releases of our own tools
> turning out to block one or another sub-project.


To be fair to those package managers, the issues OpenStack has had with
releases of libraries breaking things are the result of us either:

a) not pinning releases (upper-constraints now fixes that for *things that
use it*, which isn't everything, sadly) or
b) the system that tests upper-constraints changes not having broad enough
testing across OpenStack for us to notice when a new library release breaks
things. I would like to increase the inclusion of Horizon's test suite in
the constraints testing for this reason. At least, it's on my TODO :-)

Horizon, for example, currently does *not* use the upper-constraints
pinning in its test suite or installation, so we're vulnerable to, say, a
python-*client release that's not compatible. I have a patch in the works
to address this, but it kinda depends on us moving over from run_tests.sh
to tox, which is definitely something to wait until N for.


 Richard

ps. as for "unverified downloads" ... they're verified precisely as much as
pypi packages are, and we install a whole buncha those :-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] COE drivers spec

2016-03-18 Thread Jamie Hannaford
Hi Kai,


Does Magnum already have drivers for networks and volumes, or are you 
suggesting this is what needs to be handled by a new COE driver structure?


I think having a directory per COE is a good idea, if the driver handles 
multiple responsibilities. So far we haven't really identified what the 
responsibility of a COE driver is - that's what I'm currently trying to 
identify. What type of classes would be in the default/ and contrib/ 
directories?


I'm assuming the scale manager would need to be specced alongside the COE 
drivers, since each COE has a different way of handling scaling. If that's the 
case, how would a scale manager integrate into the COE driver model we've 
already touched upon? Would each COE class implement various scaling methods 
from a base class, or would there be a manager class per COE?


Jamie



From: Kai Qiang Wu 
Sent: 17 March 2016 15:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] COE drivers spec


Here are some of my raw points,


1. For the driver mentioned, I don't think we necessarily need a bay-driver
here. As we have a network-driver and a volume-driver, it may not be needed to
introduce a driver at the bay level (a bay is a higher-level concept than a
network or a volume).

maybe like:

coes/
    swarm/
    mesos/
    kubernetes/

Each COE directory would include (taking swarm as an example):

coes/
    swarm/
        default/
        contrib/

Or we don't use contrib here, and split by distro instead (one supported by
default, the others contributed by more contributors and tested in the
Jenkins pipeline):

coes/
    swarm/
        atomic/
        ubuntu/


We would have a BaseCoE class, and specific CoEs would inherit from it. Each
CoE would have the related life-cycle management operations: Create, Update,
Get, and Delete.



2. We need to think more about the scale manager, which involves scaling a
cluster up and down, perhaps with both auto-scale and manual-scale modes.


The use cases: as a cloud administrator, I could easily use OpenStack to
provision CoE clusters, manage the CoE life cycle, and scale CoEs. Each CoE
would make the best use of OpenStack network and volume services to provide
CoE-related network and volume support.


Another interesting case (not required): if a user just wants to deploy one
container in Magnum, we schedule it to the right CoE (if the user manually
specifies one, it would be scheduled to that specific CoE).


Or more use cases.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jamie Hannaford 
To: "openstack-dev@lists.openstack.org" 
Date: 17/03/2016 07:24 pm
Subject: [openstack-dev] [Magnum] COE drivers spec





Hi all,

I'm writing the spec for the COE drivers, and I wanted some feedback about what 
it should include. I'm trying to reconstruct some of the discussion that was 
had at the mid-cycle meet-up, but since I wasn't there I need to rely on people 
who were :)

From my perspective, the spec should recommend the following:

1. Change the BayModel `coe` attribute to `bay_driver`, the value of which will 
correspond to the name of the directory where the COE code will reside, i.e. 
drivers/{driver_name}

2. Introduce a base Driver class that each COE driver extends. This would 
reside in the drivers dir too. This base driver will specify the interface for 
interacting with a Bay. The following operations would need to be defined by 
each COE driver: Get, Create, List, List detailed, Update, Delete. Each COE 
driver would implement each operation differently depending on their needs, but 
would satisfy the base interface. The base class would also contain common 
logic to avoid code duplication. Any operations that fall outside this 
interface would not exist in the COE driver class, but rather an extension 
situated elsewhere. The JSON payloads for requests would differ from COE to COE.

Cinder already uses this approach to great effect for volume drivers:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py

Question: Is this base class a feasible idea for Magnum? If so, do we need any 
other operations in the base class that I haven't mentioned?
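A minimal sketch of what such a base class could look like (the class and method names here are hypothetical illustrations, not what the spec would necessarily define; the toy in-memory driver just shows the inheritance shape):

```python
import abc


class BaseCOEDriver(abc.ABC):
    """Hypothetical base interface each COE driver would satisfy."""

    @abc.abstractmethod
    def create(self, bay):
        """Provision the COE cluster for a bay."""

    @abc.abstractmethod
    def get(self, bay_id):
        """Return one bay."""

    @abc.abstractmethod
    def list(self, detailed=False):
        """Return all bays, optionally with details."""

    @abc.abstractmethod
    def update(self, bay_id, patch):
        """Apply a partial update to a bay."""

    @abc.abstractmethod
    def delete(self, bay_id):
        """Tear the cluster down."""


class SwarmDriver(BaseCOEDriver):
    """Toy in-memory implementation, standing in for a real Swarm driver."""

    def __init__(self):
        self._bays = {}

    def create(self, bay):
        self._bays[bay["id"]] = bay
        return bay

    def get(self, bay_id):
        return self._bays[bay_id]

    def list(self, detailed=False):
        bays = list(self._bays.values())
        return bays if detailed else [b["id"] for b in bays]

    def update(self, bay_id, patch):
        self._bays[bay_id].update(patch)
        return self._bays[bay_id]

    def delete(self, bay_id):
        del self._bays[bay_id]
```

Cinder's volume drivers (linked above) follow the same shape: the base class defines the contract plus shared logic, and each backend implements the specifics.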

3. Each COE driver would have its 

Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-18 Thread Daryl Walleck
I do think negative tests should live on, most likely in their own 
tempest-plugin based repository. However, I'm not sure that I agree with the 
reason being that they are functional tests rather than integration tests. If you 
look at the current tests, one could argue that many non-Nova tests are 
actually functional tests. The Nova tests have the luck of requiring other 
services to function at all, but volumes, identity, and other services can be 
and are being tested in isolation in at least some Tempest tests. If removing 
functional tests from Tempest is the problem being solved, then there's a 
scope much larger than negative tests to address.

I'm also concerned about the idea of moving these tests back to their 
individual project repos. To run the same tests that I do today, I would need 
to install each individual project as well as Tempest to get the same coverage 
that I get today. That feels like quite a heavy burden just to be able to keep 
the same coverage as there is now. This would also mean that Tempest should 
likely be gated on the tests that exist in the project repositories, which 
seems like it would add a great deal of complexity and maintenance.

While I understand that some of these tests may be a duplicate effort in the 
context of the gate, they are needed coverage when testing a deployed OpenStack 
environment. Rather than scattering the negative tests and others back to 
the individual projects, would it make sense to have a separate repository 
to hold "extended" Tempest tests? This would allow negative tests to stay in a 
centralized location, as well as provide a home for tests that might not be 
desirable for the gate (e.g. longer scenarios, the authorization tests, etc.). 
What I would like to avoid is the pruning of tests that are meaningful to 
simply reduce execution time. If what is desired is a clearer delineation 
between what is Tempest and what is not, then I think that is a more important 
conversation to have in depth.

Daryl

From: Ken'ichi Ohmichi 
Sent: Wednesday, March 16, 2016 8:20 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [QA][all] Propose to remove negative tests from
Tempest

Hi

I have one proposal [1] related to negative tests in Tempest, and I am
hoping for opinions before doing that.

Now Tempest contains negative tests and sometimes patches are being
posted for adding more negative tests, but I'd like to propose
removing them from Tempest instead.

Negative tests verify the surface of each component's REST API, without
any integration between components. That doesn't seem like integration
testing, which is the scope of Tempest.
In addition, adding negative tests to Tempest means spending test run time
on other components' gates. For example, we are running negative tests of
Keystone and other components on the gate of Nova. That is wasteful, so we
need to avoid adding more negative tests to Tempest now.

If we want to add negative tests, a nice option is to implement them in
each component's repo with the Tempest plugin interface. We can avoid
running negative tests on other components' gates, and each component team
can decide which negative tests are valuable on its gate.

In the long term, all negative tests would be migrated into each
component's repo via the Tempest plugin interface, so each gate runs only
the negative tests that are valuable to it.
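To make the distinction concrete: a negative test exercises a single component's API error path, with no cross-service integration. A stdlib-only sketch (the fake client stands in for a real service client; in practice such a test would live in the component's tree behind the Tempest plugin interface):

```python
import unittest


class FakeServerClient:
    """Stand-in for a real service client; only the API surface matters."""

    def create_server(self, flavor_id):
        if flavor_id is None:
            # A real API would return 400 Bad Request here.
            raise ValueError("Invalid flavor")
        return {"id": "srv-1", "flavor": flavor_id}


class ServersNegativeTest(unittest.TestCase):
    """A negative test: one component's error path, no integration."""

    def test_create_server_invalid_flavor(self):
        client = FakeServerClient()
        # The assertion is about the error path, not a working deployment,
        # so nothing outside this component needs to be running.
        self.assertRaises(ValueError, client.create_server, None)
```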

Any thoughts?

Thanks
Ken Ohmichi

---
[1]: https://review.openstack.org/#/c/293197/



Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-18 Thread Jeremy Stanley
On 2016-03-16 10:37:36 -0400 (-0400), Emilien Macchi wrote:
[...]
> If PuppetOpenStack and Infra could avoid overlaps, that would be
> awesome. Some people are working on both groups and to continue the
> best of our collaboration I would like to set this constraint.

We'll do our best! We (Infra) also generally try to avoid overlap
with Release Management, Stable Branch, Quality Assurance, and also
Infra-related sessions for other teams.
-- 
Jeremy Stanley



Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-18 Thread Amrith Kumar
John, thanks, this is great info. (more comments inline below).

-amrith

> -Original Message-
> From: John Dickinson [mailto:m...@not.mn]
> Sent: Friday, March 18, 2016 11:16 AM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [all][infra][ptls] tagging reviews, making
> tags searchable
> 
> 
> 
> On 18 Mar 2016, at 7:15, Amrith Kumar wrote:
> 
> > As we were working through reviews for the Mitaka release, the Trove
> team was trying to track groups of reviews that were needed for a specific
> milestone, like m-1, or m-3 or in the recent days for rc1.
> >
> > The only way we could find was to have someone (in this instance, me)
> 'star' the reviews that we wanted and then have people look for reviews
> with 'starredby:amrith' and status:open.
> >
> > How do other projects do this? Is there a simple way to tag reviews in a
> searchable way?
> >
> > Thanks,
> >
> > -amrith
> 
> 
> We've had 3 iterations of tracking/prioritizing reviews in Swift (and it's
> still a work-in-progress).
> 
> 1) I write down stuff on a wiki page.
> https://wiki.openstack.org/wiki/Swift/PriorityReviews Currently, this is
> updated for the work we're getting done over this next week for the Mitaka
> release.
> 
> 2) Gerrit dashboards like https://goo.gl/mtEv1C. This has a section at the
> top which included "starred by the PTL" patches.
> 

[amrith] Yes, we have something like this in a dashboard. I've also been 
looking at some other resources:

http://chris.wang/posts/2016/01/04/gerrit-dashboards
https://gerrit-review.googlesource.com/Documentation/intro-project-owner.html
https://review.openstack.org/Documentation/user-dashboards.html#project-dashboards

These allow you to make your dashboards show up in the project's dashboards 
area, so one doesn't need shortened links; basically, the query from the 
gerrit-dashboard-creator tool becomes part of the project-wide dashboard.
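For reference, a Gerrit project dashboard of this kind is just a small config file committed on a `refs/meta/dashboards/*` ref of the project. A minimal sketch (the section names and queries are illustrative, not Trove's actual dashboard):

```ini
# Hypothetical dashboard file, e.g. on refs/meta/dashboards/team
[dashboard]
title = Trove Priority Reviews
description = Open reviews being tracked for the current milestone

[section "Starred by the PTL"]
query = starredby:amrith status:open

[section "Needs final +2"]
query = status:open label:Code-Review+2 NOT label:Code-Review-1
```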

> 3) A dashboard generated from gerrit and git data.
> http://not.mn/swift/swift_community_dashboard.html The "Community Starred
> Patches" is a list of the top 20 commonly starred patches by the Swift
> community. Basically, for every person who has gerrit activity around
> Swift, I pull their stars. From that I can see which patches are more
> commonly starred. I also weight each person's stars according to their
> level of activity in the project. This gives a very good idea of what the
> community as a whole finds important.
> 
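The activity-weighted star counting described above can be sketched in a few lines (the data and weights below are made up for illustration):

```python
from collections import Counter


def weighted_star_ranking(stars_by_person, activity_weight):
    """Rank patches by the activity-weighted count of stars they received."""
    scores = Counter()
    for person, starred_patches in stars_by_person.items():
        weight = activity_weight.get(person, 0.0)
        for patch in starred_patches:
            scores[patch] += weight
    # Most important patch first.
    return [patch for patch, _ in scores.most_common()]
```

A person's weight could be, for example, their review count over the cycle, so inactive accounts contribute little to the ranking.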

[amrith] This is a wonderful idea and I'm going to see if I can do something 
like this for Trove. Many thanks!

> I've found a role for all of these tools at different times--I don't think
> one is generally better than another. Right now as we're finishing up a
> release for Mitaka, all 3 tools are useful for helping coordinate the
> remaining work.
> 
> My generated community dashboard isn't done. There's a lot more information
> I can pull and use to help prioritize reviews. I plan on working on this
> after the Mitaka release gets cut. Here's a teaser for the next thing I'll
> be doing: given a person's email, generate an ordered list of patches that
> person should review or work on to be most effective.
> 
> --John
> 




[openstack-dev] [Glance] Mitaka RC1 available

2016-03-18 Thread Thierry Carrez

Hello everyone,

Glance is the first milestone-based project to produce a release 
candidate for the end of the Mitaka cycle! You can find the RC1 source 
code tarball at:


https://tarballs.openstack.org/glance/glance-12.0.0.0rc1.tar.gz

Unless release-critical issues are found that warrant a release 
candidate respin, this RC1 will be formally released as the final Mitaka 
Glance release on April 7th. You are therefore strongly encouraged to 
test and validate this tarball!


Alternatively, you can directly test the stable/mitaka release branch at:

http://git.openstack.org/cgit/openstack/glance/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/glance/+filebug

and tag it *glance-rc-potential* to bring it to the Glance release 
crew's attention.


Note that the "master" branch of Glance is now open for Newton 
development, and feature freeze restrictions no longer apply there!


Regards,

--
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-18 Thread Thomas Goirand
On 03/16/2016 03:37 PM, Emilien Macchi wrote:
> If PuppetOpenStack and Infra could avoid overlaps, that would be
> awesome. Some people are working on both groups and to continue the
> best of our collaboration I would like to set this constraint.
> 
> Thanks a lot,

Same for packaging + infra. Last time in Tokyo, Clark was nice enough to
skip the Infra sessions to attend the packaging-deb one. I hope the same
thing will not happen again to someone (else?) from Infra, and that we can
get some (much needed) time with them, so we can discuss the packaging
jobs on infra.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-18 Thread Sean McGinnis
On Wed, Mar 16, 2016 at 07:10:35AM -0400, Attila Fazekas wrote:
> 
> NO : For any kind of extra quota service.
> 
> In other places I have seen other reasons given for a quota service or
> similar, but the actual cost of this approach is higher than most people
> would think, so NO.
> 

I have to agree, there is a lot of hidden cost with making this its own
service, both on the development and end user side of things.

> 
> Maybe Library,
> But I do not want to see, for example, the bad pattern used in Nova spread
> everywhere.
> 
> The quota usage handling MUST happen in the same DB transaction as the 
> resource record (volume, server, ...) create/update/delete.
> 
> There is no need for.:
> - reservation-expirer services or periodic tasks ..
> - there is no need for quota usage correcter shell scripts or whatever
> - multiple commits
> 
> 
> We have a transaction capable DB, to help us,
> not using it would be lame.

I would much rather see a library that at least ensures our handling of
quotas is consistent across projects. We've implemented some things and
had some issues. Nova has implemented some things and had some issues.
Let's get this all in one place so we can all benefit from lessons
learned and work through these issues.

I'm sure there will still be problems. But I think it ends up being a
better user experience even if there are some quirks in the library. At
least they will be consistent quirks across all the projects instead of
needing to understand each project's own set of misbehavior.
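The single-transaction pattern argued for in the quoted message can be shown with a stdlib-only sketch (table and column names are invented for the example): the usage check and the resource insert either commit together or roll back together, so usage can never drift from the real records and no reservation expirer or fix-up script is needed.

```python
import sqlite3


class OverQuota(Exception):
    pass


def create_volume(conn, project, size, limit):
    """Create a volume record and enforce its quota in ONE transaction.

    If anything fails, the rollback undoes the whole unit of work, so
    there is no separate reservation to expire or usage row to repair.
    """
    with conn:  # sqlite3 connection context manager = one transaction
        (used,) = conn.execute(
            "SELECT COALESCE(SUM(size), 0) FROM volumes WHERE project = ?",
            (project,),
        ).fetchone()
        if used + size > limit:
            raise OverQuota("quota exceeded for project %s" % project)
        conn.execute(
            "INSERT INTO volumes (project, size) VALUES (?, ?)",
            (project, size),
        )


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (project TEXT, size INTEGER)")
create_volume(conn, "demo", 5, limit=20)
```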

> 
> 
> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061338.html
> 
> - Original Message -
> > From: "Nikhil Komawar" 
> > To: "OpenStack Development Mailing List" 
> > Sent: Wednesday, March 16, 2016 7:25:26 AM
> > Subject: [openstack-dev] [cross-project] [all] Quotas -- service vs. library
> > 
> > Hello everyone,
> > 
> > tl;dr;
> > I'm writing to request some feedback on whether the cross-project Quotas
> > work should move ahead as a service or a library. Going further, I'd ask:
> > should this even be in a common repository, or would projects prefer to
> > implement everything from scratch in-tree? Should we limit it to a
> > guideline spec?
> > 
> > But before I ask anymore, I want to specifically thank Doug Hellmann,
> > Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
> > Laski for the early feedback that has helped provide some good shape to
> > the already discussions.
> > 
> > Some more context on the happenings:
> > We have this in-progress spec [1] up to provide context and a platform
> > for such discussions. I will rephrase it to say that we plan to
> > introduce a new 'entity' in the Openstack realm that may be a library or
> > a service. Both concepts have trade-offs and the WG wanted to get more
> > ideas around such trade-offs from the larger community.
> > 
> > Service:
> > This would entail creating a new project and will introduce managing
> > tables for quotas for all the projects that will use this service. For
> > example if Nova, Glance, and Cinder decide to use it, this 'entity' will
> > be responsible for handling the enforcement, management and DB upgrades
> > of the quotas logic for all resources for all three projects. This means
> > less pain for projects during the implementation and maintenance phase,
> > holistic view of the cloud and almost a guarantee of best practices
> > followed (no clutter or guessing around what different projects are
> > doing). However, it results in a big dependency: all projects rely on
> > this one service for correct enforcement, for avoiding races (if they do
> > not incline towards implementing some of that in-tree) and for DB
> > migrations/upgrades. It will be at the core of the cloud and prone to
> > attack vectors, bugs and margin of error.
> > 
> > Library:
> > A library could be thought of in two different ways:
> > 1) Something that does not deal with backend DB models, but provides a
> > generic enforcement and management engine. To think ahead a little bit,
> > it may be an ABC or even a few standard implementation vectors that can
> > be imported into a project space. The project will have its own API for
> > quotas and the drivers will enforce different types of logic, say a
> > flat quota driver or a hierarchical quota driver, with custom/project
> > specific logic in the project tree. The project maintains its own DB and
> > upgrades thereof.
> > 2) A library that has models for DB tables that the project can import
> > from. Thus the individual projects will have a handy outline of what the
> > tables should look like, implicitly considering the right table values,
> > arguments, etc. The project has its own API and implements drivers in-tree
> > by importing this semi-defined structure. The project maintains its own
> > upgrades but will be somewhat influenced by the common repo.
> > 
> > Library would keep things simple for the common repository and 

[openstack-dev] [cloudkitty] PTL candidacy

2016-03-18 Thread Stéphane Albert

Hi everyone,

I'm announcing my candidacy for PTL of the CloudKitty team during the
Newton cycle.

Some of the features and improvements I envisioned for the project are not
yet ready and contributed, and there is more to come; due to a lack of
contributor time we'll need longer to make them a reality. That's why I'd
like to apply for a second cycle, to help the project land these features
and keep moving forward. My main focus during this cycle will be to
improve project communication and community development.

As we are about to be released as part of Mitaka, I hope this will bring
the CloudKitty project more attention from other people and increase
contributions.

During the Newton cycle I'd like to keep the project focused on these
specific points:

- Improve code QA and global test coverage; Mitaka was a good step
  forward, but we can go further.
- Improve internal data modeling and the global user experience with new
  APIs.
- Improve performance and scaling.

Review link: https://review.openstack.org/#/c/294327/

Cheers



[openstack-dev] [release][documentation] openstackdocstheme 1.3.0 release

2016-03-18 Thread no-reply
We are jubilant to announce the release of:

openstackdocstheme 1.3.0: OpenStack Docs Theme

With source available at:

http://git.openstack.org/cgit/openstack/openstackdocstheme

Please report issues through launchpad:

http://bugs.launchpad.net/openstack-manuals

For more details, please see below.

1.3.0
^^^^^

Other Notes

* The sidebar is not version dependent anymore, it always links to
  the main page.

Changes in openstackdocstheme 1.2.7..1.3.0
------------------------------------------

acbab4c Make theme version independent
1d5e3fe Shorted margin and padding for top page menu
681b186 Use pep8 instead of linters
b67b58c Fix a spell typos
de9834d Rename pep8 to linters test
8d632c8 Remove feedback formular from the footer
f81f334 Hide duplicate titles and empty tocs in generated content

Diffstat (except docs and test files)
-------------------------------------

openstackdocstheme/theme/openstackdocs/footer.html | 12 ---
openstackdocstheme/theme/openstackdocs/layout.html |  3 +--
.../theme/openstackdocs/localtoc.html  |  4 
.../theme/openstackdocs/sidebartoc.html|  8 ++--
.../theme/openstackdocs/static/css/combined.css| 24 ++
.../theme/openstackdocs/static/js/webui-popover.js |  4 ++--
releasenotes/notes/norelease-ccd7722c078a73a2.yaml |  5 +
7 files changed, 34 insertions(+), 26 deletions(-)






Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-18 Thread Dmitry Borodaenko
On Wed, Mar 16, 2016 at 10:37:36AM -0400, Emilien Macchi wrote:
> If PuppetOpenStack and Infra could avoid overlaps, that would be
> awesome. Some people are working on both groups and to continue the
> best of our collaboration I would like to set this constraint.

Please count Fuel in for this request, would be nice if at least
fishbowl sessions for Fuel did not overlap with PuppetOpenStack and
Infra.

-- 
Dmitry Borodaenko



Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Thanks Adrian,

I think the Keystone approach will work. For others, please speak up if it 
doesn’t work for you.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I tweaked the blueprint in accordance with this approach, and approved it for 
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

I think this is something we can all agree on as a middle ground. If not, I’m 
open to revisiting the discussion.

Thanks,

Adrian

On Mar 17, 2016, at 6:13 PM, Adrian Otto wrote:

Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate and store it in 
keystone. We would then use the same key to decrypt it upon reading the key 
back. This might be an acceptable middle ground for clouds that will not or can 
not run Barbican. This should work for any OpenStack cloud since Grizzly. The 
total amount of code in Magnum would be small, as the API already exists. We 
would need a library function to encrypt and decrypt the data, and ideally a 
way to select different encryption algorithms in case one is judged weak at 
some point in the future, justifying the use of an alternate.
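A sketch of the encrypt/store/decrypt flow Adrian describes, assuming the widely used `cryptography` library's Fernet for the symmetric encryption (the real patch might select among algorithms, as noted above); the Keystone credentials store is stood in for by a plain dict:

```python
from cryptography.fernet import Fernet

# Stand-in for the Keystone credentials API: it only ever sees ciphertext.
keystone_credentials = {}


def store_certificate(bay_id, cert_pem):
    """Encrypt a certificate with a fresh per-bay key and store the blob."""
    key = Fernet.generate_key()
    keystone_credentials[bay_id] = Fernet(key).encrypt(cert_pem)
    return key  # Magnum would persist this per-bay key


def load_certificate(bay_id, key):
    """Fetch the blob back and decrypt it with the same per-bay key."""
    return Fernet(key).decrypt(keystone_credentials[bay_id])
```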

Adrian


On Mar 17, 2016, at 4:55 PM, Adrian Otto wrote:

Hongbin,


On Mar 17, 2016, at 2:25 PM, Hongbin Lu wrote:

Adrian,

I think we need a boarder set of inputs in this matter, so I moved the 
discussion from whiteboard back to here. Please check my replies inline.


I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.


Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates 
on the local file system. A few of us had concerns about this approach (in 
particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale 
beyond a single conductor. Finally, we made a compromise to land this option 
and use it for testing/debugging only. In other words, this option is not for 
production. As a result, Barbican becomes the only option for production, which 
is the root of the problem. It basically forces everyone to install Barbican in 
order to use Magnum.

[1] https://review.openstack.org/#/c/212395/


It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actually worked, as Barbican did not work for that at 

[openstack-dev] [Packaging-Rpm] PTL Candidacy

2016-03-18 Thread Igor Yozhikov
I would like to announce my candidacy for PTL of Packaging-Rpm.

I have been creating, building, polishing and maintaining Linux packages
for OpenStack projects and their various dependencies for a few years,
since the Icehouse release. This has allowed me to accumulate a deep
knowledge base about core OpenStack functionality. My first project was
Murano, at a very early state of development, even before incubation. I
successfully maintained the package specifications for the Murano project
during that period.


The main goal of the Packaging-Rpm project, as I see it, is to unify and
simplify the approaches Linux package maintainers use in their day-to-day
work. Publicly published package specifications suitable for the different
flavors of RPM-based Linux distributions are important because they untie
the package-building process from any single mainstream vendor.

I hope my experience as a package maintainer can help developers all over
the world make their work easier, more transparent, and more efficient.

I’m very eager to make this happen, and I’m going to dedicate a lot of my
time and effort as Packaging-Rpm’s PTL.


There are a few topics to concentrate on during Newton cycle for the
upstream rpm-packaging:

   1. Move forward with the already-started initiative to provide initial
      project templates for common OpenStack dependencies such as oslo and
      the python clients. This should create the basis for further work and
      unlock development of package specification templates for the core
      OpenStack projects.

   2. Continue developing automation tooling for packaging. Creating and
      publishing package specifications for renderspec, pymod2pkg and
      openstack-macros will make maintenance easier for everyone who needs
      to build and use these tools from packages.

   3. CI checks. At present only a SUSE CI has been added to the project,
      which is not enough because it covers only one vendor's cases. Adding
      more third-party CIs (e.g. Mirantis or Fedora/RDO) will improve test
      and use-case coverage.


Thanks,
Igor Yozhikov
Senior Deployment Engineer
at Mirantis 
skype: igor.yozhikov
cellular: +7 901 5331200
slack: iyozhikov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] FF exception request for Numa and CPU pinning

2016-03-18 Thread Dmitry Borodaenko
FFE extension granted until March 18 for:
https://review.openstack.org/#/c/285282/

On Wed, Mar 16, 2016 at 10:48:31PM +0300, Dmitry Klenov wrote:
> Folks,
> 
> The majority of the commits for the NUMA and CPU pinning feature were merged in time
> [0].
> 
> One commit for validation is still to be merged [1]. So we would need 2
> more days to complete the feature.
> 
> Regards,
> Dmitry.
> 
> [0]
> https://review.openstack.org/#/q/status:merged+AND+topic:bp/support-numa-cpu-pinning
> [1] https://review.openstack.org/#/c/285282/
> 
> On Fri, Mar 4, 2016 at 1:59 AM, Dmitry Borodaenko 
> wrote:
> 
> > Granted, merge deadline March 16, feature to be marked experimental
> > until QA has signed off that it's fully tested and stable.
> >
> > --
> > Dmitry Borodaenko
> >
> >
> > On Tue, Mar 01, 2016 at 10:23:08PM +0300, Dmitry Klenov wrote:
> > > Hi,
> > >
> > > I'd like to to request a feature freeze exception for "Add support for
> > > NUMA/CPU pinning features" feature [0].
> > >
> > > Part of this feature is already merged [1]. We have the following patches
> > > in work / on review:
> > >
> > > https://review.openstack.org/#/c/281802/
> > > https://review.openstack.org/#/c/285282/
> > > https://review.openstack.org/#/c/284171/
> > > https://review.openstack.org/#/c/280624/
> > > https://review.openstack.org/#/c/280115/
> > > https://review.openstack.org/#/c/285309/
> > >
> > > No new patches are expected.
> > >
> > > We need 2 weeks after FF to finish this feature.
> > > Risk of not delivering it after 2 weeks is low.
> > >
> > > Regards,
> > > Dmitry
> > >
> > > [0] https://blueprints.launchpad.net/fuel/+spec/support-numa-cpu-pinning
> > > [1]
> > >
> > https://review.openstack.org/#/q/status:merged+topic:bp/support-numa-cpu-pinning
> >




Re: [openstack-dev] [nova] working on bug reports; what blocks you?

2016-03-18 Thread Kashyap Chamarthy
On Thu, Mar 17, 2016 at 03:28:48PM -0500, Matt Riedemann wrote:
> On 3/17/2016 11:41 AM, Markus Zoeller wrote:
> >What are the various reasons which block you to work on bug reports?
> >This question goes especially to the new contributors but also to the
> >rest of us. For me, personally, it's that most bug reports miss the
> >steps to reproduce which allow me to see the issue on my local system
> >before I start to dig into the code.
> >
> >I'm asking this because I'm not sure what the main reasons are that
> >our bug list is this huge (~1000 open bug reports). Maybe you have
> >reasons which can be resolved or mitigated by me in my bug czar role.
> >Let me know.

Effective bug reporting is the top issue for me.  By "effective" I mean:

  - Not assuming any prior context while writing a report.  (Especially
when writing how to reproduce the problem.)
  - Not forgetting to state changes made to various configuration
attributes
  - Describing the problem's symptoms in chronological order.
  - Describing the test environment precisely.

Writing a thoughtful report is hard and time-consuming.

https://wiki.openstack.org/wiki/BugFilingRecommendations
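The checklist above is easy to turn into a lightweight triage aid. A purely
illustrative sketch (the section names below are my own shorthand for the
points listed, not a Nova or Launchpad convention):

```python
# Hypothetical section names summarizing the checklist above; not a
# Launchpad/Nova standard, just an illustration of automated triage.
REQUIRED_SECTIONS = (
    "steps to reproduce",
    "expected result",
    "actual result",
    "environment",
)

def missing_sections(report: str) -> list:
    """Return the checklist items a bug report fails to mention."""
    text = report.lower()
    return [s for s in REQUIRED_SECTIONS if s not in text]
```

A report that comes back with an empty list at least names every area a
triager needs; it obviously cannot judge whether the content is any good.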

> Clear recreate steps are probably #1, but also logs if there are
> obvious failures. A stack trace goes a long way with a clear
> description of the failure scenario. Obviously we need to know the level
> of code being tested.
> 
> For a lot of bugs that are opened on n-2 releases, like kilo at this
> point, my first question is, have you tried this on master to see if
> it's still an issue. That's lazy on my part, but it's easy if I'm not
> aware of a fix that just needs backporting.

I don't view it as being lazy on your part.  Other open source projects
use a similar method -- e.g. in Fedora Project, one month after N+2
(Fedora-24) is released, 'N' (Fedora-22) goes End-of-Life.  And, all
bugs (that are not solved) reported against 'N' (for components with
high bug volume) are closed, with a request to re-test them on N+2
(the latest stable release), and re-open them if the issue persists.
Otherwise, it becomes difficult to cope with the volume.


-- 
/kashyap



Re: [openstack-dev] lvm2 and docker issue

2016-03-18 Thread Serguei Bezverkhi (sbezverk)
Hi Steven,

As per your suggestion I changed kolla_docker to include --ipc=host, and it has 
fixed the lvm issue for kolla-built containers.

Thank you

Serguei

[http://www.cisco.com/c/dam/assets/email-signature-tool/logo_07.png?ct=1423747865775]

Serguei Bezverkhi,
TECHNICAL LEADER.SERVICES
Global SP Services
sbezv...@cisco.com
Phone: +1 416 306 7312
Mobile: +1 514 234 7374

CCIE (R,SP,Sec) - #9527


Cisco.com



[http://www.cisco.com/assets/swa/img/thinkbeforeyouprint.gif] Think before you 
print.

This email may contain confidential and privileged material for the sole use of 
the intended recipient. Any review, use, distribution or disclosure by others 
is strictly prohibited. If you are not the intended recipient (or authorized to 
receive for the recipient), please contact the sender by reply email and delete 
all copies of this message.
Please click 
here for 
Company Registration Information.



From: Steven Dake (stdake)
Sent: Wednesday, March 16, 2016 12:35 AM
To: Serguei Bezverkhi (sbezverk) 
Cc: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: lvm2 and docker issue

CCing the mailing list for everyone's benefit.

From: "Serguei Bezverkhi (sbezverk)"
Date: Tuesday, March 15, 2016 at 8:26 PM
To: Steven Dake
Subject: lvm2 and docker issue

Hi Steven,

I have run a few tests, and so far with --ipc=host it looks better; at least I do 
not see that "stuck" issue when lvcreate is executed.

docker inspect 417f53c7ac84 | grep -i IPC
"IpcMode": "host",

Now, how do I specify this option when starting kolla containers?

This one I run manually with this command:

docker run --privileged -t -i --ipc=host --net=host --cap-add=ALL -v /dev:/dev 
-v /run:/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/sys/kernel/config:/configfs -v /lib/modules:/lib/modules -v 
/var/lib/cinder:/var/lib/cinder -v /etc/iscsi:/etc/iscsi f4118e59a77a

Thank you

Serguei
You have to extend the kolla_docker module.  An example is here:
https://github.com/openstack/kolla/blob/master/ansible/library/kolla_docker.py#L597

Search the file for the word 'host'; wherever the pid handling appears, you 
will probably need the same thing duplicated for ipc.  I recommend a patch 
prior to your current patch to introduce the feature to kolla_docker.
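A rough sketch of the kind of change Steve describes: mirror the existing pid
handling with an ipc equivalent. The names here (build_host_config, pid_mode,
ipc_mode) are assumptions modeled on the linked file and on docker-py's
host-config keys, not the actual kolla_docker code:

```python
def build_host_config(params):
    """Map module parameters onto docker-py host-config keyword arguments.

    Hypothetical helper: the idea is that kolla_docker already passes a
    pid_mode option through when it is set to 'host'; an ipc_mode option
    would follow the same pattern, so containers can be started with the
    equivalent of --ipc=host from an Ansible task instead of by hand.
    """
    options = {}
    if params.get('pid_mode') == 'host':
        options['pid_mode'] = 'host'
    # The new option, duplicating the pid handling for ipc:
    if params.get('ipc_mode') == 'host':
        options['ipc_mode'] = 'host'
    return options
```

With something like this in place, the task that starts the cinder-volume
container would just set `ipc_mode: host` rather than anyone running
`docker run --ipc=host ...` manually.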

Regards
-steve







Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-18 Thread Brian Haley

On 03/17/2016 06:04 PM, Doug Wiegley wrote:

Here is the non comprehensive list of usages based on what trees I
happen to have checked out (which is quite a few, but not all of
OpenStack for sure).

I think before deciding to take over ownership of an upstream lib (which
is a large commitment over space and time), we should figure out the
migration cost. All the uses in Tempest come from usage in Glance IIRC
(and dealing with chunked encoding).

Neutron seems to use it for a couple of proxies, but that seems like
requests/urllib3 might be sufficient.


The Neutron team should talk to Cory Benfield (CC'd) and myself more about this 
if they run into problems. requests and urllib3 are a little limited with 
respect to proxies due to limitations in httplib itself.

Both of us might be able to dedicate time during the day to fix this if 
Neutron/OpenStack have specific requirements that requests is not currently 
capable of supporting.


Looks like neutron is using it to do HTTP requests via unix domain sockets. 
Unless I’m missing something, requests doesn’t support that directly. There are 
a couple of other libs that do, or we could monkey patch the socket. Or modify 
the agents to use localhost.


We have to use Unix domain sockets in the metadata proxy because it's running in 
a namespace, so it can't use localhost to talk to the agent.  But we could use 
some other library, of course.
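For reference, talking HTTP over a Unix domain socket needs only a small shim
over the standard library. This is a sketch of the monkey-patch route
mentioned above, not what neutron actually ships (as I recall, neutron's
proxy does roughly this via a custom connection class handed to httplib2):

```python
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a Unix domain socket instead of TCP.

    Illustrative only: the class name and constructor are made up for
    this sketch.
    """

    def __init__(self, socket_path, timeout=10):
        # The host argument only feeds the Host: header; it is never resolved.
        super().__init__('localhost', timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        # Replace the TCP connect with an AF_UNIX connect; everything
        # else (request framing, response parsing) is inherited.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self.socket_path)
        self.sock = sock
```

requests can do the same through a third-party transport adapter (e.g. the
requests-unixsocket package), which is presumably the kind of "other libs"
Doug refers to.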


-Brian



Re: [openstack-dev] [packstack] Update packstack core list

2016-03-18 Thread Emilien Macchi
On Wed, Mar 16, 2016 at 6:35 AM, Alan Pevec  wrote:
> 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
>>> ...
>>> - Martin Mágr
>>> - Iván Chavero
>>> - Javier Peña
>>> - Alan Pevec
>>>
>>> I have a doubt about Lukas, he's contributed an awful lot to
>>> Packstack, just not over the last 90 days. Lukas, will you be
>>> contributing in the future? If so, I'd include him in the proposal as
>>> well.
>>
>> Thanks, yeah I do plan to contribute just haven't had time lately for
>> packstack.
>
> I'm also adding David Simard who recently contributed integration tests.
>
> Since there haven't been any -1 votes for a week, I went ahead and
> implemented the group membership changes in gerrit.
> Thanks to the past core members, we will welcome you back on the next
>
> One more topic to discuss is whether we need a PTL election? I'm not sure we
> need a formal election yet, and the de-facto PTL has been Martin Magr, so if
> there are no other proposals, let's just name Martin our overlord?

Packstack is not part of the OpenStack big tent, so it does not formally
need a PTL to work.
It's up to the project team to decide whether or not a PTL is needed.

Oh and of course, go ahead Martin ;-)

> Cheers,
> Alan
>



-- 
Emilien Macchi



Re: [openstack-dev] [all] purplerbot irc bot for logs and transclusion

2016-03-18 Thread Chris Dent

On Wed, 16 Mar 2016, Paul Belanger wrote:


So, I cannot comment on how useful the bot is, but if projects are in fact
using it I would like to see it added to openstack-infra so we can properly
manage it.


I was waiting to see if there's sufficient interest. The channels
that it is already in thus far are just experiments. Nobody has
stepped up and said either of:

* "This is something we should make sure we have around"
* "I'd want this to be around if we just added feature X"

If there's not, I can avoid all that and continue using it for my own
purposes.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Adrian Otto
Hongbin,

On Mar 17, 2016, at 2:25 PM, Hongbin Lu wrote:

Adrian,

I think we need a broader set of inputs in this matter, so I moved the 
discussion from whiteboard back to here. Please check my replies inline.

I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.

Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the built-in option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *built-in* option you referred to is simply placing the certificates on the 
local file system. A few of us had concerns about this approach (in particular, 
Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a 
single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/
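For readers following along, the built-in option under discussion amounts to
something like the sketch below, which is also why it cannot serve more than
one conductor. Illustrative only; this is not Magnum's actual cert-manager
code and the class name is made up:

```python
import os


class LocalCertStore:
    """Toy local-filesystem certificate store.

    It mirrors the shape of the built-in option discussed above: anything
    written here exists solely on this conductor's disk, so a second
    conductor cannot retrieve it. That is the scaling limitation a shared
    secret store such as Barbican removes.
    """

    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def store(self, cert_id, pem_data):
        # Certificates land on the local disk only; nothing replicates them.
        path = os.path.join(self.directory, cert_id + '.pem')
        with open(path, 'w') as f:
            f.write(pem_data)
        return path

    def retrieve(self, cert_id):
        # Fails on any host that did not perform the original store().
        with open(os.path.join(self.directory, cert_id + '.pem')) as f:
            return f.read()
```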

It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actually worked, as Barbican did not work for that at the 
time. I have always viewed Barbican as a suitable solution for certificate 
storage, as that was what it was first designed for. Since then, we have 
implemented certificate generation and signing logic within a library that does 
not depend on Barbican, and we can use that safely in production use cases. 
What we don’t have built in is what Barbican is best at, secure storage for our 
certificates that will allow multi-conductor operation.

I am opposed to the idea that Magnum should re-implement Barbican for 
certificate storage just because operators are reluctant to adopt it. If we 
need to ship a Barbican instance along with each Magnum control plane, so be 
it, but I don’t see the value in re-inventing the wheel. I promised the 
OpenStack community that we were out to integrate with and enhance OpenStack 
not to replace it.

Now, with all that said, I do recognize that not all clouds are motivated to 
use all available security best practices. They may be operating in 
environments that they believe are already secure (because of a secure 
perimeter), and that it’s okay to run fundamentally insecure software within 
those environments. As misguided as this viewpoint may be, it’s common. My 
belief is that it’s best to offer the best practice by default, and only allow 
insecure operation when someone deliberately turns off fundamental security 
features.

With all this said, I also care about Magnum adoption as much as all of us, so 
I’d like us to think creatively about how to strike the right balance between 
re-implementing existing technology, and making that technology easily 
accessible.

Thanks,

Adrian


Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-18 Thread Emilien Macchi
On Thu, Mar 17, 2016 at 2:22 PM, Sergii Golovatiuk wrote:
> Guys,
>
> Fuel has its own implementation of pacemaker [1]. Its functionality may be
> useful in other projects.
>
> [1] https://github.com/fuel-infra/puppet-pacemaker

I'm afraid to see 3 duplicated efforts to deploy Pacemaker:

* puppetlabs/corosync, not much maintained and not suitable for Red
Hat for reasons related to the way pcs is used.
* openstack/puppet-pacemaker, only working on Red Hat systems,
suitable for TripleO and previous Red Hat installers.
* fuel-infra/puppet-pacemaker, which looks like a more robust
implementation of puppetlabs/corosync.

It's pretty clear that Mirantis and Red Hat, both major OpenStack
contributors who deploy Pacemaker, do not use puppetlabs/corosync but
have their own implementations.
Maybe it would be time to converge at some point. I see a lot of
potential in fuel-infra/puppet-pacemaker to be honest. After reading
the code, I think it's still missing some features we might need to
make it work on TripleO but we could work together at establishing the
list of missing pieces and discuss about implementing them, so our
modules would converge.

I don't mind using tool X or tool Y; I want the best one, and it seems both
of our groups have expertise that could help us one day replace the
puppetlabs/corosync code with Fuel and Red Hat's module.
What do you think?

>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Sat, Feb 13, 2016 at 6:20 AM, Emilien Macchi 
> wrote:
>>
>>
>> On Feb 12, 2016 11:06 PM, "Spencer Krum"  wrote:
>> >
>> > The module would also be welcome under the voxpupuli[0] namespace on
>> > github. We currently have a puppet-corosync[1] module, and there is some
>> > overlap there, but a pure pacemaker module would be a welcome addition.
>> >
>> > I'm not sure which I would prefer, just that VP is an option. For
>> > greater openstack integration, gerrit is the way to go. For greater
>> > participation from the wider puppet community, github is the way to go.
>> > Voxpupuli provides testing and releasing infrastructure.
>>
>> The thing is, we might want to gate it on tripleo since it's the first
>> consumer right now. Though I agree VP would be a good place too, to attract
>> more puppet users.
>>
>> Dilemma!
>> Maybe we could start using VP, with good testing and see how it works.
>>
>> Iterate later if needed. Thoughts?
>>
>> >
>> > [0] https://voxpupuli.org/
>> > [1] https://github.com/voxpupuli/puppet-corosync
>> >
>> > --
>> >   Spencer Krum
>> >   n...@spencerkrum.com
>> >
>> > On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
>> > > Please look and vote:
>> > > https://review.openstack.org/279698
>> > >
>> > >
>> > > Thanks for your feedback!
>> > >
>> > > On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
>> > > > I like the idea of moving it to use the OpenStack infrastructure.
>> > > >
>> > > > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec wrote:
>> > > >
>> > > > On 02/09/2016 08:05 AM, Emilien Macchi wrote:
>> > > > > Hi,
>> > > > >
>> > > > > TripleO is currently using puppet-pacemaker [1] which is a
>> > > > module
>> > > > hosted
>> > > > > & managed by Github.
>> > > > > The module was created and mainly maintained by Redhat. It
>> > > > tends to
>> > > > > break TripleO quite often since we don't have any gate.
>> > > > >
>> > > > > I propose to move the module to OpenStack so we'll use
>> > > > OpenStack Infra
>> > > > > benefits (Gerrit, Releases, Gating, etc). Another idea would
>> > > > be to
>> > > > gate
>> > > > > the module with TripleO HA jobs.
>> > > > >
>> > > > > The question is, under which umbrella put the module? Puppet ?
>> > > > TripleO ?
>> > > > >
>> > > > > Or no umbrella, like puppet-ceph. <-- I like this idea
>> > > >
>> > > >
>> > > > I think the module not being under an umbrella makes sense.
>> > > >
>> > > >
>> > > > >
>> > > > > Any feedback is welcome,
>> > > > >
>> > > > > [1] https://github.com/redhat-openstack/puppet-pacemaker
>> > > >
>> > > > Seems like a module that would be useful outside of TripleO, so
>> > > > it
>> > > > doesn't seem like it should live under that.  Other than that I
>> > > > don't
>> > > > have enough knowledge of the organization of the puppet modules
>> > > > to
>> > > > comment.
>> > > >
>> > > >
>> > > >
>> > > >

[openstack-dev] [Solum] PTL candidacy

2016-03-18 Thread Devdatta Kulkarni
Hi,

I would like to submit my candidacy to continue as PTL of Solum for the Newton 
cycle.

In Mitaka we accomplished several things, including:
- completion of app and workflow API model
- ability to scale application instances
- ability to deploy pre-built application containers
- made Solum's devstack and vagrant development environments stable
- kept up with upstream in regards to release tools and various plugins
- got new contributors

For the Newton cycle, I believe we have the following challenges in front of us.
- Code stability
  We need to focus on adding more tests towards improving code stability

- Documentation
  Our getting started guide can be improved considerably so that
  it further lowers the barrier for new contributors to get a working
  Solum environment. We also need to improve the documentation for the sample
  applications we provide within the solum repository.

- Multi-container apps
  We need to enhance Solum to build and deploy multi-container applications
  which may follow different kinds of container patterns.

- Deprecating plans and assemblies
  When we started Solum, the abstractions of plans and assemblies provided good
  starting points for modeling applications and their deployments. Over time
  it was felt by the team that these abstractions did not align with
  similar abstractions from other systems, and hence the team developed and
  implemented abstractions of 'app' and 'workflow'. The time has come for us
  to deprecate plans and assemblies. This will greatly simplify our codebase.

- Horizon plugin
  We have a horizon plugin, but we have not actively maintained it in last
  few cycles. Now that the new API is in place,
  it is time for us to revisit the plugin and get it working again.

- Migrate off of oslo incubator
  Currently we are lagging behind in the adoption of oslo libraries. In the
  current cycle we got help from several members of the Oslo team in migrating
  parts of our codebase to the oslo libraries. I am really grateful to them
  for this help, and I plan to engage with the Oslo team more in this cycle.

Let us work together on achieving these goals in this cycle.

Best regards,
Devdatta Kulkarni




[openstack-dev] [glance] PTL candidacy

2016-03-18 Thread stuart . mclaren

Hi everyone,

I'd like to be considered for Glance PTL.

First: thanks (again) to Flavio for his leadership over the last cycle.

Glance is in pretty good shape right now: we have a coherent team which
is focussed on a well defined set of priorities. It's a relatively small
team, but we're fortunate to have a good range of strengths; everyone
brings something valuable.

I've been contributing to Glance since Essex. I've probably touched
most areas of the code at some point since then, and I hope I've built
up a good understanding of the code base, the thinking behind most of it
... and the quirks too. I can't claim to know all the OpenStack code base
(who can?) but I've good relationships with folks from a bunch of other
projects I can reach out to, especially the ones Glance interacts with
(Keystone, Swift, Cinder etc).

For several years I was also an operator. Starting from Diablo (back when
we were still using integers, not uuids) I was responsible for running
Glance in HP's Public Cloud (24x7x365). That gave me a good understanding
of the issues operators face and, where I can, I've contributed back
based on my experience.

As a developer and operator my goal has been the same: to give the best
possible experience for users.

If elected I will be able to devote myself full time to the project,
be available on IRC, and hope to increase my own review rate.

Priorities for the Newton cycle:

* Image import refactor

We need to do everything we can to ensure DefCore can include an image
upload capability.

* v2 API adoption

We need to make it possible to deploy OpenStack without enabling Glance's
v1 API. There were some heroic efforts on the Nova side of this during
Mitaka (thanks Mike).

* Maintain focus.

Yes, we will work on other things during the cycle. But we shouldn't
commit to anything that could impact on our priorities. It's ok to say
"no" now and again. There will be other cycles.

Thanks for your consideration.

-Stuart



[openstack-dev] [nova] centralize-config-options-newton bp is open

2016-03-18 Thread Matt Riedemann
For those working on the centralize-config-options blueprint in mitaka, 
there is a new blueprint for newton to track your work:


https://blueprints.launchpad.net/nova/+spec/centralize-config-options-newton

Please use that for new changes and when rebasing existing changes.

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-18 Thread Emilien Macchi
On Wed, Mar 16, 2016 at 5:56 AM, Thierry Carrez  wrote:
> Hi PTLs,
>
> Here is the proposed slot allocation for project teams at the Newton Design
> Summit in Austin. This is based on the requests the mitaka PTLs have made,
> space availability and project activity & collaboration needs.
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | full: full day, half: only morning or only afternoon
>
> Neutron: 9fb, cm:full
> Nova: 18fb, cm:full
> Fuel: 3fb, 11wr, cm:full
> Horizon: 1fb, 7wr, cm:half
> Cinder: 4fb, 5wr, cm:full
> Keystone: 5fb, 8wr; cm:full
> Ironic: 5fb, 5wr, cm:half
> Heat: 4fb, 8wr, cm:half
> TripleO: 2fb, 3wr, cm:half
> Kolla: 4fb, 10wr, cm:full
> Oslo: 3fb, 5wr
> Ceilometer: 2fb, 7wr, cm:half
> Manila: 2fb, 4wr, cm:half
> Murano: 1fb, 2wr
> Rally: 2fb, 2wr
> Sahara: 2fb, 6wr, cm:half
> Glance: 3fb, 5wr, cm:full
> Magnum: 5fb, 5wr, cm:full
> Swift: 2fb, 12wr, cm:full
> OpenStackClient: 1fb, 1wr, cm:half
> Senlin: 1fb, 5wr, cm:half
> Monasca: 5wr
> Trove: 3fb, 6wr, cm:half
> Dragonflow: 1fb, 4wr, cm:half*
> Mistral: 1fb, 3wr
> Zaqar: 1fb, 3wr, cm:half
> Barbican: 2fb, 6wr, cm:half
> Designate: 1fb, 5wr, cm:half
> Astara: 1fb, cm:full
> Freezer: 1fb, 2wr, cm:half
> Congress: 1fb, 3wr
> Tacker: 1fb, 3wr, cm:half
> Kuryr: 1fb, 5wr, cm:half*
> Searchlight: 1fb, 2wr
> Cue: no space request received
> Solum: 1fb, 1wr
> Winstackers: 1wr
> CloudKitty: 1fb
> EC2API: 2wr
>
> Infrastructure: 3fb, 4wr, cm:day**
> Documentation: 4fb, 4wr, cm:half
> Quality Assurance: 4fb, 4wr, cm:day**
> PuppetOpenStack: 2fb, 3wr, cm:half
> OpenStackAnsible: 1fb, 8wr, cm:half
> Release mgmt: 1fb, cm:half
> Security: 3fb, 2wr, cm:half
> ChefOpenstack: 1fb, 2wr
> Stable maint: 1fb
> I18n: cm:half
> Refstack: 3wr
> OpenStack UX: 2wr
> RpmPackaging: 1fb***, 1wr
> App catalog: 1fb, 2wr
> Packaging-deb: 1fb***, 1wr
>
> *: shared meetup between Kuryr and Dragonflow
> **: shared meetup between Infra and QA
> ***: shared fishbowl between RPM packaging and DEB packaging, for collecting
> wider packaging feedback
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. Most of you have communicated constraints together
> with their room requests (like Manila not wanting overlap with Cinder
> sessions), and we'll try to accommodate them the best we can. If you have
> extra constraints you haven't communicated yet, please reply to me ASAP.

If PuppetOpenStack and Infra could avoid overlaps, that would be
awesome. Some people work in both groups, and to keep our collaboration
at its best I would like to set this constraint.

Thanks a lot,

> Now is time to think about the content you'd like to cover during those
> sessions and fire up those newton etherpads :)
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi



Re: [openstack-dev] [cinder] Let's do presentations/sessions on Mitaka's new complex features in Design Summit

2016-03-18 Thread Gorka Eguileor
On 16/03, D'Angelo, Scott wrote:
> I can do a presentation on microversions.
> 

"I love it when a plan comes together".
This one had your name written all over it.  ;-)

> 
> 
> Scott D'Angelo (scottda)
> 
> -- Original Message --
> From : Gorka Eguileor
> Subject : [openstack-dev] [cinder] Let's do presentations/sessions on 
> Mitaka's new complex features in Design Summit
> 
> 
> Hi,
> 
> As you all probably know, during this cycle we have introduced quite a
> big number of changes in cinder that will have a great impact in the
> development of the new functionality as well as changes to existing ones
> moving forward from an implementation perspective.
> 
> These changes to the cinder code include, but are not limited to,
> microversions, rolling upgrades, and conditional DB update functionality
> to remove API races, and while the latter has a good number of examples
> already merged and more patches under review, the other 2 have just been
> introduced and there are no patches in cinder that can serve as easy
> reference on how to use them.
> 
> As cinder developers we will all have to take these changes into account
> in our new patches, but it is hard to do so when one doesn't have an
> in-depth knowledge of them, and while we all probably know quite a bit
> about them, it will take some time to get familiar enough to be aware of
> *all* the implications of the changes made by newer patches.
> 
> And it's for this reason that I would like to suggest that during this
> summit's cinder design sessions we take the time to go through the
> changes giving not only an example of how they should be used in a
> patch, but also the do's, don'ts and gotchas.
> 
> A possible format for these explanations could be a presentation -around
> 30 minutes- by the people that were involved in the development,
> followed by Q&A.
> 
> I would have expected to see some of these in the "Upstream Dev" track,
> but unfortunately I don't (maybe I'm just missing them with all the cool
> title names).  And maybe these talks are not relevant for that track,
> being so specific and only relevant to cinder developers and all.
> 
> I believe these presentations would help the cinder team increase the
> adoption speed of these features while reducing the learning curve and
> the number of bugs introduced in the code caused by gaps in our
> knowledge and misinterpretations of the new functionality.
> 
> I would take lead on the conditional DB updates functionality, and I
> would have no problem doing the Rolling upgrades presentation as well.
> But I believe there are people more qualified and more deserving of
> doing that one; though I offer my help if they want it.
> 
> I have added those 3 topics to the Etherpad with Newton Cinder Design
> Summit Ideas [1] so people can volunteer and express their ideas in
> there.
> 
> Cheers,
> Gorka.
> 



[openstack-dev] [zaqar-ui][zaqar] Nominating Thai Tran for zaqar-ui core

2016-03-18 Thread Fei Long Wang

Hi team,

I would like to propose adding Thai Tran(tqtran) for the Zaqar UI core
team. Thai has done amazing work since the beginning of Zaqar UI project.
He is currently the most active contributor on Zaqar UI projects for
the last 90 days[1]. If no one objects, I'll proceed and add him in a week
from now.

[1] http://stackalytics.com/report/contribution/zaqar-ui/90

--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--




Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Douglas,

I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
Barbican). What I am opposed to is a Barbican lock-in, which already has a 
negative impact on Magnum adoption based on our feedback. I also want to see 
Barbican adoption increase in the future, to the point that all our users have 
Barbican installed in their clouds. If that happens, I have no problem having a 
hard dependency on Barbican.

Best regards,
Hongbin

-Original Message-
From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
Sent: March-18-16 9:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of Barbican.  
As the PTL for Barbican, it's frustrating to me to constantly hear from other 
projects that securing their sensitive data is a requirement but then turn 
around and say that deploying Barbican is a problem.

I guess I'm having a hard time understanding the operator persona that is 
willing to deploy new services with security features but unwilling to also 
deploy the service that is meant to secure sensitive data across all of 
OpenStack.

I understand one barrier to entry for Barbican is the high cost of Hardware 
Security Modules, which we recommend as the best option for the Storage and 
Crypto backends for Barbican.  But there are also other options for securing 
Barbican using open source software like DogTag or SoftHSM.

I also expect Barbican adoption to increase in the future, and I was hoping 
that Magnum would help drive that adoption.  There are also other projects that 
are actively developing security features like Swift Encryption, and DNSSEC 
support in Designate.  Eventually these features will also require Barbican, so 
I agree with Adrian that we as a community should be encouraging deployers to 
adopt the best security practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's feedback 
on that.  It definitely sounds to me like you're trying to put a square peg in 
a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
> 
>  
> 
> I think the Keystone approach will work. For others, please speak up 
> if it doesn't work for you.
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
>  
> 
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
> 
>  
> 
> Hongbin,
> 
>  
> 
> I tweaked the blueprint in accordance with this approach, and approved 
> it for Newton:
> 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
> re
> 
>  
> 
> I think this is something we can all agree on as a middle ground, If 
> not, I'm open to revisiting the discussion.
> 
>  
> 
> Thanks,
> 
>  
> 
> Adrian
> 
>  
> 
> On Mar 17, 2016, at 6:13 PM, Adrian Otto wrote:
> 
>  
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-ap
> i-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
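To make the shape of that proposal concrete, here is a minimal sketch of the encrypt/decrypt helper Adrian mentions: one symmetric key per bay, used to encrypt the certificate before it goes into the plain-text Keystone credentials store, and to decrypt it on read-back. This assumes the `cryptography` library's Fernet recipe; the function names are placeholders, not real Magnum or Keystone APIs.

```python
# Sketch only: one key per bay, kept by Magnum; the ciphertext is what
# would be written into the Keystone credentials store.
from cryptography.fernet import Fernet


def encrypt_cert(cert_pem: bytes):
    """Return (per_bay_key, ciphertext) for a PEM-encoded certificate."""
    per_bay_key = Fernet.generate_key()          # generated once per bay
    ciphertext = Fernet(per_bay_key).encrypt(cert_pem)
    return per_bay_key, ciphertext


def decrypt_cert(per_bay_key: bytes, ciphertext: bytes) -> bytes:
    """Reverse of encrypt_cert, run when the certificate is read back."""
    return Fernet(per_bay_key).decrypt(ciphertext)
```

Swapping Fernet for another algorithm later (the "select different encryption algorithms" point) would only touch these two helpers.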
> 
> Adrian
> 
> 
> On Mar 17, 2016, at 4:55 PM, Adrian Otto wrote:
> 
> Hongbin,
> 
> 
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu wrote:
> 
> Adrian,
> 
> I think we need a boarder set of inputs in this matter, so I moved
> the discussion from whiteboard back to here. Please check my replies
> inline.
> 
> 
> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that
> it's included in the setup for Magnum.
> 
> No, the solution is to explore a non-Barbican solution to store
> certificates 

Re: [openstack-dev] How do I calculate the semantic version prior to a release?

2016-03-18 Thread Alan Pevec
2016-02-29 9:32 GMT+01:00 Thomas Bechtold :
>> >>   python setup.py rpm_version
>> >
>> > The output, e.g. for Manila, is "1...b3.dev138"

And in Tempest rpm_version outputs 4.0.0.dev22 while setup.py
--version says 10.0.1.dev79 ?!

>> > Which is not really correct. The version is "2.0.0.0b3.dev138" .
>> > rpm supports the tilde ("~") for pre-release versions. Converting a
>> > PEP440-compatible version to an rpm version can be done with code like:
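For illustration, a conversion along the lines Thomas describes might look like this. This is a hedged sketch, not the actual code he refers to; it only handles the simple pre-release shapes seen in this thread.

```python
import re

# release like "2.0.0.0", optional pre-release like "b3", optional ".devNNN"
_PEP440 = re.compile(
    r'^(?P<release>\d+(?:\.\d+)*)'
    r'(?:\.?(?P<pre>(?:a|b|rc)\d+))?'
    r'(?:\.(?P<dev>dev\d+))?$'
)


def pep440_to_rpm(version):
    """Map a PEP 440 pre-release to an RPM '~' version, so that e.g.
    2.0.0.0~b3.dev138 sorts before 2.0.0.0 under rpmvercmp."""
    m = _PEP440.match(version)
    if not m:
        return version  # pass through anything we don't recognize
    suffix = '.'.join(p for p in (m.group('pre'), m.group('dev')) if p)
    return m.group('release') + ('~' + suffix if suffix else '')
```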

Fedora does not use ~ for pre-release versions [1]; instead, a Release
tag starting with 0 is used for pre-releases [2].

>> Does it? That's great. It didn't as far as anyone here knew when we
>> wrote the spec -
>> http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/pbr-semver.rst
>> - or we'd have avoided a tonne of complexity.

I did mention Fedora pre-release versioning in that spec review
https://review.openstack.org/#/c/96608/10/specs/juno/pbr-semver.rst@235

>> I see from http://www.rpm.org/ticket/56 that it only came in in RPM 4.10 - 
>> is RPM
>> 4.10 available on all the RPM platforms we support? Including I guess
>> old RHEL's and stuff?
>
> Good question. For SUSE, SLE12 has rpm 4.11 which is fine. No idea about RHEL.

EL7 has rpm 4.11 too, but ~ is not used; it follows the above Fedora
guidelines for pre-releases.
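Concretely, the Fedora convention from [1][2] keeps the final version in the Version tag and encodes pre-release status in a Release tag starting with 0. A hypothetical spec fragment for the Manila example above might read:

```spec
# Hypothetical fragment following the Fedora pre-release convention:
# Version stays at the final version; a Release starting with 0 marks
# the package as a pre-release (here: beta 3, dev snapshot 138).
Name:     openstack-manila
Version:  2.0.0
Release:  0.1.0b3.dev138%{?dist}
```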


Cheers,
Alan

[1] https://fedoraproject.org/wiki/Packaging:NamingGuidelines#Version_Tag
[2] 
https://fedoraproject.org/wiki/Packaging:NamingGuidelines#Pre-Release_packages



[openstack-dev] [release][release] reno 1.6.1 release

2016-03-18 Thread no-reply
We are pleased to announce the release of:

reno 1.6.1: RElease NOtes manager

With source available at:

http://git.openstack.org/cgit/openstack/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.

Changes in reno 1.6.0..1.6.1


8df79bf handle deleted notes properly
2a3b26a refactor argument buildup to make it more reusable
abad748 improve test coverage
627a1da always show coverage report for test runs

Diffstat (except docs and test files)
-

.gitignore   |  1 +
reno/main.py | 85 +-
reno/scanner.py  | 47 +
reno/utils.py|  8 ++--
tox.ini  |  4 +-
8 files changed, 243 insertions(+), 69 deletions(-)






[openstack-dev] [Tempest] [Devstack] Where to keep tempest configuration?

2016-03-18 Thread Vasyl Saienko
Hello Community,

We started using Tempest/devstack plugins. They allow us to avoid
bothering other teams when project-specific changes need to be made.
Tempest configuration, however, is still performed in devstack [0].
So I would like to raise the following questions:


   - Where should we keep project-specific Tempest configuration? Example
   [1]
   - Where should we keep Tempest configuration shared between projects?
   Example [2]

In my opinion, it would be good to move project-specific Tempest
configuration to the projects' repositories.

[0] https://github.com/openstack-dev/devstack/blob/master/lib/tempest
[1]
https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L509-L513
[2]
https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L514-L523
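For context, the devstack plugin mechanism referred to above is enabled per project from a deployment's local.conf; a minimal (hypothetical project name) example:

```ini
# local.conf of a devstack deployment
[[local|localrc]]
enable_plugin myservice https://git.openstack.org/openstack/myservice
```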

Thank you in advance,
Vasyl Saienko


Re: [openstack-dev] [all] purplerbot irc bot for logs and transclusion

2016-03-18 Thread Paul Belanger
On Wed, Mar 16, 2016 at 02:50:38PM +, Chris Dent wrote:
> On Wed, 16 Mar 2016, Paul Belanger wrote:
> 
> >So, I cannot comment on how useful the bot is, but if projects are in fact
> >using it I would like to see it added to openstack-infra so we can properly
> >manage it.
> 
> I was waiting to see if there's sufficient interest. The channels
> that it is already in thus far are just experiments. Nobody has
> stepped up and said either of:
> 
> * "This is something we should make sure we have around"
> * "I'd want this to be around if we just added feature X"
> 
> If there's not, I can avoid all that and continue using it for my own
> purposes.
> 
I'd still sync up with openstack-infra on IRC, if only to help spread the word.
I know personally, I'd love to see some sort of reminder functionality added
into an IRC bot, which pings me each time there is some sort of IRC meeting I
need to attend.
> -- 
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent tw: @anticdent





Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2016-03-18 Thread Andrew Beekhof
On Tue, Feb 16, 2016 at 2:58 AM, Bogdan Dobrelya 
wrote:

> Hello!
> A quick status update inline:
>

[snip]


> So, what's next?
>
> - I'm open for merging both [5], [6] of the existing OCF RA solutions,
> as it was proposed by Andrew Beekhof. Let's make it happen.
>

Great :-)
Oyvind (CC'd) is the relevant contact from our side, should he talk to you
or someone else?


>
> - Would be nice to make Travis CI based gate to the upstream
> rabbitmq-server's HA OCF RA. As for now, it relies on Fuel CI gates and
> manual testing with atlas boxes.
>
> - Please also consider Travis or a suchlike CI for the resource-agents'
> rabbit-cluster OCF RA as well.
>
> [1] https://github.com/bogdando/rabbitmq-cluster-ocf-vagrant
> [2] https://github.com/bogdando/packer-atlas-example
> [3] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf/
> [4] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf-wily/
> [5]
>
> https://github.com/rabbitmq/rabbitmq-server/blob/master/scripts/rabbitmq-server-ha.ocf
> [6]
>
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
>
> >
> > I'm also planning to refer this official RabbitMQ cluster setup guide in
> > the OpenStack HA guide as well [2].
>
> Done, see [7]
>
> [7] http://docs.openstack.org/ha-guide/controller-ha-rabbitmq.html
>
> >
> > PS. Original rabbitmq-users mail thread is here [3].
> > [openstack-operators] cross posted as well.
> >
> > [0] http://www.rabbitmq.com/pacemaker.html
> > [1] https://atlas.hashicorp.com/bogdando/boxes/rabbitmq-cluster-ocf
> > [2] https://bugs.launchpad.net/openstack-manuals/+bug/1497528
> > [3] https://groups.google.com/forum/#!topic/rabbitmq-users/BnoIQJb34Ao
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-18 Thread Valeriy Ponomaryov
The main reason to split negative tests out, as I see it, is the attempt to
remove "corner-case, not integration" tests from the Tempest tree. If so, then
"negative" is not the proper criterion, I think.
Tempest has lots of positive corner-case tests. So, if we move something, we
should move exactly the corner-case tests and leave only the integration
tests of any kind - whether negative or positive.
That is to say, in doing so, all single-project tests would live in their own
repos as plugins and only integration tests would remain in Tempest.

Valeriy

On Thu, Mar 17, 2016 at 7:31 AM, Qiming Teng 
wrote:

> >
> > I'd love to see this idea explored further. What happens if Tempest
> > ends up without tests, as a library for shared code as well as a
> > centralized place to run tests from via plugins?
> >
>
> Also curious about this. It seems weird to separate the 'positive' and
> the 'negative' ones, assuming those patches are mostly contributed by
> the same group of developers.
>
> Qiming
>
>
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com


Re: [openstack-dev] [cinder] Let's do presentations/sessions on Mitaka's new complex features in Design Summit

2016-03-18 Thread Kendall J Nelson

If people are interested, I could do a small presentation on genconfig.

All the Best,

Kendall J. Nelson
Software Engineer & OpenStack Contributor
E-mail: kjnel...@us.ibm.com
Cell Phone: (952) 215-4025
IRC Nickname: diablo_rojo
IBM, 3605 Hwy 52 N, Rochester, MN 55901-1407, United States






From:   Gorka Eguileor 
To: OpenStack Development Mailing List

Date:   03/14/2016 03:57 PM
Subject: [openstack-dev] [cinder] Let's do presentations/sessions on
Mitaka's new complex features in Design Summit



Hi,

As you all probably know, during this cycle we have introduced quite a
big number of changes in cinder that will have a great impact on the
development of the new functionality as well as changes to existing ones
moving forward from an implementation perspective.

These changes to the cinder code include, but are not limited to,
microversions, rolling upgrades, and conditional DB update functionality
to remove API races, and while the latter has a good number of examples
already merged and more patches under review, the other 2 have just been
introduced and there are no patches in cinder that can serve as easy
reference on how to use them.

As cinder developers we will all have to take these changes into account
in our new patches, but it is hard to do so when one doesn't have an
in-depth knowledge of them, and while we all probably know quite a bit
about them, it will take some time to get familiar enough to be aware of
*all* the implications of the changes made by newer patches.

And it's for this reason that I would like to suggest that during this
summit's cinder design sessions we take the time to go through the
changes giving not only an example of how they should be used in a
patch, but also the do's, don'ts and gotchas.

A possible format for these explanations could be a presentation -around
30 minutes- by the people that were involved in the development,
followed by Q&A.

I would have expected to see some of these in the "Upstream Dev" track,
but unfortunately I don't (maybe I'm just missing them with all the cool
title names).  And maybe these talks are not relevant for that track,
being so specific and only relevant to cinder developers and all.

I believe these presentations would help the cinder team increase the
adoption speed of these features while reducing the learning curve and
the number of bugs introduced in the code caused by gaps in our
knowledge and misinterpretations of the new functionality.

I would take lead on the conditional DB updates functionality, and I
would have no problem doing the Rolling upgrades presentation as well.
But I believe there are people more qualified and more deserving of
doing that one; though I offer my help if they want it.

I have added those 3 topics to the Etherpad with Newton Cinder Design
Summit Ideas [1] so people can volunteer and express their ideas in
there.

Cheers,
Gorka.






Re: [openstack-dev] [sahara] PTL non-candidacy

2016-03-18 Thread Chad Roberts
Absolutely +1, elmiko.  It's going to seem weird having a new PTL :)

On Wed, Mar 16, 2016 at 3:27 PM, michael mccune  wrote:

> i know you are not leaving the project Sergey, but i just wanted to say
> that it's been a true pleasure working with you as the PTL.
>
> regards,
> mike
>
>
> On 03/15/2016 07:33 PM, Sergey Lukjanov wrote:
>
>> Hi folks,
>>
>> the PTL self-nomination period is opened and I want to note that I won’t
>> be running again as the Data Processing PTL. I’ve been the Sahara PTL
>> from the beginning of the project (oh, already more then 3 years) and
>> I’m very proud of the things we’ve achieved over this time. The project
>> community grown from a few people working not always in open source way
>> to the successfully incubated and then integrated OpenStack project.
>>
>> The main reason I'm stepping down is that I have other inter-company
>> priorities and don't have enough time to continue being a good PTL.
>> Another reason - I've started thinking that it’s a healthy flow to
>> change PTLs each cycle and I’m going to propose this approach on the
>> upcoming summit for Sahara community.
>>
>> I’m not planning to fully leave project, I’d like to keep reviewing
>> (especially specs) and helping with release management, so, I’ll be
>> around and will help the next PTLs make a transitions where it’ll be
>> needed.
>>
>> Thanks.
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Principal Software Engineer
>> Mirantis Inc.
>>
>>
>>
>>
>
>


Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-18 Thread John Dickinson


On 18 Mar 2016, at 7:15, Amrith Kumar wrote:

> As we were working through reviews for the Mitaka release, the Trove team was 
> trying to track groups of reviews that were needed for a specific milestone, 
> like m-1, or m-3 or in the recent days for rc1.
>
> The only way we could find was to have someone (in this instance, me) 'star' 
> the reviews that we wanted and then have people look for reviews with 
> 'starredby:amrith' and status:open.
>
> How do other projects do this? Is there a simple way to tag reviews in a 
> searchable way?
>
> Thanks,
>
> -amrith
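For what it's worth, Gerrit's own search operators already cover some of this; setting a shared `topic:` on related changes makes them searchable as a group. A few illustrative queries (entered in the Gerrit search box):

```
topic:mitaka-rc1 status:open          # changes grouped under a shared topic
starredby:amrith status:open          # the workaround described above
project:openstack/trove label:Code-Review=-1
```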


We've had 3 iterations of tracking/prioritizing reviews in Swift (and it's 
still a work-in-progress).

1) I write down stuff on a wiki page. 
https://wiki.openstack.org/wiki/Swift/PriorityReviews Currently, this is 
updated for the work we're getting done over this next week for the Mitaka 
release.

2) Gerrit dashboards like https://goo.gl/mtEv1C. This one has a section at the 
top that includes "starred by the PTL" patches.

3) A dashboard generated from gerrit and git data. 
http://not.mn/swift/swift_community_dashboard.html The "Community Starred 
Patches" is a list of the top 20 commonly starred patches by the Swift 
community. Basically, for every person who has gerrit activity around Swift, I 
pull their stars. From that I can see which patches are more commonly starred. 
I also weight each person's stars according to their level of activity in the 
project. This gives a very good idea of what the community as a whole finds 
important.
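A minimal sketch of the weighting described above, with hypothetical data shapes (the real dashboard pulls stars and activity from gerrit and git):

```python
from collections import defaultdict


def rank_starred_patches(stars_by_person, activity_by_person, top_n=20):
    """Rank patches by activity-weighted star count.

    stars_by_person:    {person: [change_ids they starred]}
    activity_by_person: {person: review/commit activity count}
    """
    total_activity = sum(activity_by_person.values()) or 1
    score = defaultdict(float)
    for person, changes in stars_by_person.items():
        # A star from a highly active contributor counts for more.
        weight = activity_by_person.get(person, 0) / total_activity
        for change in changes:
            score[change] += weight
    return sorted(score, key=score.get, reverse=True)[:top_n]
```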

I've found a role for all of these tools at different times--I don't think one 
is generally better than another. Right now as we're finishing up a release for 
Mitaka, all 3 tools are useful for helping coordinate the remaining work.

My generated community dashboard isn't done. There's a lot more information I 
can pull and use to help prioritize reviews. I plan on working on this after 
the Mitaka release gets cut. Here's a teaser for the next thing I'll be doing: 
given a person's email, generate an ordered list of patches that person should 
review or work on to be most effective.

--John






[openstack-dev] [Trove] Mitaka RC1 available

2016-03-18 Thread Thierry Carrez
And... we now have Trove ready and producing a release candidate for the 
end of the Mitaka cycle! You can find the RC1 source code tarball at:


https://tarballs.openstack.org/trove/trove-5.0.0.0rc1.tar.gz

Unless release-critical issues are found that warrant a release 
candidate respin, this RC1 will be formally released as the final Mitaka 
Trove release on April 7th. You are therefore strongly encouraged to 
test and validate this tarball !


Alternatively, you can directly test the stable/mitaka release branch at:

http://git.openstack.org/cgit/openstack/trove/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please 
file it at:


https://bugs.launchpad.net/trove/+filebug

and tag it *mitaka-rc-potential* to bring it to the Trove release crew's 
attention.


Note that the "master" branch of Trove is now open for Newton 
development, and feature freeze restrictions no longer apply there !


Regards,

--
Thierry Carrez (ttx)



[openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-18 Thread Ken'ichi Ohmichi
Hi

I have one proposal[1] related to negative tests in Tempest, and I am
hoping for opinions before doing that.

Tempest currently contains negative tests, and patches are sometimes
posted to add more, but I'd like to propose removing them from Tempest
instead.

Negative tests verify the surface of each component's REST API without
any integration between components. That doesn't match the integration
testing which is the scope of Tempest.
In addition, if we add negative tests to Tempest, we spend test run
time on other components' gates. For example, we currently operate
negative tests of Keystone and other components on the gate of Nova.
That is wasteful, so we need to avoid adding more negative tests to
Tempest now.

If we want to add negative tests, a nice option is to implement them
in each component's repo with the Tempest plugin interface. We can
avoid running negative tests on other components' gates, and each
component team can decide which negative tests are valuable on its own
gate.

In the long term, all negative tests would be migrated into each
component's repo with the Tempest plugin interface, and we would run
only the valuable negative tests on each gate.
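For reference, hooking such per-project tests in via the Tempest plugin interface is mostly a matter of exposing a plugin class through the `tempest.test_plugins` entry point in the project's own setup.cfg; the names below are placeholders:

```ini
# setup.cfg of the component's own repo (hypothetical names)
[entry_points]
tempest.test_plugins =
    myservice_negative_tests = myservice.tests.tempest.plugin:MyServiceTempestPlugin
```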

Any thoughts?

Thanks
Ken Ohmichi

---
[1]: https://review.openstack.org/#/c/293197/



[openstack-dev] [neutron][dvr]What does table 9 used for in br-tun?

2016-03-18 Thread Zhi Chang
Hi guys,
In DVR mode, I have some questions about the flows in table 9 of br-tun. I see
these flows in br-tun:


 cookie=0x9a5f42115762fc76, duration=392186.503s, table=9, n_packets=1, 
n_bytes=90, idle_age=247, hard_age=65534, priority=1,dl_src=fa:16:3f:64:82:2f 
actions=output:1
 cookie=0x9a5f42115762fc76, duration=392186.447s, table=9, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=1,dl_src=fa:16:3f:76:6c:6f 
actions=output:1
 cookie=0x9a5f42115762fc76, duration=392186.388s, table=9, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=1,dl_src=fa:16:3f:87:39:a3 
actions=output:1
 cookie=0x9a5f42115762fc76, duration=392186.333s, table=9, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=1,dl_src=fa:16:3f:93:41:45 
actions=output:1



Port 1 is a patch interface which is connecting to br-int. 


Question A: there are four MAC addresses in these flows, but these MAC
addresses don't exist in my Neutron. Where do they come from?


Question B: The action of each flow is output:1 - why? Why send the packets to
br-int?




Thanks
Zhi Chang


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Kevin L. Mitchell
On Fri, 2016-03-18 at 20:58 +, Jeremy Stanley wrote:
> Yes, we had a session on it several summits ago, a group of
> contributors said they were going to work on developing it, pushed
> up a skeleton repo, and then we never heard back from them after
> that. Unfortunate.

Yeah, unfortunately, we weren't able to get priority for the project,
and I'm afraid it's probably not going to go anywhere now :/
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Reminder to move implemented nova specs from mitaka

2016-03-18 Thread Markus Zoeller
Matt Riedemann  wrote on 03/18/2016 03:20:23 
PM:

> From: Matt Riedemann 
> To: openstack-dev@lists.openstack.org
> Date: 03/18/2016 03:22 PM
> Subject: Re: [openstack-dev] [nova] Reminder to move implemented nova 
> specs from mitaka
> 
> 
> 
> On 3/18/2016 5:46 AM, Markus Zoeller wrote:
> > Matt Riedemann  wrote on 03/16/2016 
09:49:06
> > PM:
> >
> >> From: Matt Riedemann 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Date: 03/16/2016 09:50 PM
> >> Subject: [openstack-dev] [nova] Reminder to move implemented nova
> >> specs from mitaka
> >>
> >> Specs are proposed to the 'approved' subdirectory and when they are
> >> completely implemented in launchpad (the blueprint status is
> >> 'Implemented'), we should move the spec from the 'approved' 
subdirectory
> >
> >> to the 'implemented' subdirectory in the nova-specs repo.
> >>
> >> For example:
> >>
> >> https://review.openstack.org/#/c/248142/
> >>
> >> These are the mitaka series blueprints from launchpad:
> >>
> >> https://blueprints.launchpad.net/nova/mitaka
> >>
> >> If anyone is really daring they could go through and move all of the
> >> implemented ones in a single change.
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt Riedemann
> >>
> >
> > Is there a best practice how to handle a partially implemented bp 
(with
> > spec file)? For example [1] needs additional effort during Newton to
> > finish it.
> >
> > References:
> > [1] 
https://blueprints.launchpad.net/nova/+spec/centralize-config-options
> >
> > Regards, Markus Zoeller (markus_z)
> >
> >
> >
> > 
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> John was just telling me about this yesterday. I guess one thing we can 
> do is add a "(partial)" suffix to the title and mark the blueprint 
> complete for mitaka, and then the idea is to create a new blueprint for 
> newton for continuing the work, e.g. centralize-config-options-newton.
> 
> The idea being we show that something was completed in mitaka when 
> you're looking at blueprints in launchpad for mitaka.
> 
> I'm generally OK with that approach, the thing I don't really like is 
> when we have to re-propose specs and/or if there are dependent 
> blueprints in launchpad. Because creating the new blueprint means you 
> have to update the link in the spec when re-proposing it and you need to 

> update all of the dependent specs in launchpad for the new newton spec. 
> Maybe it's not a big deal, I can see benefits to either approach. 
> Personally I don't like to consider a blueprint complete until it's 
> actually complete, like has been the case with some of the cells v2 
> blueprints we've re-proposed for newton.
> 
> With long cleanup efforts like objects and config options though, I can 
> see how having release-specific blueprints is good.
> 
> Markus, so to answer your original question, :), I'd probably mark the 
> existing bp as complete for mitaka and create a new 
> centralize-config-options-newton blueprint.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann

OK, a new blueprint "centralize-config-options-newton" plus a copy
of the spec (with updates according to [1]). The already-pushed changes
will then need to be updated. As many of them need to be rebased anyway, that
should be fine, I think. I'm going to communicate that. Thanks, Matt!

[1] 
https://specs.openstack.org/openstack/nova-specs/readme.html#previously-approved-specifications




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Boris Pavlovic
Hi everybody,

What if we just created a new project for an alternative Gerrit web UI and
used it?
With the current set of web frameworks, I don't think it would be too hard.

Best regards,
Boris Pavlovic

On Fri, Mar 18, 2016 at 9:50 AM, Andrew Laski  wrote:

>
>
>
> On Fri, Mar 18, 2016, at 10:31 AM, Andrey Kurilin wrote:
>
> Hi all!
> I want to start this thread because I'm tired. I have spent a lot of time, but
> I can't review as easily as I could with the old interface. The new Gerrit is
> awful. Here are several issues:
>
> * It is not possible to review patches on a mobile phone. The "new" "modern"
> theme is not adapted for small screens.
> * Leaving comments is a hard task. The page position can jump at any time.
> * It is impossible to turn off hot-keys. The page position changes -> I don't
> see that the comment pop-up has closed -> I keep typing a few letters ->
> unexpected things happen (open edit mode, modify something, save, exit...)
> * The patch-dependency tree is not user-friendly.
> * The summary table doesn't include the status of the patch (I need to scroll
> to the end of the page to know whether a patch is merged or not).
> * There is no "Comment"/"Reply" button at the end of the page (after all
> comments).
> * It is impossible to turn off the "new" search mechanism.
>
> Is it possible to bring back the old, classic theme? It was good when we
> had the old and new themes together...
>
>
> I spent a little bit of time investigating the possibility of a chrome
> extension to turn off the keyboard shortcuts and search mechanism a little
> while ago. I eventually gave up but what I learned is that those changes
> are apparently related to the inline edit ability that was added. The edit
> window only loads in what is currently visible so regular browser search
> would not work.
>
> I've adapted to the new interface and really like some of the new
> capabilities it provides, but having the page jump around while I'm
> commenting has been a huge annoyance.
>
>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> *__*
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/trove failed

2016-03-18 Thread Amrith Kumar
I just ran stable/liberty through these tests on a sandbox and they passed. I'm 
not sure what caused this failure but if it persists, I'll take another look.

Some changes are coming soon on master and I'll backport them to mitaka and 
liberty. They'll make debugging these tests a whole lot easier.

-amrith

> -Original Message-
> From: A mailing list for the OpenStack Stable Branch test reports.
> [mailto:openstack-stable-ma...@lists.openstack.org]
> Sent: Friday, March 18, 2016 5:17 AM
> To: openstack-stable-ma...@lists.openstack.org
> Subject: [Openstack-stable-maint] Stable check of openstack/trove failed
> 
> Build failed.
> 
> - periodic-trove-docs-kilo http://logs.openstack.org/periodic-
> stable/periodic-trove-docs-kilo/3e52908/ : SUCCESS in 6m 25s
> - periodic-trove-python27-kilo http://logs.openstack.org/periodic-
> stable/periodic-trove-python27-kilo/c8534f1/ : SUCCESS in 3m 16s
> - periodic-trove-docs-liberty http://logs.openstack.org/periodic-
> stable/periodic-trove-docs-liberty/e91cc6c/ : SUCCESS in 3m 15s
> - periodic-trove-python27-liberty http://logs.openstack.org/periodic-
> stable/periodic-trove-python27-liberty/23381da/ : FAILURE in 11m 56s
> 
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar-ui][zaqar] Nominating Thai Tran for zaqar-ui core

2016-03-18 Thread Flavio Percoco

On 18/03/16 08:49 -0400, Ryan Brown wrote:

On 03/18/2016 06:24 AM, Fei Long Wang wrote:

Hi team,

I would like to propose adding Thai Tran (tqtran) to the Zaqar UI core
team. Thai has done amazing work since the beginning of the Zaqar UI project.
He is currently the most active contributor to the Zaqar UI project over
the last 90 days [1]. If no one objects, I'll proceed and add him a week
from now.

[1] http://stackalytics.com/report/contribution/zaqar-ui/90


+1, and thank you for your work on Zaqar's UI!


+1


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-18 Thread Vladimir Kozhukalov
It would be great if at least one Fuel slot did not overlap with Ironic
sessions.

Vladimir Kozhukalov

On Fri, Mar 18, 2016 at 6:26 PM, Jeremy Stanley  wrote:

> On 2016-03-18 08:53:11 +0530 (+0530), Armando M. wrote:
> > It'd be nice if Neutron didn't overlap as much with Nova, Ironic, QA and
> > infra sessions, but I appreciate this could be a tall order.
>
> It seems like every project wants to avoid overlap with Infra
> sessions, so I'm not sure it will be easy to accommodate since we
> have to have our sessions _sometime_. However, we do usually try to
> make sure we have scouts in any Infra-relevant sessions for other
> teams, so give us a heads up on which ones those are and we will try
> to get someone to volunteer to sit in if we're not already spread
> too thin.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-18 Thread Emilien Macchi
On Fri, Mar 18, 2016 at 10:55 AM, Dmitry Ilyin  wrote:
> Hello.
>
> I'm the author of fuel-infra/puppet-pacemaker and I guess I would be able to
> merge the code from "fuel-infra/puppet-pacemaker" to
> "openstack/puppet-pacemaker"
> We would have a single set of pcmk_* types and two providers for
> each type: "pcs" and "xml"; there is also a "noop" provider.
>
> It would be possible to choose the implementation by specifying:
>
> pcmk_resource { 'my-resource' :
>   provider => 'pcs',
> }
>
> or
>
> pcmk_resource { 'my-resource' :
>   provider => 'xml',
> }

I think those are indeed our major differences.
If you are interested, we can start to work together on
openstack/puppet-pacemaker, and add some experimental CI jobs for Fuel
(we already have TripleO jobs) gating the puppet-pacemaker module.
So together we could iterate, adding the pieces without breaking either module.

If you want, we can chat about it on IRC, during a meeting or not, so
we can make progress on it during Newton cycle.

Thanks a lot for your collaboration,

>
> 2016-03-18 2:50 GMT+03:00 Andrew Woodward :
>>
>> I'd be happy to see more collaboration here as well, I'd like to hear from
>> the maintainers on both sides identify some of what isn't implemented on
>> each so we can better decide which one to continue from, develop feature
>> parity and then switch to.
>>
>> On Thu, Mar 17, 2016 at 12:03 PM Emilien Macchi 
>> wrote:
>>>
>>> On Thu, Mar 17, 2016 at 2:22 PM, Sergii Golovatiuk
>>>  wrote:
>>> > Guys,
>>> >
>>> > Fuel has own implementation of pacemaker [1]. It's functionality may be
>>> > useful in other projects.
>>> >
>>> > [1] https://github.com/fuel-infra/puppet-pacemaker
>>>
>>> I'm afraid to see 3 duplicated efforts to deploy Pacemaker:
>>>
>>> * puppetlabs/corosync, not well maintained and not suitable for Red
>>> Hat for reasons related to the way it uses pcs.
>>> * openstack/puppet-pacemaker, only working on Red Hat systems,
>>> suitable for TripleO and previous Red Hat installers.
>>> * fuel-infra/puppet-pacemaker, which looks like a more robust
>>> implementation of puppetlabs/corosync.
>>>
>>> It's pretty clear that Mirantis and Red Hat, both major OpenStack
>>> contributors who deploy Pacemaker, do not use puppetlabs/corosync but
>>> have their own implementations.
>>> Maybe it would be time to converge at some point. I see a lot of
>>> potential in fuel-infra/puppet-pacemaker to be honest. After reading
>>> the code, I think it's still missing some features we might need to
>>> make it work on TripleO, but we could work together on establishing the
>>> list of missing pieces and discuss implementing them, so our
>>> modules would converge.
>>>
>>> I don't mind using tool X or Y; I want the best one, and it seems both
>>> of our groups have expertise that could help us one day
>>> replace the puppetlabs/corosync code with a combined Fuel & Red Hat module.
>>> What do you think?
>>>
>>> >
>>> > --
>>> > Best regards,
>>> > Sergii Golovatiuk,
>>> > Skype #golserge
>>> > IRC #holser
>>> >
>>> > On Sat, Feb 13, 2016 at 6:20 AM, Emilien Macchi
>>> > 
>>> > wrote:
>>> >>
>>> >>
>>> >> On Feb 12, 2016 11:06 PM, "Spencer Krum"  wrote:
>>> >> >
>>> >> > The module would also be welcome under the voxpupuli[0] namespace on
>>> >> > github. We currently have a puppet-corosync[1] module, and there is
>>> >> > some
>>> >> > overlap there, but a pure pacemaker module would be a welcome
>>> >> > addition.
>>> >> >
>>> >> > I'm not sure which I would prefer, just that VP is an option. For
>>> >> > greater openstack integration, gerrit is the way to go. For greater
>>> >> > participation from the wider puppet community, github is the way to
>>> >> > go.
>>> >> > Voxpupuli provides testing and releasing infrastructure.
>>> >>
>>> >> The thing is, we might want to gate it on tripleo since it's the first
>>> >> consumer right now. Though I agree VP would be a good place too, to
>>> >> attract
>>> >> more puppet users.
>>> >>
>>> >> Dilemma!
>>> >> Maybe we could start using VP, with good testing and see how it works.
>>> >>
>>> >> Iterate later if needed. Thoughts?
>>> >>
>>> >> >
>>> >> > [0] https://voxpupuli.org/
>>> >> > [1] https://github.com/voxpupuli/puppet-corosync
>>> >> >
>>> >> > --
>>> >> >   Spencer Krum
>>> >> >   n...@spencerkrum.com
>>> >> >
>>> >> > On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
>>> >> > > Please look and vote:
>>> >> > > https://review.openstack.org/279698
>>> >> > >
>>> >> > >
>>> >> > > Thanks for your feedback!
>>> >> > >
>>> >> > > On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
>>> >> > > > I like the idea of moving it to use the OpenStack
>>> >> > > > infrastructure.
>>> >> > > >
>>> >> > > > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec
>>> >> > > > >> >> > > > > wrote:
>>> >> > > >
>>> >> > > 

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
OK. If using Keystone is not acceptable, I am going to propose a new approach:

- Store the data in the Magnum DB
- Encrypt the data before writing it to the DB
- Decrypt the data after loading it from the DB
- Store the encryption/decryption key in the config file
- Use an encryption/decryption algorithm provided by a library

The approach above is exactly the approach used by Heat to protect hidden
parameters [1]. Compared to the Barbican option, this approach is much lighter
and simpler, and provides a basic level of data protection. It is a good
supplement to the Barbican option, which is heavier but provides an advanced
level of protection. It fits the use case where users don't want to install
Barbican but still want basic protection.

If you disagree, I would ask you to justify why this approach works for
Heat but not for Magnum. I also wonder whether Heat plans to set a hard
dependency on Barbican just for protecting the hidden parameters.

If you don’t like code duplication between Magnum and Heat, I would suggest to 
move the implementation to a oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html
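For illustration only, a minimal sketch of the data flow described above: read a key from the service config file, encrypt before the DB write, decrypt after the DB read. The keystream cipher below is a deliberately toy stand-in so the sketch stays self-contained; a real implementation would use AES via an oslo or cryptography library as Heat does, and the option name in the comment is an assumption.

```python
import base64
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from the key -- NOT real cryptography,
    # only a placeholder for an AES implementation.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_field(plaintext: str, conf_key: str) -> str:
    # Called just before writing the column to the DB.
    data = plaintext.encode()
    ks = _keystream(conf_key.encode(), len(data))
    return base64.b64encode(bytes(a ^ b for a, b in zip(data, ks))).decode()

def decrypt_field(token: str, conf_key: str) -> str:
    # Called just after loading the column from the DB.
    data = base64.b64decode(token)
    ks = _keystream(conf_key.encode(), len(data))
    return bytes(a ^ b for a, b in zip(data, ks)).decode()

# e.g. a "kek" option in magnum.conf (hypothetical option name)
kek = "key-from-config-file"
stored = encrypt_field("-----BEGIN RSA PRIVATE KEY-----...", kek)
assert decrypt_field(stored, kek) == "-----BEGIN RSA PRIVATE KEY-----..."
```

Because the key lives in the config file, this protects against DB dumps but not against a compromised controller host, which is the basic-versus-advanced trade-off between the two options.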

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feadback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the
blueprint, Keystone is not encrypting the data, so Magnum would be on the hook
to do it. That means that if security is a requirement, you'd have to
duplicate more than just code: Magnum would take on a larger security
burden. Since we have a system designed to securely store data, I think that's
the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] PTL non-candidacy

2016-03-18 Thread michael mccune
i know you are not leaving the project Sergey, but i just wanted to say 
that it's been a true pleasure working with you as the PTL.


regards,
mike

On 03/15/2016 07:33 PM, Sergey Lukjanov wrote:

Hi folks,

the PTL self-nomination period is open and I want to note that I won’t
be running again as the Data Processing PTL. I’ve been the Sahara PTL
since the beginning of the project (oh, already more than 3 years) and
I’m very proud of the things we’ve achieved over this time. The project
community has grown from a few people, not always working in an open-source
way, into a successfully incubated and then integrated OpenStack project.

The main reason I’m stepping down is that I have other internal company
priorities and don’t have enough time to continue being a good PTL.
Another reason: I’ve started thinking that it’s a healthy practice to
change PTLs each cycle, and I’m going to propose this approach to the
Sahara community at the upcoming summit.

I’m not planning to leave the project entirely; I’d like to keep reviewing
(especially specs) and helping with release management, so I’ll be
around and will help the next PTLs make the transition where needed.

Thanks.

--
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Daneyon Hansen (danehans)


> On Mar 17, 2016, at 11:41 AM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same
> way all openstack services do here - this part seems to work fine

I expected the API to work. Thanks for the confirmation. 
> 
> For the conductor we're stopped due to bay certificates - we don't
> currently have barbican so local was the only option. To get them
> accessible on all nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)

How funny. I had this concern and proposed a similar solution to hongbin over 
irc yesterday. I suggested we discuss this issue at Austin, as Barbican is 
becoming a barrier to Magnum adoption. Please keep this thread updated as you 
progress with your deployment and I'll do the same. 
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
>  wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest unknown
>> is the Conductor service. Any insight you can provide is greatly
>> appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-18 Thread Oleksii Chuprykov
Thank you all! I will do my best as a Heat core.

On Thu, Mar 17, 2016 at 10:18 AM, Sergey Kraynev 
wrote:

> Looks like it was a unanimous decision :)
> Oleksii, my congratulations !
> Good work. I will add you to necessary groups ;)
>
> On 17 March 2016 at 04:34, Huangtianhua  wrote:
> > +1 :)
> >
> > -----Original Message-----
> > From: Sergey Kraynev [mailto:skray...@mirantis.com]
> > Sent: 16 March 2016 18:58
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core
> reviewer
> >
> > Hi Heaters,
> >
> > The Mitaka release is close to finishing, so it's a good time to review
> > the results of our work.
> > One of these results is analyzing contribution for the last release
> > cycle.
> > According to the data [1] we have one good candidate for nomination to
> > the core-reviewer team:
> > Oleksii Chuprykov.
> > During this release he showed significant review metrics.
> > His reviews were valuable and useful, and he has a sufficient level of
> > expertise in the Heat code.
> > So I think he is worthy of joining the core-reviewers team.
> >
> > I ask you to vote and decide his destiny.
> >  +1 - if you agree with his candidature
> >  -1  - if you disagree with his candidature
> >
> > [1] http://stackalytics.com/report/contribution/heat-group/120
> >
> > --
> > Regards,
> > Sergey.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Regards,
> Sergey.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-18 Thread phuon...@vn.fujitsu.com
Hi Kevin,

I am interested in Jacket too, so I would like to contribute once the work
starts.

Thanks,
Phuong.

From: Janki Chhatbar [mailto:jankihchhat...@gmail.com]
Sent: Wednesday, March 16, 2016 8:21 PM
To: zs
Subject: Re: [openstack-dev] [jacket] Introduction to jacket, a new project

Hi Kevin

I read the wiki and quite liked it. Good going. I would like to contribute to
it once the work starts.
Do let me know about it.


Thanks
Janki

Sent from my BlackBerry 10 smartphone.
From: zs
Sent: Wednesday, 16 March 2016 18:30
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [jacket] Introduction to jacket, a new project


Hi Gordon,

Thank you for your suggestion.

I think jacket is different from tricircle. Tricircle focuses on
OpenStack deployment across multiple sites, while jacket focuses on managing
different clouds as if they were one cloud.  There are some differences:
1. Account management and API model: Tricircle faces multiple OpenStack
instances which can share one Keystone and have the same API model, but jacket
faces different clouds, each with its own services and a different
API model. For example, VMware vCloud Director has no volume management like
OpenStack and AWS have, so jacket will offer a fake volume management for this
kind of cloud.
2. Image management: One image can only run in one cloud; jacket needs to
consider how to solve this problem.
3. Flavor management: Different clouds have different flavors which users
cannot modify. Jacket will face this problem, but there is no such
problem in tricircle.
4. Legacy resource adoption: Because of the different API models, this will be
a huge challenge for jacket.

I think it may be a good solution for jacket to unify the API model across
different clouds, and then use tricircle to offer management of VMs at
large scale.

Best Regards,
Kevin (Sen Zhang)


At 2016-03-16 19:51:33, "gordon chung" > 
wrote:

>
>On 16/03/2016 4:03 AM, zs wrote:
>> Hi all,
>>
>> There is a new project "jacket" to manage multiply clouds. The jacket
>> wiki is: https://wiki.openstack.org/wiki/Jacket
>>   Please review it and give your comments. Thanks.
>>
>> Best Regards,
>>
>> Kevin (Sen Zhang)
>>
>
>i don't know exact details of either project, but i suggest you
>collaborate with tricircle project[1] because it seems you are
>addressing the same user story (and in a very similar fashion). not sure
>if it's a user story for OpenStack itself, but no point duplicating efforts.
>
>[1] https://wiki.openstack.org/wiki/Tricircle
>
>cheers,
>
>--
>gord
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][security][tc] Next steps for Kolla vulnerability:managed tag

2016-03-18 Thread Steven Dake (stdake)
The Technical Committee,

Please see this review which offers threat analysis as an option for the 
vulnerability:managed tag:
https://review.openstack.org/#/c/294212/

Kolla coresec team:

In the IRC meeting on security, I requested an audit, and the security team
came back with a threat analysis suggestion, hence the above review. What the
security team needs from us is architecture diagrams. Please read the full IRC
log here if you're interested in Kolla security:

http://eavesdrop.openstack.org/meetings/security/2016/security.2016-03-17-17.00.log.html#l-183

We may run this threat analysis during the Austin ODS sessions, or via a web
conference, to be decided. I'll start working on architecture diagrams that
will help facilitate this discussion and share them via a shared diagramming
tool. The threat analysis process is described here:


https://openstack-security.github.io/collaboration/2016/01/16/threat-analysis.html

Regards,
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mitaka RC1 available

2016-03-18 Thread John Garbutt
On 16 March 2016 at 18:17, Thierry Carrez  wrote:
> Hello everyone,
>
> Nova is next to produce a release candidate for the end of the Mitaka cycle!
> Congratulations to all the Nova devs. You can find the RC1 source code
> tarball at:
>
> https://tarballs.openstack.org/nova/nova-13.0.0.0rc1.tar.gz
>
> Unless release-critical issues are found that warrant a release candidate
> respin, this RC1 will be formally released as the final Mitaka Nova release
> on April 7th. You are therefore strongly encouraged to test and validate
> this tarball !

So I am expecting to cut RC2 to include translations that happen between now
and release week.

To help with this, we have now hit Hard String Freeze on the stable/mitaka branch.
https://wiki.openstack.org/wiki/StringFreeze#Hard_String_Freeze

> Alternatively, you can directly test the stable/mitaka release branch at:
>
> http://git.openstack.org/cgit/openstack/nova/log/?h=stable/mitaka
>
> If you find an issue that could be considered release-critical, please file
> it at:
>
> https://bugs.launchpad.net/nova/+filebug
>
> and tag it *mitaka-rc-potential* to bring it to the Nova release crew's
> attention.
>
> Note that the "master" branch of Nova is now open for Newton development,
> and feature freeze restrictions no longer apply there !

Although we have a few upgrade-related DB changes, including:
https://review.openstack.org/#/c/289449/4
More on that from dansmith in a sec.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Reminder to move implemented nova specs from mitaka

2016-03-18 Thread Matt Riedemann



On 3/18/2016 5:46 AM, Markus Zoeller wrote:

Matt Riedemann  wrote on 03/16/2016 09:49:06
PM:


From: Matt Riedemann 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 03/16/2016 09:50 PM
Subject: [openstack-dev] [nova] Reminder to move implemented nova
specs from mitaka

Specs are proposed to the 'approved' subdirectory and when they are
completely implemented in launchpad (the blueprint status is
'Implemented'), we should move the spec from the 'approved' subdirectory



to the 'implemented' subdirectory in the nova-specs repo.

For example:

https://review.openstack.org/#/c/248142/

These are the mitaka series blueprints from launchpad:

https://blueprints.launchpad.net/nova/mitaka

If anyone is really daring they could go through and move all of the
implemented ones in a single change.
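Mechanically, moving a spec is a single rename in a nova-specs checkout. A self-contained sketch follows; the directory layout mirrors the convention described above, but the file name is made up, and the throwaway repo exists only so the sketch is runnable on its own:

```shell
# Move a completed mitaka spec from 'approved' to 'implemented' and commit.
# In practice you would run just the 'git mv' and commit from a real
# nova-specs checkout; the setup below builds a disposable repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
mkdir -p specs/mitaka/approved specs/mitaka/implemented
echo "example" > specs/mitaka/approved/example-feature.rst
git add -A
git -c user.name=ci -c user.email=ci@example.com commit -qm "add example spec"

# The actual move: git mv keeps the rename visible in history
git mv specs/mitaka/approved/example-feature.rst specs/mitaka/implemented/
git -c user.name=ci -c user.email=ci@example.com commit -qm "Move implemented mitaka spec"
```

Batching all implemented specs into one change is then just a loop over the files named in launchpad's implemented list.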

--

Thanks,

Matt Riedemann



Is there a best practice for handling a partially implemented bp (with a
spec file)? For example, [1] needs additional effort during Newton to
finish it.

References:
[1] https://blueprints.launchpad.net/nova/+spec/centralize-config-options

Regards, Markus Zoeller (markus_z)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



John was just telling me about this yesterday. I guess one thing we can 
do is add a "(partial)" suffix to the title and mark the blueprint 
complete for mitaka, and then the idea is to create a new blueprint for 
newton for continuing the work, e.g. centralize-config-options-newton.


The idea being we show that something was completed in mitaka when 
you're looking at blueprints in launchpad for mitaka.


I'm generally OK with that approach; the thing I don't really like is 
when we have to re-propose specs and/or when there are dependent 
blueprints in launchpad. Creating the new blueprint means you have to 
update the link in the spec when re-proposing it, and you need to update 
all of the dependent blueprints in launchpad to point at the new newton 
blueprint. Maybe it's not a big deal; I can see benefits to either 
approach. Personally I don't like to consider a blueprint complete until 
it's actually complete, as has been the case with some of the cells v2 
blueprints we've re-proposed for newton.


With long cleanup efforts like objects and config options though, I can 
see how having release-specific blueprints is good.


Markus, so to answer your original question, :), I'd probably mark the 
existing bp as complete for mitaka and create a new 
centralize-config-options-newton blueprint.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2016-03-18 Thread Bogdan Dobrelya
On 03/17/2016 12:17 AM, Andrew Beekhof wrote:
> 
> 
> On Tue, Feb 16, 2016 at 2:58 AM, Bogdan Dobrelya wrote:
> 
> Hello!
> A quick status update inline:
> 
> 
> [snip] 
> 
> 
> So, what's next?
> 
> - I'm open for merging both [5], [6] of the existing OCF RA solutions,
> as it was proposed by Andrew Beekhof. Let's make it happen.
> 
> 
> Great :-)
> Oyvind (CC'd) is the relevant contact from our side, should he talk to
> you or someone else?

Yes, perhaps we should follow up and define a plan for merging the
agents.

>  
> 
> 
> - It would be nice to add a Travis CI-based gate for the upstream
> rabbitmq-server's HA OCF RA. For now, it relies on Fuel CI gates and
> manual testing with atlas boxes.
> 
> - Please also consider Travis or similar CI for the resource-agents'
> rabbit-cluster OCF RA as well.
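[Editorial aside: a Travis gate for an OCF RA can start very small. The sketch below is entirely hypothetical — the package names, the resource name, and the ocf-tester invocation are assumptions, not an existing config in either repo:]

```yaml
# .travis.yml (hypothetical) — smoke-test the HA OCF RA with ocf-tester
language: bash
sudo: required
dist: trusty
install:
  - sudo apt-get update
  - sudo apt-get install -y resource-agents cluster-glue rabbitmq-server
script:
  # ocf-tester drives the agent through its start/monitor/stop actions
  - sudo ocf-tester -n p_rabbitmq-server scripts/rabbitmq-server-ha.ocf
```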
> 
> [1] https://github.com/bogdando/rabbitmq-cluster-ocf-vagrant
> [2] https://github.com/bogdando/packer-atlas-example
> [3] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf/
> [4] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf-wily/
> [5]
> 
> https://github.com/rabbitmq/rabbitmq-server/blob/master/scripts/rabbitmq-server-ha.ocf
> [6]
> 
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
> 
> >
> > I'm also planning to refer this official RabbitMQ cluster setup guide in
> > the OpenStack HA guide as well [2].
> 
> Done, see [7]
> 
> [7] http://docs.openstack.org/ha-guide/controller-ha-rabbitmq.html
> 
> >
> > PS. Original rabbitmq-users mail thread is here [3].
> > [openstack-operators] cross posted as well.
> >
> > [0] http://www.rabbitmq.com/pacemaker.html
> > [1] https://atlas.hashicorp.com/bogdando/boxes/rabbitmq-cluster-ocf
> > [2] https://bugs.launchpad.net/openstack-manuals/+bug/1497528
> > [3] https://groups.google.com/forum/#!topic/rabbitmq-users/BnoIQJb34Ao
> >
> 
> 
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Upgrading from Kilo to Liberty, missing database entries

2016-03-18 Thread Nolan Brubaker

On Mar 17, 2016, at 12:17 PM, Ihar Hrachyshka  wrote:
> 
> Yeah. To trigger the bug, you don’t need to upgrade. Just create a 
> network/port without the extension enabled; then enable the extension; then 
> try to start an instance using the network/port.

Thanks, I appreciate the clarification. This is indeed what happened in 
openstack-ansible: Kilo environments created networks, then on Liberty upgrades 
the extension was enabled.
> 
> I posted a patch that should solve it: https://review.openstack.org/294132

Thank you. I’ve added some patches to work around the issue on 
openstack-ansible’s side, using the same topic/launchpad bug.
> 
> I believe the patch should be backported back to the first release that 
> introduced the extension (Kilo). Also, with the patch, alembic script is 
> redundant and only adds to downtime during upgrade, so we could probably kill 
> it in stable branches in addition to this patch.

I would agree with this, but admittedly don’t do much neutron development, so 
I’ll leave that up to people more involved to decide.

> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sahara Job Binaries Storage

2016-03-18 Thread Jerico Revote
Hi Trevor,

Thanks for the explanation, 
looking forward to that on/off option and/or internal db deprecation.

Regards,

Jerico



> On 18 Mar 2016, at 12:55 AM, Trevor McKay  wrote:
> 
> Hi Jerico,
> 
>  Internal db storage for job binaries was added at
> the start of EDP as an alternative for sites that do
> not have swift running. Since then, we've also added
> integration with manila so that job binaries can be
> stored in manila shares.
> 
>  You are correct, storing lots of binaries in the
> sahara db could make the database grow very large.
> Swift or manila should be used for production, internal
> storage is a good option for development/test.
> 
>  There is currently no way to disable internal storage.
> We can take a look at adding such an option -- in fact
> we have talked informally about the possibility of
> deprecating internal db storage since swift and manila
> are both mature at this point. We should discuss that
> at the upcoming summit.
> 
> Best,
> 
> Trevor
> 
> On Thu, 2016-03-17 at 10:27 +1100, Jerico Revote wrote:
>> Hello,
>> 
>> 
>> When deploying Sahara, the Sahara docs suggest
>> increasing max_allowed_packet to 256MB,
>> for internal database storing of job binaries.
>> There could be hundreds of job binaries to be uploaded/created into
>> Sahara,
>> which would then cause the database to grow as well.
>> Has anyone using Sahara encountered database sizing issues with
>> internal db storage?
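[Editorial aside: the setting referred to above lives in the MySQL server configuration; a sketch, with the conventional file path and section assumed:]

```ini
# /etc/mysql/my.cnf — raise the packet limit so large job binaries
# stored as BLOBs in the internal db do not fail on INSERT
[mysqld]
max_allowed_packet = 256M
```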
>> 
>> 
>> It looks like swift is the more logical place for storing job
>> binaries 
>> (in our case we have a global swift cluster), and this is also
>> available to the user.
>> Is there a way to only enable the swift way for storing job binaries?
>> 
>> Thanks,
>> 
>> 
>> Jerico
>> 
>> 
>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-18 Thread Joshua Harlow

On 03/16/2016 05:42 AM, Sean Dague wrote:

On 03/16/2016 08:27 AM, Amrith Kumar wrote:

Nikhil, thank you for the very timely posting. This is a topic that has
been discussed quite a bit recently within the Trove team. I've read the
document you reference as [1] and I have not been part of earlier
conversations on this subject so I may be missing some context here.

I feel that the conversation (in [1], in the email thread) has gone to a
discussion of implementation details (library vs. service, quota
enforcement engine, interface, ...) when I think there is still some
ambiguity in my mind about the requirements. What is it that this
capability will provide and what is the contract implied when a service
adopts this model.

For example, consider this case that we see in Trove. In response to a
user request to create a cluster of databases, Trove must provision
storage (cinder), compute (nova), networks (let's say neutron), and so
on. As stated by Boris in his email, it would be ideal if Trove had a
confirmation from all projects that there was quota available for the
requests that would be made before the requests actually are made. This
implies therefore that participating projects (cinder, nova, neutron,
...) would have to support some reservations scheme and subsequently
honor requests based on a reservation. So, I think there's more to this
than just another library or project, there's an implication for
projects that wish to participate in this scheme. Or am I wrong in this
understanding?
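[Editorial aside: the contract described above — reserve capacity in every participating service up front, then commit everywhere or roll back everywhere — is essentially a two-phase reservation. A toy sketch of that shape follows; every class and method name is hypothetical, not an existing OpenStack API:]

```python
# Two-phase reservation sketch: reserve in every service, then commit
# everywhere, or roll everything back on the first failure.

class QuotaExceeded(Exception):
    pass

class FakeService:
    """Stand-in for one backing service (nova, cinder, ...) with a quota pool."""
    def __init__(self, name, limit):
        self.name, self.limit, self.used, self.reserved = name, limit, 0, {}

    def reserve(self, rid, amount):
        # Fail fast if committed + already-reserved + requested exceeds the limit
        if self.used + sum(self.reserved.values()) + amount > self.limit:
            raise QuotaExceeded(self.name)
        self.reserved[rid] = amount

    def commit(self, rid):
        self.used += self.reserved.pop(rid)

    def rollback(self, rid):
        self.reserved.pop(rid, None)

def provision_cluster(services, rid, request):
    """Reserve in each service up front; commit all, or roll back all."""
    reserved_in = []
    try:
        for name, amount in request.items():
            services[name].reserve(rid, amount)
            reserved_in.append(name)
    except QuotaExceeded:
        for name in reserved_in:
            services[name].rollback(rid)
        return False
    for name in reserved_in:
        services[name].commit(rid)
    return True

services = {"compute": FakeService("compute", 10), "volume": FakeService("volume", 2)}
print(provision_cluster(services, "req-1", {"compute": 4, "volume": 1}))  # True
print(provision_cluster(services, "req-2", {"compute": 4, "volume": 2}))  # False
```

The point of the sketch is the contract, not the code: every participating project would have to expose reserve/commit/rollback semantics for this to work.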


I think you have to wind it back further. While Trove wants to get a
consistent lock on quotas in all the projects below it, any single one
of those is massively racy on its internal quota.

It's totally possible to have nova believe it has enough cpu, memory,
disk, security_groups, floating_ips, instances available for your user,
fail on a reschedule, and end up leaking off chunks of this, and
eventually fail you. So before asking the question about "Can Trove get
a unified quota answer" we have to solve "can the underlying projects
guarantee consistent quota answers".

There is a giant pile of bugs in Nova about these races, has been
forever, until we solve this in the lower level projects there is no
hope of solving the Trove use case.
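[Editorial aside: the races described above are classic check-then-act gaps — the quota check and the consumption are separate steps, so two concurrent requests can both pass the check. A minimal in-memory illustration follows, with the fix of doing check and update atomically, as a single conditional UPDATE would at the database layer; all names are invented:]

```python
import threading

class NaiveQuota:
    """Separate check and update: two concurrent requests can both
    pass the check and together overshoot the limit."""
    def __init__(self, limit):
        self.limit, self.used = limit, 0

    def consume(self, n):
        if self.used + n <= self.limit:   # check ...
            self.used += n                # ... act: another request can slip in between
            return True
        return False

class AtomicQuota:
    """Check and update as one atomic step, analogous to a single
    'UPDATE quotas SET used = used + n WHERE used + n <= limit'."""
    def __init__(self, limit):
        self.limit, self.used = limit, 0
        self._lock = threading.Lock()

    def consume(self, n):
        with self._lock:
            if self.used + n <= self.limit:
                self.used += n
                return True
            return False

q = AtomicQuota(limit=10)
print([q.consume(3) for _ in range(4)])  # [True, True, True, False]
```

Rescheduling makes it worse than the sketch suggests: even the "act" step can partially fail and leak reserved resources, which is why the reservation/commit lifecycle matters too.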


+1 I think/thought the goal was more to solve all the above bugs 
first, by doing/thinking something a little differently, and figuring out 
how to make this quota service (or library) a reality (finally!!)


Then sure, of course, we can work on those other problems as well 
(obviously we should try to design an initial solution that does not 
make solving those kinds of problems impossible).


-josh



-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Chris Dent

On Fri, 18 Mar 2016, Andrew Laski wrote:


I spent a little bit of time investigating the possibility of a chrome
extension to turn off the keyboard shortcuts and search mechanism a
little while ago. I eventually gave up but what I learned is that those
changes are apparently related to the inline edit ability that was
added. The edit window only loads in what is currently visible so
regular browser search would not work.


You can avoid much of the weirdness by going to preferences and
switching to a unified instead of side-by-side diff. This also turns
off the editing functions as far as I can tell. Thus removing a lot
of magical things that happen.

You can still get to the side-by-side view by pressing some buttons,
if you need to for some reason.


--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-18 Thread Jim Rollenhagen
On Fri, Mar 18, 2016 at 08:16:00AM -0700, John Dickinson wrote:


> 3) A dashboard generated from gerrit and git data. 
> http://not.mn/swift/swift_community_dashboard.html The "Community Starred 
> Patches" is a list of the top 20 commonly starred patches by the Swift 
> community. Basically, for every person who has gerrit activity around Swift, 
> I pull their stars. From that I can see which patches are more commonly 
> starred. I also weight each person's stars according to their level of 
> activity in the project. This gives a very good idea of what the community as 
> a whole finds important.
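[Editorial aside: the aggregation described above — each person's stars, weighted by their activity — can be sketched in a few lines. The weighting scheme here is a guess at the idea, not the dashboard's actual formula:]

```python
from collections import Counter

def community_starred(stars_by_person, activity, top_n=20):
    """Rank patches by activity-weighted star counts.

    stars_by_person: {person: set of starred change numbers}
    activity: {person: e.g. review count, used as that person's weight}
    """
    scores = Counter()
    for person, starred in stars_by_person.items():
        weight = activity.get(person, 0)
        for change in starred:
            scores[change] += weight
    return [change for change, _ in scores.most_common(top_n)]

stars = {"alice": {101, 102}, "bob": {102}, "carol": {103}}
activity = {"alice": 5, "bob": 1, "carol": 10}
print(community_starred(stars, activity))  # [103, 102, 101]
```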

I've been thinking of making something similar for ironic - is the
source for this available somewhere? :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] report from Brno code sprint, Mar 14-16

2016-03-18 Thread Ihar Hrachyshka

Hi all,

Just giving my fellows an update from the event where we gathered our  
upgrades subteam to plan for Newton and do some coding.


==

We had folks from multiple companies participating in the event (Red Hat,  
SuSE, Intel, IBM). We also had some folks joining us online. Thanks  
everyone who gave us a hand at these days!


For the most part, we were working on the plan to adopt versioned objects  
for neutron db codebase. Just in case someone is not aware about the end  
goal for the effort: we want to eventually get to the point where you can  
upgrade your neutron-server processes between major releases without  
shutting all of them down to run database migration scripts. [There are  
other applications for objects, but that’s currently out of immediate scope  
for the N effort.]


Long story short, the current plan of the team for the near future is:

- N: provide objects for all major neutron resources;
- N: get most if not all the db code in neutron repo switched to using  
objects;
- N: start using objects to handle database live data migration (aka  
‘contract’ scripts);

- start of O: forbid contract alembic scripts;
- O and beyond: introduce gating for HA neutron-servers; complete objects  
adoption; look into utilizing objects for plugin API; API version pinning;  
...


If all goes well, this plan should allow ops to upgrade Newton  
neutron-server to O without API endpoint downtime.
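[Editorial aside for readers who haven't met versioned objects: the core trick is that a newer server can downgrade an object's wire form so a not-yet-upgraded peer still understands it. A deliberately tiny stand-in follows — this is not the real oslo.versionedobjects API, just the shape of the mechanism:]

```python
class Network:
    """Toy versioned object: 1.1 added an 'mtu' field; the serialized
    form is downgraded for peers that only know 1.0."""
    VERSION = "1.1"

    def __init__(self, name, mtu=1500):
        self.name, self.mtu = name, mtu

    def obj_to_primitive(self, target_version="1.1"):
        data = {"name": self.name, "mtu": self.mtu}
        if target_version == "1.0":
            # Peer predates the 'mtu' field: drop it so it can still deserialize
            data.pop("mtu")
        return {"version": target_version, "data": data}

net = Network("private", mtu=9000)
print(net.obj_to_primitive("1.0"))  # {'version': '1.0', 'data': {'name': 'private'}}
```

This is what lets mixed-version neutron-server fleets coexist during a rolling upgrade, instead of requiring all servers down for a migration.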


==

We also discussed current gating for upgrades. The plan for that would be  
getting the current l3 legacy job voting in the next weeks; adding a dvr  
flavor into the check queue (non-voting); and then making the latter voting  
(probably while removing the former from the check gate).


There are still some things to improve in multinode grenade that we have.

One thing is that we should look into running dvr tests for dvr flavour of  
the job. Though there are some dvr tests in tempest, they are not tagged as  
smoke, and hence are not executed in the job. Also, as per Assaf, those dvr  
tests will move out of tempest in the middle term, leaving just the tests  
inside the neutron tree. So we should come up with some way to run dvr tests that are  
maintained inside neutron tree. One way is utilizing a tempest plugin  
(there is a patch for review for that).


Another thing we may want to consider is moving some more services into  
‘old’ node of the multinode setup. Specifically, dhcp and l3 agent (even  
for l3 legacy job). There are questions though whether we would then  
effectively hide some potential places to break (due to  
external-to-internal routing not leaving the node, or no tunneling needed  
between l3/dhcp and instances). So it may require introducing a separate  
‘networking’ node in the multinode setup to host l3/dhcp agents there.


==

We realize that subteam plans are not well known to other community  
members. We will work on raising awareness about use cases and features we  
target for next releases. That will include posting proper ‘ops oriented’  
RFEs, working on documentation (devref as well as networking-guide), etc.


One thing that we discussed on the event is updating networking-guide with  
detailed description of the upgrade process for neutron. We already have  
some pieces scattered there [f.e. we have some coverage for  
neutron-db-manage tool] but it’s nothing complete or deep enough. That’s  
something we will look into improving at the start of the N cycle.


==

As for actual coding, we focused on objects. We track the effort using  
‘ovo’ topic in gerrit:


https://review.openstack.org/#/q/topic:ovo

We landed some patches already, and we will land a lot more in the next  
months. Reviews from outside the subteam are highly appreciated. If  
anything, you may learn something new. :)


==

It was the second time we organized a highly focused sprint for Neutron  
(the previous one was on QoS during Liberty cycle). I hope we as a  
community start learning how to manage those events more effectively. I  
would be glad to talk about how these events go and what obstacles and  
mistakes we hit along the way, if anyone cares to hear. :) For example,  
the timing that we chose for this event this time was really unfortunate,  
and that slowed down some progress. Still, we are in a better shape coding  
wise as well as being on the same page for the upcoming work for the next 6  
months.


==

All in all, it was a great experience, and I look forward to continue the  
tradition of focused events in neutron community.


If you attended the event either IRL or virtually, and you have anything to  
point out that would help us to get it better next time, please don’t  
hesitate to comment. Feedback is very appreciated.


I also encourage other participants to comment on the report if I missed or  
mixed or messed something.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Fuel] fuel-web/docs to fuel-docs migration

2016-03-18 Thread Aleksandra Fedorova
Hi, everyone,

we've merged migration changes.

Please send all new documentation patches to openstack/fuel-docs
repository. Development documentation is available there in devdocs/
subfolder.

On Tue, Mar 15, 2016 at 2:41 PM, Aleksandra Fedorova
 wrote:
> Hi, everyone,
>
> this week we plan to migrate the content of fuel-web/docs to fuel-docs
> repo to setup one place for all Fuel documentation.
>
> See related blueprint
> https://blueprints.launchpad.net/fuel/+spec/fuel-docs-migration
>
> There are two patches on review:
>
> * https://review.openstack.org/#/c/292403/ - sync fuel-web/docs to fuel-docs
> * https://review.openstack.org/#/c/292446/ - remove fuel-web/docs content
>
> There is only one remaining non-merged patch in fuel-web/docs
>
>   
> https://review.openstack.org/#/q/project:openstack/fuel-web+file:docs+status:open
>
> I suggest that we merge it right away, then declare fuel-web/docs
> freeze and perform the migration.
>
> All new changes to dev docs should then be sent to fuel-docs
> repository, in devdocs subfolder.
>
> --
> Aleksandra Fedorova
> Fuel CI Engineer
> bookwar



-- 
Aleksandra Fedorova
CI Team Lead
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Anita Kuno
On 03/18/2016 06:03 PM, Kevin L. Mitchell wrote:
> On Fri, 2016-03-18 at 20:58 +, Jeremy Stanley wrote:
>> Yes, we had a session on it several summits ago, a group of
>> contributors said they were going to work on developing it, pushed
>> up a skeleton repo, and then we never heard back from them after
>> that. Unfortunate.
> 
> Yeah, unfortunately, we weren't able to get priority for the project,
> and I'm afraid it's probably not going to go anywhere now :/
> 
Well there is no legacy code standing in the way, the only commit to the
repo is a .gitreview file. Anyone can contribute.

Thanks Kevin,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Neutron] Mitaka RC1 available

2016-03-18 Thread Brian Haley

On 03/17/2016 09:13 PM, Armando M. wrote:



On 18 March 2016 at 00:16, Jeremy Stanley wrote:

On 2016-03-17 09:44:59 +0530 (+0530), Armando M. wrote:
> Unfortunately, Neutron is also going to need an RC2 due to
> upstream CI issues triggered by infra change [1] that merged right
> about the same time RC1 was being cut.

Do you have any details on the impact that caused for Neutron? I
don't think I heard about it. Was there another ML thread I missed?


No, I didn't think a discussion was necessary. We did file bugs though:

https://bugs.launchpad.net/neutron/+bug/1558289
https://bugs.launchpad.net/neutron/+bug/1558397
https://bugs.launchpad.net/neutron/+bug/1558355


And the stable/liberty change for those still seeing netcat errors there:

https://review.openstack.org/#/c/294591/

I'll get it in stable/kilo too.

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/trove failed

2016-03-18 Thread Tony Breeds
Hi Amrith,
Thanks for keeping an eye on the periodic jobs :)

On Fri, Mar 18, 2016 at 01:01:10PM +, Amrith Kumar wrote:

> I just ran stable/liberty through these tests on a sandbox and they passed.
> I'm not sure what caused this failure but if it persists, I'll take another
> look.

Good plan.  After a very quick look I think it's a race rather than a systemic
issue.

So if it happens in today's run we can start investigating.

> Some changes are coming soon on master and I'll backport them to mitaka and
> liberty. They'll make debugging these tests a whole lot easier.

Cool.  As long as they're appropriate for stable :)


Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-18 Thread Amrith Kumar
Jim,

Maybe we could collaborate and build something that'd work for multiple 
projects? Happy to help with this. It is clearly a problem that some projects 
seem to be facing and there aren't any good solutions there.

Thanks,

-amrith

> -Original Message-
> From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Friday, March 18, 2016 6:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [all][infra][ptls] tagging reviews, making
> tags searchable
> 
> On Fri, Mar 18, 2016 at 08:16:00AM -0700, John Dickinson wrote:
> 
> 
> > 3) A dashboard generated from gerrit and git data.
> http://not.mn/swift/swift_community_dashboard.html The "Community Starred
> Patches" is a list of the top 20 commonly starred patches by the Swift
> community. Basically, for every person who has gerrit activity around
> Swift, I pull their stars. From that I can see which patches are more
> commonly starred. I also weight each person's stars according to their
> level of activity in the project. This gives a very good idea of what the
> community as a whole finds important.
> 
> I've been thinking of making something similar for ironic - is the source
> for this available somewhere? :)
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-18 Thread Dave Walker
On 18 March 2016 at 16:21, Andrey Kurilin  wrote:

> > If you haven't tried gertty yet, I highly recommend it.
>
> I have an open tab with it and I should try it, but I wonder why we need
> gerrit if it is such a bad thing for a lot of contributors? Is there an
> alternative to it?
> Btw, I'm ready to have local instance of gerrit for myself. But I didn't
> find a way to configure it to use upstream API.
>
> > Someone should make a mobile-native app (android/ios) for gerrit now
> since the new interface is so bad. Hopefully someone somewhere can come up
> with the time for it. (I haven't had the time myself).
>
> It looks like there is an app for android -
> https://play.google.com/store/apps/details?id=com.jbirdvegas.mgerrit ,
> but, unfortunately, I have a phone from Apple and I didn't find anything
> for it.
>
>
I found this to be pretty effective, with a friendly upstream. Last summer
I pushed a change to their upstream to have the OpenStack gerrit included in
the default app, and it was warmly received. I used to use this app quite a
bit when commuting. Would recommend.

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev