[openstack-dev] [k8s] [manila] Manila CSI driver plan

2018-11-21 Thread Tom Barron

[Robert Vasek and Red Hat colleagues working in this space: please
correct any misunderstandings or omissions below]

At the Berlin Summit SIG-K8s Working session [1] [2], we agreed to
follow up with a note to the larger community summarizing our plan
to enable Manila as an RWX storage provider for k8s and other container
orchestrators.  Here it is :)

Today there are kubernetes external storage provisioners [3] [4] for
Manila with NFS protocol back ends, as well as a way to use an
external provisioner on the master host in combination with a CSI
CephFS node plugin [5].

We propose to target an end-to-end, multi-protocol Manila CSI plugin
-- aiming at CSI 1.0, which should get support in container
orchestrators early in 2019.

Converging on CSI will:
* provide support for multiple Container Orchestrators, not just k8s
* de-couple storage plugin development from the k8s life-cycle going forward
* unify development efforts and distributions and set clear expectations
 for operators and deployers

Since manila needs to support multiple file system protocols such as
CephFS native and NFS, we propose that the work align with the
multiplexing CSI architecture outlined here [6].  High-level work plan:

* Write master host multiplexing controller and node proxy plugins.

* Use CephFS node-only plugin from [5]

* Write NFS node-only plugin

NFS and CephFS are immediate priorities - other file system protocols
supported by manila can be added over time if there is interest.
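
To make the end state concrete, here is a sketch of what consuming such
a plugin might look like from the k8s side once it exists.  The
provisioner name and the share-protocol parameter below are hypothetical
placeholders rather than a published interface; the one real point is
the ReadWriteMany (RWX) access mode.

# Hypothetical example only -- driver name and parameters are not final.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manila-nfs
provisioner: manila.csi.openstack.org    # placeholder CSI driver name
parameters:
  shareProtocol: NFS                     # placeholder parameter
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]         # RWX -- the reason to use manila
  storageClassName: manila-nfs
  resources:
    requests:
      storage: 1Gi
EOF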

-- Tom Barron (irc: tbarron)

[1] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22752/sig-k8s-working-session
[2] https://etherpad.openstack.org/p/sig-k8s-2018-berlin-summit
[3] https://github.com/kubernetes-incubator/external-storage
[4] https://github.com/kubernetes/cloud-provider-openstack
[5] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21997/dynamic-storage-provisioning-of-manilacephfs-shares-on-kubernetes
[6] 
https://github.com/container-storage-interface/spec/issues/263#issuecomment-411471611

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] no meeting this week

2018-11-21 Thread Tom Barron
Just a reminder that there will be no manila community meeting this 
week.


Next manila meeting will be Thursday, 29 November, at 1500 UTC in 
#openstack-meeting-alt on freenode.


Agenda here [1] - Feel free to add ...

-- Tom Barron (tbarron)

[1] https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] manila community meeting this week is *on*

2018-11-15 Thread Tom Barron

On 15/11/18 02:29 +0100, Erik McCormick wrote:

Are you gathering somewhere in the building for in-person discussion?


We have a Forum session at 9:50 where we'll cover similar material:

https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22830/setting-the-compass-for-manila-rwx-cloud-storage

The weekly meeting will be on IRC, as usual, with meeting minutes 
logged there and no concurrent sidebar conversations.


It'd be great to meet you in person here, so if you can't get to the 
Forum session please ping me at t...@dyncloud.net or tbarron on irc and 
we can chat a bit.


Cheers,

-- Tom (tbarron)



On Wed, Nov 14, 2018, 10:11 PM Tom Barron 
As we discussed last week, we *will* have our normal weekly manila
community meeting this week, at the regular time and place

   Thursday, 15 November, 1500 UTC, #openstack-meeting-alt on
freenode

Some of us are at Summit but we need to continue to discuss/review
outstanding specs, links for which can be found in our next meeting
agenda [1].

It was great meeting new folks at the project onboarding and project
update sessions at Summit this week -- please feel free to join the
IRC meeting tomorrow!

Cheers,

-- Tom Barron (tbarron)

[1]
https://wiki.openstack.org/w/index.php?title=Manila/Meetings=edit=2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] manila community meeting this week is *on*

2018-11-14 Thread Tom Barron
As we discussed last week, we *will* have our normal weekly manila 
community meeting this week, at the regular time and place


  Thursday, 15 November, 1500 UTC, #openstack-meeting-alt on 
freenode


Some of us are at Summit but we need to continue to discuss/review 
outstanding specs, links for which can be found in our next meeting

agenda [1].

It was great meeting new folks at the project onboarding and project 
update sessions at Summit this week -- please feel free to join the 
IRC meeting tomorrow!


Cheers,

-- Tom Barron (tbarron)

[1] 
https://wiki.openstack.org/w/index.php?title=Manila/Meetings=edit=2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] [contribute]

2018-11-09 Thread Tom Barron

On 08/11/18 11:07 -0300, Sofia Enriquez wrote:

Hi Leni, welcome!

1) Devstack [1] plays a *main* role in the development workflow.
It's an easy way to get a full environment to work on Manila; we use it
every day. I recommend you run it in a VM.
You can find many tutorials about how to use Devstack; I'll just leave you
one [2]


Nice blog/guide, Sofia!  I'll just add [4] as a follow-up for anyone 
who specifically wants to install devstack with manila and a CephFS 
with NFS back end.
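
For anyone who first just wants manila running in a plain devstack, a
minimal local.conf along these lines should be enough -- a sketch only,
assuming manila's in-tree devstack plugin, with back-end selection left
at the plugin's defaults (the CephFS-with-NFS setup in [4] needs extra
settings not shown here):

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Pull manila (API, scheduler, share services) into the devstack run.
enable_plugin manila https://git.openstack.org/openstack/manila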




2) I can't find *low-hanging-fruit* bugs in Manila. However,
good first bugs are usually tagged as *low-hanging-fruit* -- for example, cinder's [3]


And Goutham followed up with some manila low-hanging-fruit too.  
Thanks, Goutham!




Today at *15:00 UTC* is the weekly Manila team meeting on IRC (channel
#openstack-meeting-alt) on Freenode.


And as we may have mentioned, you can ask questions on IRC [5] [6] in
#openstack-manila any time.  Ask even if no one is responding right 
then; most of us have bouncers and will see your question and get back to you.


-- Tom Barron

[4] https://github.com/tombarron/vagrant-libvirt-devstack

[5] https://docs.openstack.org/contributors/common/irc.html

[6] https://docs.openstack.org/infra/manual/irc.html



Have fun!
Sofia
irc: enriquetaso
[1]
https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack
[2]
https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/
[3] https://bugs.launchpad.net/cinder/+bugs?field.tag=low-hanging-fruit

On Thu, Nov 8, 2018 at 7:41 AM Leni Kadali Mutungi 
wrote:


Hi Tom

Thanks for the warm welcome. I've gone through the material and I would
like to understand a few things:

1. What's the role of devstack in the development workflow?
2. Where can I find good-first-bugs? A bug that is simple to do
(relatively ;)) and allows me to practice what I've read up on in the
Developer's Guide. I looked through the manila bugs on Launchpad but I
didn't see anything marked easy or good-first-bug or its equivalent for
manila. I am a bit unfamiliar with Launchpad so that may have played a
role :).

Your guidance is appreciated.

On 10/19/18 5:55 PM, Tom Barron wrote:
> On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote:
>> Hi all.
>>
>> I've downloaded the manila project from GitHub as a zip file, unpacked
>> it and have run `git fetch --depth=1` and been progressively running
>> `git fetch --deepen=5` to get the commit history I need. For future
>> reference, would a shallow clone e.g. `git clone --depth=1` be enough to
>> start working on the project or should one have the full commit
>> history of the project?
>>
>> --
>> -- Kind regards,
>> Leni Kadali Mutungi
>
> Hi Leni,
>
> First I'd like to extend a warm welcome to you as a new manila project
> contributor!  We have some contributor/developer documentation [1] that
> you may find useful. If you find any gaps or misinformation, we will be
> happy to work with you to address these.  In addition to this email
> list, the #openstack-manila IRC channel on freenode is a good place to
> ask questions.  Many of us run irc bouncers so we'll see the question
> even if we're not looking right when it is asked.  Finally, we have a
> meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt --
> agendas are posted here [2].  Also, here is our work-plan for the
> current Stein development cycle [3].
>
> Now for your question about shallow clones.  I hope others who know more
> will chime in but here are my thoughts ...
>
> Although having the full commit history for the project is useful, it is
> certainly possible to get started with a shallow clone of the project.
> That said, I'm not sure if the space and download-time/bandwidth gains
> are going to be that significant because once you have the workspace you
> will want to run unit tests, pep8, etc. using tox as explained in the
> developer documentation mentioned earlier.   That will download virtual
> environments for manila's dependencies in your workspace (under .tox
> directory) that dwarf the space used for manila proper.
>
> $ git clone --depth=1 g...@github.com:openstack/manila.git shallow-manila
> Cloning into 'shallow-manila'...
> ...
> $ git clone g...@github.com:openstack/manila.git deep-manila
> Cloning into 'deep-manila'...
> ...
> $ du -sh shallow-manila deep-manila/
> 20M shallow-manila
> 35M deep-manila/
>
> But after we run tox inside shallow-manila and deep-manila we see:
>
> $ du -sh shallow-manila deep-manila/
> 589M shallow-manila
> 603M deep-manila/
>
> Similarly, you are likely to want to run devstack locally and that will
> clone the repositories for the other openstack components you need and
> the savings from shallow clones won't be that significant relative to
> the total needed.

Re: [openstack-dev] [manila][tc] Seeking feedback on the OpenStack cloud vision

2018-10-31 Thread Tom Barron

On 24/10/18 11:14 -0400, Zane Bitter wrote:

Greetings, Manila team!
As you may be aware, I've been working with other folks in the 
community on documenting a vision for OpenStack clouds (formerly known 
as the 'Technical Vision') - essentially to interpret the mission 
statement in long-form, in a way that we can use to actually help 
guide decisions. You can read the latest draft here: 
https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals 
that we want OpenStack as a whole to be able to meet - ideally each 
project would be contributing toward one or more of these design 
goals.


I think that, like Cinder, Manila would qualify as contributing to the 
'Basic Physical Data Center Management' goal, since it also allows 
users to access external storage providers through a standardised API.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, 
please reply to this thread to set it up. You are also welcome to 
bring up any questions in the TC IRC channel, #openstack-tc - there's 
more of us around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk 
to us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't 
have any specific feedback, that's cool but I'd like to request that 
at least the PTL leave a vote on the review. It's important to know 
whether we are actually developing a consensus in the community or 
just talking to ourselves :)


many thanks,
Zane.


Zane and I chatted on IRC and he is going to attend the manila 
community meeting tomorrow, November 1, at 1500 UTC in 
#openstack-meeting-alt to follow up and solicit feedback.  If you are 
unable to attend the meeting and have points to make or questions 
please follow up in this thread or in the review mentioned above.


Cheers,

-- Tom


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Next Manila meeting cancelled

2018-10-19 Thread Tom Barron
We have a number of manila cores and regular participants who cannot 
attend the regular Thursday Manila meeting this coming week so it is 
cancelled.


We will meet as normal the following Thursday, 1 November, at 1500 UTC 
on #openstack-meeting-alt [1].


Cheers,

-- Tom Barron (tbarron)

[1] https://wiki.openstack.org/wiki/Manila/Meetings


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] [contribute]

2018-10-19 Thread Tom Barron

On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote:

Hi all.

I've downloaded the manila project from GitHub as a zip 
file, unpacked it and have run `git fetch --depth=1` and 
been progressively running `git fetch --deepen=5` to get 
the commit history I need. For future reference, would a 
shallow clone e.g. `git clone --depth=1` be enough to start 
working on the project or should one have the full commit 
history of the project?


--
-- Kind regards,
Leni Kadali Mutungi


Hi Leni,

First I'd like to extend a warm welcome to you as a new manila project 
contributor!  We have some contributor/developer documentation [1] 
that you may find useful. If you find any gaps or misinformation, we 
will be happy to work with you to address these.  In addition to this 
email list, the #openstack-manila IRC channel on freenode is a good 
place to ask questions.  Many of us run irc bouncers so we'll see the 
question even if we're not looking right when it is asked.  Finally, 
we have a meeting most weeks on Thursdays at 1500UTC in 
#openstack-meeting-alt -- agendas are posted here [2].  Also, here is 
our work-plan for the current Stein development cycle [3].


Now for your question about shallow clones.  I hope others who know 
more will chime in but here are my thoughts ...


Although having the full commit history for the project is useful, it 
is certainly possible to get started with a shallow clone of the 
project.  That said, I'm not sure if the space and 
download-time/bandwidth gains are going to be that significant because 
once you have the workspace you will want to run unit tests, pep8, 
etc. using tox as explained in the developer documentation mentioned 
earlier.   That will download virtual environments for manila's 
dependencies in your workspace (under .tox directory) that dwarf the 
space used for manila proper.


$ git clone --depth=1 g...@github.com:openstack/manila.git shallow-manila
Cloning into 'shallow-manila'...
...
$ git clone g...@github.com:openstack/manila.git deep-manila
Cloning into 'deep-manila'...
...
$ du -sh shallow-manila deep-manila/
20M shallow-manila
35M deep-manila/

But after we run tox inside shallow-manila and deep-manila we see:

$ du -sh shallow-manila deep-manila/
589M shallow-manila
603M deep-manila/
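
For reference, the runs behind those numbers were just the usual tox
targets -- a sketch, assuming the environment names in manila's tox.ini:

$ cd shallow-manila
$ tox -e pep8     # style checks
$ tox -e py27     # unit tests under python 2.7
$ tox -e py35     # unit tests under python 3.5

Each target builds its own virtualenv under the .tox directory, which is
where nearly all of that extra disk usage comes from.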

Similarly, you are likely to want to run devstack locally and that 
will clone the repositories for the other openstack components you 
need and the savings from shallow clones won't be that significant 
relative to the total needed.


Happy developing!

-- Tom Barron (Manila PTL) irc: tbarron

[1] https://docs.openstack.org/manila/rocky/contributor/index.html
[2] https://wiki.openstack.org/wiki/Manila/Meetings
[3] https://wiki.openstack.org/wiki/Manila/SteinCycle

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-09 Thread Tom Barron

On 02/10/18 13:58 -0400, Tom Barron wrote:
Amit Oren has contributed high quality reviews in the last 
couple of cycles so I would like to nominate him for manila 
core.


Please respond with your +1 or -1 votes.  We'll hold voting 
open for 7 days.


Thanks,

-- Tom Barron (tbarron)



We've had lots of +1s for Amit Oren as manila core and no -1s so I've 
added him.


Welcome, Amit!

-- Tom


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Stein mid-cycle and bug smash dates

2018-10-08 Thread Tom Barron
In a recent weekly manila community meeting [1] we tentatively agreed 
to have a virtual mid-cycle Wednesday and Thursday 16-17 January 2019.
This would be the week after the Stein-2 milestone and a month before 
Manila Feature proposal Freeze.


Also, given the success of the China-based bug-smashes in the last few 
years, we are planning an Americas-timezone-friendly bug-smash as 
well, aiming for 13-14 March 2019, the week after the Stein-3 
milestone and Feature Freeze.


Please respond to the list if you have objections or counter-proposals 
to these proposed dates.


Thanks!

-- Tom Barron (tbarron)

[1] 
http://eavesdrop.openstack.org/meetings/manila/2018/manila.2018-09-27-15.00.log.txt



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch]

2018-10-05 Thread Tom Barron

On 05/10/18 13:06 -0700, Clark Boylan wrote:

On Fri, Oct 5, 2018, at 12:44 PM, Tom Barron wrote:

Clark, would you be so kind, at your convenience, as to remove the
manila driverfixes/ocata branch?

There are no open changes on the branch and `git log
origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline`
reveals no commits that we need to preserve.

Thanks much!



Done. The old head of that branch was d9c0f8fa4b15a595ed46950b6e5b5d1b4514a7e4.

Clark


Awesome, and thanks again!

-- Tom


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch]

2018-10-05 Thread Tom Barron
Clark, would you be so kind, at your convenience, as to remove the 
manila driverfixes/ocata branch?


There are no open changes on the branch and `git log 
origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` 
reveals no commits that we need to preserve.
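
For anyone who wants to double-check patch equivalence despite the new
hashes that cherry-picking creates, a check along these lines (a sketch
using git's patch-id matching on a symmetric range) should do it; empty
output means nothing unique would be lost:

$ git log --oneline --no-merges --right-only --cherry-pick \
      origin/stable/ocata...origin/driverfixes/ocata
# --cherry-pick drops commits whose patch-id matches a commit on the
# other side of the ... range; --right-only keeps only the
# driverfixes/ocata side.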


Thanks much!

-- Tom Barron (tbarron)

On 17/09/18 08:36 -0700, Clark Boylan wrote:

On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote:

Hello Cinder and Infra teams. Cinder needs some help from infra or some
pointers on how to proceed.

tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
fixes that no longer met the more restrictive phase II stable policy criteria.
Extended maintenance has changed that and we want to delete driverfixes/ocata
to make sure patches are going to the right place.

Background
--
Before the extended maintenance changes, the Cinder team found a lot of vendors
were maintaining their own forks to keep backported driver fixes that we were
not allowing upstream due to the stable policy being more restrictive for older
(or deleted) branches. We created the driverfixes/* branches as a central place
for these to go so distros would have one place to grab these fixes, if they
chose to do so.

This has worked great IMO, and we do occasionally still have things that need
to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of
fixes to driverfixes/ocata, but with the changes to stable policy with extended
maintenance, that is no longer needed.

Extended Maintenance Changes

With things being somewhat relaxed with the extended maintenance changes, we
are now able to backport bug fixes to stable/ocata that we couldn't before and
we don't have to worry as much about that branch being deleted.

I had gone through and identified all patches backported to driverfixes/ocata
but not stable/ocata and cherry-picked them over to get the two branches in
sync. The stable/ocata should now be identical or ahead of driverfixes/ocata
and we want to make sure nothing more gets accidentally merged to
driverfixes/ocata instead of the official stable branch.

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.


The first step is to make sure that all changes on the branch are in a non open 
state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Tom Barron
Amit Oren has contributed high quality reviews in the last couple of 
cycles so I would like to nominate him for manila core.


Please respond with your +1 or -1 votes.  We'll hold voting open for 7 
days.


Thanks,

-- Tom Barron (tbarron)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tom Barron

On 26/09/18 18:55 +, Tim Bell wrote:


Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose this 
for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila).


Tim,

First, I endorse this goal.

That said, lack of coverage of Manila in the OpenStack client was 
articulated as a need (by CERN and others) during the Vancouver Forum.


At the recent Manila PTG we set addressing this technical debt as a 
Stein cycle goal, as well as OpenStack SDK integration for Manila.


-- Tom Barron (tbarron)


In other cases, there are subsets of the functionality which require the native 
project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

   It's time to start thinking about community-wide goals for the T series.

   We use community-wide goals to achieve visible common changes, push for
   basic levels of consistency and user experience, and efficiently improve
   certain areas where technical debt payments have become too high -
   across all OpenStack projects. Community input is important to ensure
   that the TC makes good decisions about the goals. We need to consider
   the timing, cycle length, priority, and feasibility of the suggested
   goals.

   If you are interested in proposing a goal, please make sure that before
   the summit it is described in the tracking etherpad [1] and that you
   have started a mailing list thread on the openstack-dev list about the
   proposal so that everyone in the forum session [2] has an opportunity to
   consider the details.  The forum session is only one step in the
   selection process. See [3] for more details.

   Doug

   [1] https://etherpad.openstack.org/p/community-goals
   [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
   [3] https://governance.openstack.org/tc/goals/index.html

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Tom Barron

On 26/09/18 09:45 -0500, Ben Nemec wrote:



On 9/26/18 8:20 AM, Jeremy Stanley wrote:

On 2018-09-26 00:50:16 -0600 (-0600), Chris Friesen wrote:

At the PTG, it was suggested that each project should tag their bugs with
"$PROJECT-bug" to avoid tags being "leaked" across projects, or something
like that.

Could someone elaborate on why this was recommended?  It seems to me that
it'd be better for all projects to just use the "bug" tag for consistency.

If you want to get all bugs in a specific project it would be pretty easy to
search for stories with a tag of "bug" and a project of "X".


Because stories are a cross-project concept and tags are applied to
the story, it's possible for a story with tasks for both
openstack/nova and openstack/cinder projects to represent a bug for
one and a new feature for the other. If they're tagged nova-bug and
cinder-feature then that would allow them to match the queries those
teams have defined for their worklists, boards, et cetera. It's of
course possible to just hand-wave that these intersections are rare
enough to ignore and go ahead and use generic story tags, but the
recommendation is there to allow teams to avoid disagreements in
such cases.


Would it be possible to automate that tagging on import? Essentially 
tag every lp bug that is not wishlist with $PROJECT-bug and wishlists 
with $PROJECT-feature. Otherwise someone has to go through and 
re-categorize everything in Storyboard.


I don't know if everyone would want that, but if this is the 
recommended practice I would want it for Oslo.


I would think this is a common want, at least for the projects in the 
central box in the project map [1].


-- Tom Barron (tbarron)

[1] https://www.openstack.org/openstack-map



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] User Survey Results

2018-09-21 Thread Tom Barron

More PTG follow up :)

The foundation shared results of the User Survey for Manila, where 
users were asked "Which OpenStack Shared File Systems (Manila) 
driver(s) are you using?"


I've uploaded these in a Google Sheets document here [1].  The first 
tab has the raw results as passed to me by the Foundation, the second 
tabulates these, and the third summarizes with a bar chart.


Do let me know if you see any errors :)

-- Tom Barron (tbarron)

[1] 
https://docs.google.com/spreadsheets/d/1J83vnnuuADVwACeJq1g8snxMRM5VAfTbBD6svVeH4yg/edit?usp=sharing



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Team Photos

2018-09-21 Thread Tom Barron

Manila team photos from the recent Stein PTG in Denver [1].

-- Tom Barron (tbarron)

[1] 
https://www.dropbox.com/sh/2pmvfkstudih2wf/AADI7Yo-wuJ2nmAIuYFEun5Ea/Manila?dl=0_nav_tracking=1


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Stein PTG summary

2018-09-21 Thread Tom Barron
We've summarized the manila PTG sessions in this etherpad [1] and I've 
included its contents below.


Please feel free to supplement/correct as appropriate, or to follow up 
on this mailing list.


-- Tom Barron (tbarron)

We'll use this etherpad to distill AIs, focus areas, etc. from the Stein PTG

Source: https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018

Retrospective:
*  https://etherpad.openstack.org/p/manila-rocky-retrospective
* Is Dustin willing to be our official bug czar [dustins] Yes! ++
* Should plan regional bug smash days and participate in and publish 
regional OpenStack/open source events
* Need someone to lead? (See AI for tbarron)
* Need more/earlier review attention for approved specs - PTL needs to 
keep attention on review priorities
* Use our wiki rather than misc. etherpads to track work, review focus, 
liaisons, etc.
* Add "Bug Czar", bug deputies info on the wiki

Survey Results: 
https://docs.google.com/spreadsheets/d/1J83vnnuuADVwACeJq1g8snxMRM5VAfTbBD6svVeH4yg/edit#gid=843813633


Work planned for Stein:
* governance goals
* convert gate jobs to python3
* Assignee: vkmc
* need to track progress, including 3rd party jobs, on 
the wiki
* upgrade health checker (governance goal)
* Assignee: ?
* Rocky backlog
* continue priority for access rules
* Assignee: zhongjun
* need to track driver side work on the wiki
* open to continuing json schema request validation
* Assignee: no one appears to be working it currently 
though
* gouthamr will reach out to original authors and 
report status during the next week's weekly meeting
* Manila CSI
* Assignee: gouthamr, vkmc, tbarron
* hodgepodge is setting up biweekly meeting to drive 
convergence of disparate efforts
* https://etherpad.openstack.org/p/sig-k8s-2018-denver-ptg
* Active-active share service
* Assignee: gouthamr
* we were going to wait for cinder action because of downstream 
tooz back end dependencies
* but later Cinder decided to move ahead aggressively on this 
since it's needed for Edge distributed control plane topology so we may start 
work in manila on this in this cycle too
* openstack client integration
* Assignee: gouthamr will drive, distribute work
* We might be able to get an Outreachy intern to help with this 
+++
* openstack sdk integration
* Assignee: amito will drive, distribute work
* We might be able to get an Outreachy intern to help with this 
+++
* telemetry extension
* Assignee: vkmc
* share usage meter, doc, testing
* manage/unmanage for DHSS=True
* Assignee: ganso
* create share from snapshot in another pool/backend
        * Assignee: ganso
* replication for DHSS=True
* Assignee: ganso
* OVS -> OVN
        * Assignee: tbarron/gouthamr
* 3rd party backends may only work with OVS?
* Manila UI Plugin
* Assignee: vkmc
* Features mapping: how outdated are we?
* Selenium test enablement [dustins: I can help/do this] ++
* uWSGI enablement
* Assignee: vkmc
* By default in devstack?

Agreements:
* Hold off on storyboard migration till attachments issue is resolved
    * Don't need placement service but (like cinder) can use Jay Pipes' DB 
generation synchronization technique to avoid scheduler races
* Can publish info to placement if / when that is useful for 
nova
* Use "low-hanging-fruit" as a bug tag on trivial bugs
* Pro-active backport policy for non-vendor fixes (Also see AIs for 
gouthamr/ganso)
* Vendor fixes need a +1 or +2 from a vendor reviewer if they 
are active

Action Items:
* tbarron: follow up on user survey question, how to refresh it, talk 
with manila community
* tbarron: refurbish the manila wiki as central landing spot for review 
focus, roles (liaisons, bug czar, etc.) rather than us maintaining miscellaneous 
etherpads that get lost.
* tbarron: get agreement on mid-cycle meetup (maybe virtual) and 
Americas bug smash +1
* ganso: post proposed stable branch policy for review & talk with 
gouthamr about it
* gouthamr: post proposed revised review policy for review
* gouthamr: drive question of graduation of Experimental features via 
weekly manila meeting/dev mail list
* dustins: Send email to the Manila 

[openstack-dev] [manila] manila core team cleanup

2018-09-20 Thread Tom Barron
Mark Sturdevant recently contacted me to say that due to changes in 
his job responsibilities he isn't currently able to stay sufficiently 
involved in the manila project to serve as a core reviewer.


Thanks to Mark for his great service in the past!  We'd love to have 
him back if things change.


-- Tom Barron (tbarron)



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] initial schedule for PTG

2018-09-09 Thread Tom Barron
Manila meets Monday and Tuesday this week in Steamboat [1] from 9am to 
5pm UTC-0600.


The team came up with a rich set of discussion topics and I've 
arranged the manila PTG etherpad [2] so that they are all included in 
a schedule for the two days.  We'll need to make adjustments if some 
topics go fast and others need more time, but please take a 
preliminary look now and let me know if you see anything that you 
think will need more time or that is scheduled for a bad time.


I tried to start with stuff like backlog, cross-project goals, etc. 
and Stein deadlines so that we have a framework for decisions about 
what work we can fit into the Stein cycle.


Also, we have remote attendees participating, I think all to the east 
of Denver, so I tried to shift topics known to be of interest to those 
folks earlier in the day.


-- Tom Barron (tbarron)

[1] https://www.openstack.org/assets/ptg/Denver-map.pdf

[2] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] No meeting 13 September 2018

2018-09-06 Thread Tom Barron

Manila folks,

You likely already know, but we won't have our regular community 
meeting on irc next week because we'll be doing the PTG.


See you there!

-- Tom Barron (tbarron)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] retrospective and forum brainstorm etherpads

2018-09-06 Thread Tom Barron

Devs, Ops, community:

We're going to start off the manila PTG sessions Monday with a 
retrospective on the Rocky cycle, using this etherpad [1].  Please
enter your thoughts on what went well and what we should improve in 
Stein so that we take it into consideration.


It's also time (until next Wednesday) to brainstorm topics for Berlin 
Forum.  Please record these here [2].  We'll discuss this subject at 
the PTG as well.


Thanks!

-- Tom Barron (tbarron)

[1] https://etherpad.openstack.org/p/manila-rocky-retrospective

[2] https://etherpad.openstack.org/p/manila-berlin-forum-brainstorm

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] no meeting today

2018-08-30 Thread Tom Barron
I've had to travel unexpectedly, won't be able to chair the meeting 
today, and no one posted any agenda topics this week.


The rocky release is imminent so we'll open up stable/rocky for 
backports soon.


Stein specs repo is open.

Please put PTG planning ideas in the etherpad [1].  PTG is less than 
two weeks away!


-- Tom Barron (tbarron)

[1] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Tom Barron

On 17/08/18 14:09 -0500, Jay S Bryant wrote:



On 8/17/2018 1:34 PM, Sean McGinnis wrote:

Has there been a discussion on record of how use of placement by cinder
would affect "standalone" cinder (or manila) initiatives where there is a
desire to be able to run cinder by itself (with no-auth) or just with
keystone (where OpenStack style multi-tenancy is desired)?

Tom Barron (tbarron)


A little bit. That would be one of the pieces that needs to be done if we were
to adopt it.

Just high level brainstorming, but I think we would need something like we have
now with using tooz where if it is configured for it, it will use etcd for
distributed locking. And for single node installs it just defaults to file
locks.


Sean and Tom,

That brief discussion was in Vancouver: 
https://etherpad.openstack.org/p/YVR-cinder-placement


Thanks, Jay.



But as Sean indicated I think the long story short was that we would 
make it so that we could use the placement service if it was available 
but would leave the existing functionality in the case it wasn't 
there.


I think that even standalone, if I'm running a scheduler (i.e., not 
doing the emberlib version of standalone), then I'm likely to want to run 
them active-active on multiple nodes and will need a solution for the 
current races.  So even standalone we face the question of whether we use 
placement to solve that issue or introduce some coordination among 
the schedulers themselves.


-- Tom Barron (tbarron)



Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Tom Barron

On 17/08/18 13:34 -0500, Sean McGinnis wrote:


Has there been a discussion on record of how use of placement by cinder
would affect "standalone" cinder (or manila) initiatives where there is a
desire to be able to run cinder by itself (with no-auth) or just with
keystone (where OpenStack style multi-tenancy is desired)?

Tom Barron (tbarron)



A little bit. That would be one of the pieces that needs to be done if we were
to adopt it.

Just high level brainstorming, but I think we would need something like we have
now with using tooz where if it is configured for it, it will use etcd for
distributed locking. And for single node installs it just defaults to file
locks.


So I want to understand better what problems placement would solve and 
whether those problems need to be solved even in the cinder/manila 
standalone case.  And if they do have to be solved in both cases, why 
not use the same solution for both cases?


That *might* mean running the placement service even in the standalone 
case if it's sufficiently lightweight and can be run without the rest 
of nova.  (Whether it's "under" nova umbrella doesn't matter for this 
decoupling - nothing I'm saying here is intended to argue against e.g. 
Mel's or Dan's points in this thread.)


-- Tom Barron (tbarron)





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-17 Thread Tom Barron

On 17/08/18 11:47 -0500, Jay S Bryant wrote:



On 8/17/2018 10:59 AM, Ed Leafe wrote:

On Aug 17, 2018, at 10:51 AM, Chris Dent  wrote:

One of the questions that has come up on the etherpad is about how
placement should be positioned, as a project, after the extraction.
The options are:

* A repo within the compute project
* Its own project, either:
 * working towards being official and governed
 * official and governed from the start

I would like to hear from the Cinder and Neutron teams, especially those who 
were around when those compute sub-projects were split off into their own 
projects. Did you feel that being independent of compute helped or hindered 
you? And to those who are in those projects now, is there any sense that things 
would be better if you were still part of compute?

Ed,

I started working with Cinder right after the split had taken place.  
I have had several discussions as to how the split took place and why 
over the years since.


In the case of Cinder we split because the pace at which things were 
changing in the Cinder project had exceeded what could be handled by 
the Nova team.  Nova has always been a busy project and the changes 
coming in for Nova Volume were getting lost in the larger Nova 
picture.  So, Nova Volume was broken out to become Cinder so that 
people could focus on the storage aspect of things and get change 
through more quickly.


So, I think, for the most part that it has been something that has 
benefited the project.  The exception would be all the challenges that 
have come working cross project on changes that impact both Cinder and 
Nova but that has improved over time.  Given the good leadership I 
envision for the Placement Service I think that is less of a concern.


For the placement service, I would expect that there will be a greater 
rate of change once more projects are using it.  This would also 
support splitting the service out.

My opinion has been that Placement should have been separate from the start. 
The longer we keep Placement inside of Nova, the more painful it will be to 
extract, and hence the likelihood of that ever happening is greatly diminished.


I do agree that pulling the service out sooner than later is probably best.


Has there been a discussion on record of how use of placement by 
cinder would affect "standalone" cinder (or manila) initiatives where 
there is a desire to be able to run cinder by itself (with no-auth) or 
just with keystone (where OpenStack style multi-tenancy is desired)?


Tom Barron (tbarron)


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-17 Thread Tom Barron

On 17/08/18 09:05 -0500, Jay S Bryant wrote:



On 8/16/2018 4:03 PM, Kendall Nelson wrote:


Hello :)

On Thu, Aug 16, 2018 at 12:47 PM Jay S Bryant <jungleb...@gmail.com> wrote:


   Hey,

   Well, the attachments are one of the things holding us up along
   with reduced participation in the project and a number of other
   challenges.  Getting the time to prepare for the move has been
   difficult.


I wouldn't really say we have reduced participation -- we've always 
been a small team. In the last year, we've actually seen more 
involvement from new contributors (new and future users of sb) which 
has been awesome :) We even had/have an outreachy intern that has 
been working on making searching and filtering even better.


Prioritizing when to invest time to migrate has been hard for 
several projects so Cinder isn't alone, no worries :)
Sorry, I wasn't clear here.  I was referencing greatly reduced 
participation in Cinder.  I had been hoping to get more time to dig 
into StoryBoard and prepare the team for migration but that has been 
harder given an increased need to do other work in Cinder.


I have noticed that the search in StoryBoard was better so that was 
encouraging.


   I am planning to take some time before the PTG to look at how
   Ironic has been using Storyboard and take this forward to the team
   at the PTG to try and spur the process along.


Glad to hear it! Once I get the SB room on the schedule, you are 
welcome to join the conversations there.  We would love any feedback 
you have on what the 'other challenges' are that you mentioned 
above.
Yeah, I think it would be good to have time at the PTG to get Manila, 
Cinder, Oslo, etc. together to talk about this.  This will give me 
incentive to do some more experimenting before the PTG.  :-)


+1

- Tom Barron  (tbarron)



See you in Denver.  :-)


   Jay Bryant - (jungleboyj)


   On 8/16/2018 2:22 PM, Kendall Nelson wrote:

   Hey :)

   Yes, I know attachments are important to a few projects. They are
   on our todo list and we plan to talk about how to implement them
   at the upcoming PTG[1].

   Unfortunately, we have had other things that are taking priority
   over attachments. We would really love to migrate you all, but if
   attachments is what is really blocking you and there is no other
   workable solution, I'm more than willing to review patches if you
   want to help out to move things along a little faster :)

   -Kendall Nelson (diablo_rojo)

   [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning

   On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant
    <jungleb...@gmail.com> wrote:



   On 8/15/2018 11:43 AM, Chris Friesen wrote:
   > On 08/14/2018 10:33 AM, Tobias Urdin wrote:
   >
   >> My goal is that we will be able to swap to Storyboard
   during the
   >> Stein cycle but
   >> considering that we have a low activity on
   >> bugs my opinion is that we could do this swap very easily
   anything
   >> soon as long
   >> as everybody is in favor of it.
   >>
   >> Please let me know what you think about moving to Storyboard?
   >
   > Not a puppet dev, but am currently using Storyboard.
   >
   > One of the things we've run into is that there is no way to
   attach log
   > files for bug reports to a story. There's an open story on
   this[1]
   > but it's not assigned to anyone.
   >
   > Chris
   >
   >
    > [1] https://storyboard.openstack.org/#!/story/2003071
   >
   Cinder is planning on holding on any migration, like Manila,
   until the
   file attachment issue is resolved.

   Jay
   >
   
__

   >
   > OpenStack Development Mailing List (not for usage questions)
   > Unsubscribe:
   >
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   >
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


   
__
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Thanks!

- Kendall (diablo_rojo)





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.o

Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-15 Thread Tom Barron

On 15/08/18 10:43 -0600, Chris Friesen wrote:

On 08/14/2018 10:33 AM, Tobias Urdin wrote:


My goal is that we will be able to swap to Storyboard during the Stein cycle, but
considering that we have low activity on
bugs, my opinion is that we could do this swap very easily any time soon, as long
as everybody is in favor of it.

Please let me know what you think about moving to Storyboard?


Not a puppet dev, but am currently using Storyboard.

One of the things we've run into is that there is no way to attach log 
files for bug reports to a story.  There's an open story on this[1] 
but it's not assigned to anyone.




Yeah, given that gerrit logs are ephemeral and given that users often 
don't have the savvy to cut and paste exactly the right log fragments 
for their issues I think this is a pretty big deal.  When I triage 
bugs I often ask for logs to be uploaded.  This may be less of a big 
deal for puppet than for projects like manila or cinder where there 
are a set of ongoing services in a custom configuration and there's 
often no clear way for the bug triager to set up a reproducer.


We're waiting on resolution of [1] before moving ahead with Storyboard 
for manila.


-- Tom


Chris


[1] https://storyboard.openstack.org/#!/story/2003071

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila][PTL][Election] PTL candidacy for the Stein cycle

2018-07-29 Thread Tom Barron

Fellow Stackers,

I just served a term as Manila PTL for Rocky and am writing to say 
that if you choose me I'd like to also take on that role for the Stein 
release cycle.


I think I've learned the mechanics now and can focus more energy on 
priorities.


Today manila itself is pretty solid.  It doesn't need lots of new 
features.  Back end vendors always want to expose new bells and 
whistles, and that's fine if they help with the review load and 
contribute to the community.  Reciprocity makes the world go around.


But I see the adoption curve for manila just now ramping up and my own 
focus will be to enable that by working to harden manila and to make 
it easier to use, both within and outside of openstack itself.


Manila offers file-shares as a service -- self-service, RWX, random 
access storage -- and abstracts over a variety of file-systems and 
sharing protocols.


Manila doesn't care if the consumers of the file systems live within 
openstack or not.  It's just a matter of network reachability and the 
access rights that manila manages.


Besides being able to run as one part of a full openstack deployment, 
manila can run on its own, with keystone to enable multi-tenancy, or 
completely standalone.


So I see manila as a true Open Infrastructure project.  It can turn a 
rack of unconfigured equipment into self-service shared file systems 
without limiting itself to the (very important) Virtual Private Server 
use case [1].


I will, accordingly, work to position manila as *the* open source 
solution for deploying RWX random access storage across data centers 
and across clouds.  To that end we need to:


* get manila into generalized cloud providers like CSI [2]

* get manila into the openstack sdk and openstack client

* get more of the almost thirty manila back ends exposed in
  production-quality deployment tools like tripleo, kolla-*,
  and juju.

* continue to fix bugs, improve our CI, run more stuff in gate with
  python 3

These are the things that will drive me if you choose me as manila PTL.

Thanks for listening,

-- Tom Barron (tbarron)

[1] 
https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations

[2] https://github.com/container-storage-interface/spec/blob/master/spec.md

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Tom Barron
I don't do enough in TripleO to chime in on the list, but I can't 
think of a more helpful PTL!


Thank you for your service.

On 25/07/18 10:31 -0700, Wesley Hayutin wrote:

On Wed, Jul 25, 2018 at 9:24 AM Alex Schultz  wrote:


Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward.  I look forward to continuing our efforts with everyone.

Thanks,
-Alex



Thanks for all the hard work, long hours and leadership!
You have done a great job, congrats on a great cycle.

Thanks



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Wes Hayutin

Associate MANAGER

Red Hat



hayu...@redhat.com    T: +19197544114    IRC: weshay






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Planning Etherpad for Denver 2018 PTG

2018-07-09 Thread Tom Barron
Here's an etherpad we can use for planning for the Denver PTG in 
September [1].  Please add topics as they occur to you!


-- Tom Barron (tbarron)

[1] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] No meeting today July 5

2018-07-05 Thread Tom Barron
We have a fair number of team members taking a holiday today and no 
new agenda items were added this week so let's skip today's community 
meeting.  Next manila community meeting will be July 12 at 1500 UTC.


https://wiki.openstack.org/wiki/Manila/Meetings

Let's keep up with the reviews on outstanding work that needs to 
complete by Milestone 3:


https://etherpad.openstack.org/p/manila-rocky-review-focus

Thanks!

-- Tom Barron (tbarron)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Rocky Review Focus

2018-06-25 Thread Tom Barron
It's less than a month till Milestone 3 so I've posted an etherpad 
with the new Manila driver and feature work that we've agreed to try 
to merge in Rocky:


 https://etherpad.openstack.org/p/manila-rocky-review-focus

These are making good progress but in general need more review 
attention.  Please take a look and add your name to the etherpad next 
to particular reviews so that we can all see where we need more 
reviewers.


Please don't feel like you need to be a current core reviewer or 
established in the manila community to add your name to the etherpad 
or to do reviews!


-- Tom Barron


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-12 Thread Tom Barron

On 12/06/18 15:25 +0200, Kendall Nelson wrote:

Yes! I can definitely set Manila up in storyboard-dev. I'll get the imports
done before the end of the week :)

-Kendall (diablo_rojo)


Thanks much!



On Tue, 12 Jun 2018, 2:53 pm Tom Barron,  wrote:


On 12/06/18 10:57 +0200, Kendall Nelson wrote:
>Another option for playing around things- I am happy to do a test
migration
>and populate our storyboard-dev instance with your real data from lp. The
>last half a dozen teams we have migrated have been handled this way.
>

Can we do this for manila?  I believe you did a test migration already
but not to a sandbox that we could play with?  Or maybe you did the
sandbox as well but I missed that and didn't play with it?

Before we cutover I want to:
 * add some new sample bugs and blueprints
 * set up worklists for our release milestones
 * set up some useful worklists and boards and search queries for
   stuff that we track only ad-hoc today
 * figure a place to document these publicly

-- Tom

>Playing around with StoryBoard ahead of time is a really good idea
>because
>it does work differently from lp. I don't think its more complicated, it
>just takes some getting used to. It forces a lot less on its users in
terms
>of constructs and gives users a lot more flexibility to use it in a way
>that is most effective for them. For a lot of people this involves a
mental
>re-frame of task management and organization of work but its not a
>herculean effort.
>
>-Kendall (diablo_rojo)
>
>On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann 
wrote:
>
>> Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +:
>> > Jeremy Stanley  wrote:
>> >
>> > >I'm just going to come out and call bullshit on this one. How many of
>> the >800 official OpenStack deliverable repos have a view like that with
>> any actual relevant detail? If it's "standard" then certainly more than
>> half, right?
>> >
>> > Well, that's a bit rude, so I'm not going to get in a swearing contest
>> over whether Nova, Neutron and Cinder are more "important" than 800+
other
>> projects. I picked a handful of projects that I'm most interested in and
>> which also happened to have really clear, accessible and easy to
understand
>> information on what they have delivered in the past and are planning to
>> deliver in the future. If I slighted your favorite projects I apologize.
>> >
>> > So, are you saying the information shown in the examples I gave is not
>> useful?
>> >
>> > Or just that I've been lucky in the past that the projects I'm most
>> interested in do a better than typical job of managing releases but the
>> future is all downhill?
>> >
>> > If you're saying it's not useful info and we're better off without it
>> then I'll just have to disagree. If you're saying that it has been
replaced
>> with something better, please share the URLs.
>> >
>> > I'm all for improvements, but saying "only a few people were doing
>> something useful so we should throw it out and nobody do it" isn't a
path
>> to improvement. How about we discuss alternate (e.g.
>> better/easier/whatever) ways of making the information available.
>> >
>>
>> This thread isn't going in a very productive direction. Please
>> consider your tone as you reply.
>>
>> The release team used to (help) manage the launchpad series data.
>> We stopped doing that a long time ago, as Jeremy pointed out, because
>> it was not useful to *the release team* in the way we were managing
>> the releases. We stopped tracking blueprints and bug fixes to try
>> to predict which release they would land in and built tools to make
>> it easier for teams to declare what they had completed through
>> release notes instead.
>>
>> OpenStack does not have a bunch of project managers signed up to
>> help this kind of information, so it was left up to each project
>> team to track any planning information *they decided was useful*
>> to do their work.  If that tracking information happens to be useful
>> to anyone other than contributors, I consider that a bonus.
>>
>> As we shift teams over to Storyboard, we have another opportunity
>> to review the processes and to decide how to use the new tool. Some
>> teams with lightweight processes will be able to move directly with
>> little impact. Other teams who are doing more tracking and planning
>> will need to think about how to do that. The new tool provides some
>> flexibility, and as with any other big change in our community,
>> we're likely to see a bit of divergence before we collectively
>> discover wh

Re: [openstack-dev] [tc] [summary] Organizational diversity tag

2018-06-12 Thread Tom Barron

On 12/06/18 11:44 +0200, Thierry Carrez wrote:

Hi!

We had a decently-sized thread on how to better track organizational 
diversity, which I think would benefit from a summary.


The issue is that the current method (which uses a formula to apply 
single-vendor and diverse-affiliation tags) is not working so well 
anymore, with lots of low-activity projects quickly flapping between 
states.


I wonder if there's a succinct way to present the history rather than 
just the most recent tag value.  As a deployer I can then tell the 
difference between a project that consistently lacks 
diverse-affiliation and a project that occasionally or only recently 
lacks diverse-affiliation.




Suggestions included:

- Drop tags, write a regular report instead that can account for the 
subtlety of each situation (ttx). One issue here is that it's 
obviously a lot more work than the current situation.


- Creating a "low-activity" tag that would clearly exempt some teams 
from diversity tagging (mnaser). One issue is that this tag may drive 
contributors away from those teams.


- Drop existing tags, and replace them by voluntary tagging on how 
organizationally-diverse core reviewing is in the team (zaneb). This 
suggestion triggered a sort of side thread on whether this is actually 
a current practice. It appears that vertical, vendor-sensitive teams 
are more likely to adopt such (generally unwritten) rule than 
horizontal teams where hats are much more invisible.


One important thing to remember is that the diversity tags are 
supposed to inform deployers, so that they can make informed choices 
on which component they are comfortable to deploy. So whatever we come 
up with, it needs to be useful information for deployers, not just a 
badge of honor for developers, or a statement of team internal policy.


Thoughts on those suggestions? Other suggestions?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-12 Thread Tom Barron

On 12/06/18 10:57 +0200, Kendall Nelson wrote:

Another option for playing around things- I am happy to do a test migration
and populate our storyboard-dev instance with your real data from lp. The
last half a dozen teams we have migrated have been handled this way.



Can we do this for manila?  I believe you did a test migration already 
but not to a sandbox that we could play with?  Or maybe you did the 
sandbox as well but I missed that and didn't play with it?


Before we cutover I want to:
* add some new sample bugs and blueprints
* set up worklists for our release milestones
* set up some useful worklists and boards and search queries for
  stuff that we track only ad-hoc today
* figure a place to document these publicly

-- Tom

Playing around with StoryBoard ahead of time is a really good idea 
because

it does work differently from lp. I don't think its more complicated, it
just takes some getting used to. It forces a lot less on its users in terms
of constructs and gives users a lot more flexibility to use it in a way
that is most effective for them. For a lot of people this involves a mental
re-frame of task management and organization of work but its not a
herculean effort.

-Kendall (diablo_rojo)

On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann  wrote:


Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +:
> Jeremy Stanley  wrote:
>
> >I'm just going to come out and call bullshit on this one. How many of
the >800 official OpenStack deliverable repos have a view like that with
any actual relevant detail? If it's "standard" then certainly more than
half, right?
>
> Well, that's a bit rude, so I'm not going to get in a swearing contest
over whether Nova, Neutron and Cinder are more "important" than 800+ other
projects. I picked a handful of projects that I'm most interested in and
which also happened to have really clear, accessible and easy to understand
information on what they have delivered in the past and are planning to
deliver in the future. If I slighted your favorite projects I apologize.
>
> So, are you saying the information shown in the examples I gave is not
useful?
>
> Or just that I've been lucky in the past that the projects I'm most
interested in do a better than typical job of managing releases but the
future is all downhill?
>
> If you're saying it's not useful info and we're better off without it
then I'll just have to disagree. If you're saying that it has been replaced
with something better, please share the URLs.
>
> I'm all for improvements, but saying "only a few people were doing
something useful so we should throw it out and nobody do it" isn't a path
to improvement. How about we discuss alternate (e.g.
better/easier/whatever) ways of making the information available.
>

This thread isn't going in a very productive direction. Please
consider your tone as you reply.

The release team used to (help) manage the launchpad series data.
We stopped doing that a long time ago, as Jeremy pointed out, because
it was not useful to *the release team* in the way we were managing
the releases. We stopped tracking blueprints and bug fixes to try
to predict which release they would land in and built tools to make
it easier for teams to declare what they had completed through
release notes instead.

OpenStack does not have a bunch of project managers signed up to
help this kind of information, so it was left up to each project
team to track any planning information *they decided was useful*
to do their work.  If that tracking information happens to be useful
to anyone other than contributors, I consider that a bonus.

As we shift teams over to Storyboard, we have another opportunity
to review the processes and to decide how to use the new tool. Some
teams with lightweight processes will be able to move directly with
little impact. Other teams who are doing more tracking and planning
will need to think about how to do that. The new tool provides some
flexibility, and as with any other big change in our community,
we're likely to see a bit of divergence before we collectively
discover what works and teams converge back to a more consistent
approach.  That's normal, expected, and desirable.

I recommend that people spend a little time experimenting on their
own before passing judgement or trying to set standards.

Start by looking at the features of the tool itself.  Set up a work
list and add some stories to it. Set up a board and see how the
automatic work lists help keep it up to date as the story or task
states change. Do the same with a manually managed board. If you
need a project to assign a task to because yours hasn't migrated
yet, use openstack/release-test.

Then think about the workflows you actually use -- not just the
ones you've been doing because that's the way the project has always
been managed. Think about how those workflows might translate over
to the new tool, based on its features. If you're not sure, ask and
we can see what other teams are doing or 

Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Tom Barron

On 04/06/18 17:52 -0400, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:

On 02/06/18 13:23, Doug Hellmann wrote:
> Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
>> On 01/06/18 12:18, Doug Hellmann wrote:
>
> [snip]
>
>>> Is that rule a sign of a healthy team dynamic, that we would want
>>> to spread to the whole community?
>>
>> Yeah, this part I am pretty unsure about too. For some projects it
>> probably is. For others it may just be an unnecessary obstacle, although
>> I don't think it'd actually be *un*healthy for any project, assuming a
>> big enough and diverse enough team (which should be a goal for the whole
>> community).
>
> It feels like we would be saying that we don't trust 2 core reviewers
> from the same company to put the project's goals or priorities over
> their employer's.  And that doesn't feel like an assumption I would
> want us to encourage through a tag meant to show the health of the
> project.

Another way to look at it would be that the perception of a conflict of
interest can be just as damaging to a community as somebody actually
acting on a conflict of interest, and thus having clearly-defined rules
to manage conflicts of interest helps protect everybody (and especially
the people who could be perceived to have a conflict of interest but
aren't, in fact, acting on it).


That's a reasonable perspective. Thanks for expanding on your original
statement.


Apparently enough people see it the way you described that this is
probably not something we want to actively spread to other projects at
the moment.


I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.


Just some data.  Manila has the policy (except for very trivial or 
urgent commits, where one +2 +W can be sufficient).


When the project originated, NetApp cores and a Mirantis core who was a 
contractor for NetApp predominated.  I doubt that there was any 
perception of biased decisions -- the PTL at the time, Ben 
Swartzlander, is the kind of guy who is quite good at doing what he 
thinks is best for the project and not listening to any folks within 
his own company who might suggest otherwise, not that I have any 
evidence of anything like that either :).  But at some point someone 
suggested that our +2 +W rule, already in place, be augmented with a 
requirement that the two +2s come from different affiliations and the 
rule was adopted.


So far that seems to work OK though affiliations have shifted and 
NetApp cores are no longer quantitatively dominant in the project. 
There are three companies with two cores and so far as I can see they 
don't tend to vote together more than any other two cores, on the one 
hand, but on the other hand it isn't hard to get another core +2 if a 
change is ready to be merged.


None of this is intended as an argument that this rule be expanded to 
other projects, it's just data as I said.


-- Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Core team updates

2018-06-01 Thread Tom Barron

Hi all,

Clinton Knight and Valeriy Ponomaryov have been focusing on projects 
outside Manila for some time so I'm removing them from the core team. 

Valeriy and Clinton made great contributions to Manila over the years 
both as reviewers and as contributors.  We are fortunate to have been 
able to work with them and they are certainly welcome back to the core 
team in the future if they return to active reviewing.


Clinton & Valeriy, thank you for your contributions!

-- Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceph multiattach support

2018-05-31 Thread Tom Barron

On 31/05/18 10:00 +0800, fengyd wrote:

Hi,

I'm using Ceph for cinder backend.
Do you have any plan to support multiattach for Ceph backend?

Thanks

Yafeng


Yafeng,

Would you describe your use case for cinder multi-attach with ceph 
backend?


I'd like to understand better whether manila (file share 
infrastructure as a service) with  CephFS native or CephFS-NFS 
backends would (as Erik McCormick also suggested) meet your needs.


-- Tom Barron


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] No meeting 31 May 2018

2018-05-30 Thread Tom Barron
We don't have anything on the agenda yet for this week's manila 
meeting, and my travel plans just got shuffled so I'll be in the air at 
our regular time.  Let's cancel this week's meeting and start 
up again the following week.


We'll have a summary of relevant Summit events then.

-- Tom Barron (tbarron)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] manila operator's feedback forum etherpad available

2018-05-17 Thread Tom Barron
Next week at the Summit there is a forum session dedicated to Manila 
opertors' feedback on Thursday from 1:50-2:30pm [1] for which we have 
started an etherpad [2].  Please come and help manila developers do 
the right thing!  We're particularly interested in experiences running 
the OpenStack share service at scale and overcoming any obstacles to 
deployment but are interested in getting any and all feedback from 
real deployments so that we can tailor our development and maintenance 
efforts to real world needs.


Please feel free and encouraged to add to the etherpad starting now.

See you there!

-- Tom Barron
  Manila PTL
  irc: tbarron

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21780/manila-ops-feedback-running-at-scale-overcoming-barriers-to-deployment
[2] https://etherpad.openstack.org/p/YVR18-manila-forum-ops-feedback

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] no community meeting Thurs 24 May 2018

2018-05-17 Thread Tom Barron
There will be no Manila weekly meeting, Thursday May 24, given the 
Vancouver Summit is going on that week.


-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Tom Barron

Jimmy,

Also can we 's/barriers/overcoming barriers/' in the title of the 
manila session?


Thanks!

-- Tom

On 26/04/18 15:27 -0500, Jimmy McArthur wrote:

No problem.  Done :)


Colleen Murphy 
April 26, 2018 at 1:23 PM
Hi Jimmy,

I have a conflict on Thursday afternoon. Could I propose swapping 
these two sessions:


Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers 
to deployment

Thursday 1:50-2:30 Default Roles

I've gotten affirmation from Tom and Lance on the swap, though if 
this causes problems for anyone else I'm happy to retract this 
request.


Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
April 25, 2018 at 4:07 PM
Hi everyone -

Please have a look at the Vancouver Forum schedule: https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing 
(also attached as a CSV) The proposed schedule was put together by 
two members from UC, TC and Foundation.


We do our best to avoid moving scheduled items around as it tends to 
create a domino effect, but we do realize we might have missed 
something.  The schedule should generally be set, but if you see a 
major conflict in either content or speaker availability, please 
email speakersupp...@openstack.org.


Thanks all,
Jimmy
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLS] Project Updates & Project Onboarding

2018-03-28 Thread Tom Barron

Many apologies for sending this to the openstack-dev list;
I thought I had removed the list from my address list but
clearly did not.

On 28/03/18 14:34 -0400, Tom Barron wrote:
Would you be so kind as to add 

<... snip ...>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLS] Project Updates & Project Onboarding

2018-03-28 Thread Tom Barron
Would you be so kind as to add Victoria Martinez de la Cruz and Dustin 
Schoenbrun to the manila project Onboarding session [1] ?  They are 
confirmed for conference attendance.


Thanks much!

-- Tom Barron

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21637/manila-project-onboarding


On 21/03/18 22:14 +, Kendall Nelson wrote:

Hello!

Project Updates[1] & Project Onboarding[2] sessions are now live on the
schedule!

We did as best as we could to keep project onboarding sessions adjacent to
project update slots. Though, given the differences in duration and the
number of each we have per day that got increasingly difficult as the days
went on, hopefully what is there will work for everyone.

If there are any speakers you need added to your slots, or any conflicts
you need addressed, feel free to email speakersupp...@openstack.org and
they should be able to help you out.

Thanks!

-Kendall Nelson (diablo_rojo)

[1]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Update
[2]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Onboarding



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports

2018-03-27 Thread Tom Barron

On 27/03/18 15:53 +0200, Thomas Goirand wrote:

Hi,

As some of you already know, after some difficult time after I left my
past employer, I'm back! And I don't plan on giving-up, ever... :)

The repositories:
=
Today, it's my pleasure to announce today the general availability of
Debian packages for the Queens OpenStack release. These are available in
official Debian Sid (as usual), and also as a Stretch (unofficial)
backports. These packages have been tested successfully with Tempest.

Here's the address of the (unofficial) backport repositories:

deb http://stretch-queens.debian.net/debian stretch-queens-backports main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports main
deb http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main

The repository key is here:
wget -O - http://stretch-queens.debian.net/debian/dists/pubkey.gpg | \
apt-key add -

Please note that stretch-queens.debian.net is just an IN CNAME pointer to
the server of my new employer, Infomaniak, and that the real server name is:

stretch-queens.infomaniak.ch

So, that server is of course located in Geneva, Switzerland. Thanks to
my employer for sponsoring that server, and allowing me to build these
packages during my work time.

What's new in this release
==
1/ Python 3
---
The new stuff is ... the full switch Python 3!

As far as I understand, apart from Gentoo, no other distribution has
switched to Python 3 yet. Both RDO and Ubuntu are planning to do it for
Rocky (at least that's what I've been told). So once more, Debian is on
the edge. :)

While there is still dual Python 2/3 support for clients (with priority
to Python 3 for binaries in /usr/bin), all services have been switched
to Py3.

Building the packages worked surprisingly well. I was secretly expecting
more failures. The only real collateral damage is:

- manila-ui (no Py3 support upstream)


Just a note of thanks for calling our attention to this issue.  
manila-ui had been rather neglected and is getting TLC now.
We'll certainly get back to you when we've got it working with Python 
3.


-- Tom Barron


As the Horizon package switched to Python 3, it's unfortunately
impossible to keep these plugins to use Python 2, and therefore,
manila-ui is now (from a Debian packaging standpoint) RC buggy, and
shall be removed from Debian Testing.

Also, Django 2 will sooner or later be the only option in Debian Sid.
It'd be great if Horizon's patches could be merged, and plugins adapt ASAP.

Also, a Neutron plugins isn't released upstream yet for Queens, and
since the Neutron package switched to Python 3, the old Pike plugin
packages are also considered RC buggy (and it doesn't build with Queens
anyway):
- networking-mlnx

The fate of the above packages is currently unknown. Hopefully, there's
going to be upstream work to make them in a packageable state (which
means, for today's Debian, Python 3.6 compatible), if not, there will be
no choice but to remove them from Debian.

As for networking-ovs-dpdk, it needs more work on OVS itself to support
dpdk, and I still haven't found the time for it yet.

As a more general thing, it'd be nice if there was Python 3.6 in the
gate. Hopefully, this will happen with Bionic release and the infra
switching to it. It's been a reoccurring problem though, that Debian Sid
is always experiencing issues before the other distros (ie: before
Ubuntu, for example), because it gets updates first. So I'd really love
to have Sid as a possible image in the infra, so we could use it for
(non-voting) gate.

2/ New debconf unified templates

The Debconf templates used to be embedded within each packages. This
isn't the case anymore, all of them are now stored in
openstack-pkg-tools if they are not service specific. Hopefully, this
will help having a better coverage for translations. The postinst
scripts can also optionally create the service tenant and user
automatically. The system also does less by default (ie: it wont even
read your configuration files if the user doesn't explicitly asks for
config handling), API endpoint can now use FQDN and https as well.

3/ New packages/services

We've added Cloudkitty and Vitrage. Coming soon: Octavia and Vitrage.

Unfortunately, at this point, cloudkitty-dashboard still contains
non-free files (ie: embedded minified javascripts). Worse, some of them
cannot even be identified (I couldn't find out what version from
upstream it was). So even if this package is ready, I can't upload it to
Debian in such state.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage

[openstack-dev] [manila][ptg] Rocky PTG summary

2018-03-14 Thread Tom Barron

We had a good showing [1] at the Rocky PTG in Dublin.  Most of us see
each other face-to-face rarely and we had some (even long time)
contributors come to the PTG for the first time or join manila from
other projects!  We had a good time together [2], took on some tough
subjects, and planned out our approach to Rocky.

The following summarizes our main discussions.  For the raw
discussion topic/log etherpad see [3] or for video of the team in
action see [4].  This summary has also been rendered in this
etherpad:

https://etherpad.openstack.org/p/manila-rocky-ptg-summary

Please follow up in the etherpad with corrections or additions,
especially where we've missed a perspective or interpretation.

== Queens Retrospective ==

Summary [5] shows focus on maintaining quality and integrity of the
project while at the same time seeking ways to encourage developer
participation, new driver engagement, and adoption of manila in real
deployments.

== Rocky Schedule ==

- We'll keep the same project specific deadlines as Queens:
* Spec freeze at Rocky-1 milestone
* New Driver Submission Freeze at Rocky-2 milestone
* Feature Proposal Freeze at release-7 week
 (two weeks before Rocky-3 milestone)

== Cross Project Goals ==

- Manila met the queens goals (policy in code [6] and split of tempest
 into its own repos [7]).
- For Rocky mox removal goal [8] we have no direct usage of mox
 anymore but need to track the transitive dependency of the manila-ui
 plugin on mox via horizon [9]
- We have already met the minimum Rocky mutable configuration goal [10]
 in that we have general support for toggle of debug logging without
 restart.  We agreed that additional mutable configuration options
 should be proposed on a case-by-case basis, with use-cases and
 supporting arguments to the effect that they are indeed safe to be
 treated as mutable.
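
 A minimal sketch of the mutable-option pattern referred to above, using
 oslo.config (illustrative only, not actual manila code; the option name
 here is hypothetical):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    # mutable=True marks the option as safe to change at runtime
    cfg.IntOpt('example_poll_interval', default=60, mutable=True),
])

def on_sighup():
    # Re-reads the configuration files and applies changes, but only
    # for options that were registered with mutable=True.
    CONF.mutate_config_files()

 The idea is that an operator edits the config file and sends SIGHUP;
 only options explicitly marked mutable are re-read, which is why we want
 a case-by-case argument that a given option really is safe to change on
 the fly.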

== Documentation Gaps ==

- amito's experience introducing the new Infinidat driver in Queens
 shows significant gaps in our doc for new drivers
- jungleboyj proposed that cinder will clean up its onboarding doc
 including its wiki for how to contribute a driver [11]
- amito will work with the manila community to port over this information
 and identify any remaining gaps
- patrickeast will be adding a Pure back end in Rocky and can help
 identify gaps
- we agreed to work with cinder to drive consistency in 'Contributor
 Guide' format and subject matter.

== Python 3 ==

- Distros are dropping support for python 2, completely, between now
  and 2020, so OpenStack projects need to start getting ready now
 [12]

- Our main exposure is in manila-ui where we still run unit tests with
 python 2 only
- Also need to add a good set of python 3 tempest tests for manila proper
- CentOS jobs will need to be replaced with stable Fedora jobs
- vkmc will drive this; overall goal may take more than one release

== NFSExportsHelper ==

- bswartz has a better implementation
- in discussion he developed a preliminary plan for migrating users
 from the old to the new implementation
- impacts generic and lvm drivers, arguably reference only
- bswartz will communicate any impact to openstack-dev, openstack-operators,
 and openstack-users mailing lists

== Quota Resource Usage Tracking ==

- we inherited our reservation/commit/rollback system from Cinder who
 in turn took theirs from Nova
- it is buggy, making reservations in one service and doing commit/rollback
 in scattered places in another service.  Customer bugs with quotas 
 are painful and confidence that they are actually fixed is low.

- melwitt and dansmith explained how Nova has now abandoned this
 system in favor of actual resource counting in the api service
- we intend to explore the possibility of implementing a similar system
 as the new Nova approach; cinder is exploring this as well
- can be implemented as bug fixes if it's clean and easy to understand
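
 As a rough illustration of the "counting" model (not manila or nova
 code; the names here are made up), the API checks actual usage against
 the limit at request time instead of maintaining separate reservation
 and usage records:

class OverQuota(Exception):
    pass

def check_quota(current_usage, limit, requested=1):
    """Raise if creating `requested` more resources would exceed `limit`."""
    if current_usage + requested > limit:
        raise OverQuota("usage %d + requested %d exceeds limit %d" %
                        (current_usage, requested, limit))

# Usage: count the project's shares directly from the database at the
# time of the API request, then call check_quota(count, project_limit).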

== Replacing rootwrap with privsep ==

- What's in it for manila?
- Nova says it improves performance; Cinder says it harms performance :)
- It serializes operations so the performance impact depends on how long
 the elevated privilege operations run.
- We need to study our codebase more to understand impact; not a Rocky
 goal for us to implement this.
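
 For reference, this is roughly what the privsep pattern looks like with
 oslo.privsep -- a sketch only, with a made-up context name and config
 section, not manila's actual code:

from oslo_concurrency import processutils
from oslo_privsep import capabilities, priv_context

example_pctxt = priv_context.PrivContext(
    'example',                       # hypothetical prefix
    cfg_section='example_privsep',   # hypothetical config section
    pypath=__name__ + '.example_pctxt',
    capabilities=[capabilities.CAP_SYS_ADMIN],
)

@example_pctxt.entrypoint
def mount(device, mountpoint):
    # Runs inside the privileged daemon.  Calls into the daemon are
    # serialized, which is where the performance question comes from.
    processutils.execute('mount', device, mountpoint)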

== Huawei proposal to support more access rule attributes ==

* access levels like all_squash / no all_squash
 Most but not all vendors can support these.
 We agreed that although opaque metadata on access rules _could_ be
 used to allow manila forks to implement such support opaquely to
 manila proper, this is a generally useful characteristic, not
 something only useful for Huawei private cloud.  So it should be
 implemented using new public extra specs and back end capability
 checking in the scheduler in order to avoid error cases with back
 ends that cannot support the capabilities in question.
* ordering semantics for access rules to disambiguate rule sets
 where incompatible access modes (like r/w and r/o) are applied
 to the same 

[openstack-dev] [manila] [ptg] queens retrospective at the rocky ptg

2018-03-13 Thread Tom Barron

At the Dublin PTG the manila team did a retrospective on the
Queens release -- raw etherpad here [1].

We summarize it here to separate it from the otherwise
forward-looking manila PTG summary (coming soon).

# Keep Doing #

- queens bug smashes, especially the Wuhan bug smash [2]
  + (mostly new) contributors from 5+ companies including
99cloud, fibrehome, chinamobile, H3c, easystack, Huawei
- bug czar role & meeting slot [3]
- trying to tag even earlier than deadlines

# Do less of / Stop Doing #

- blind rechecks
  + reviewers need to pay attention to recheck history and
push back on merges that ignore intermittent issues
  + even if the issue is unrelated to the patch we should have
an associated bug
- letting reviews languish, especially for contributors outside
US timezones
  + need more systematic approach (see suggestions below) to
review backlog

# Do More Of #

- Root cause analysis and fixing of failing tests in gate
- Groom bug lists, especially before bug smashes
- Make contributor's guide with review checklists
- Use mail-list more for asynch communication across timezones
- Help people use IRC bouncers
- Keep etherpad/dashboards for pending reviews
- Improve docs for new driver contributors

# Action Items #

- Tom will develop etherpad for priority reviews
- Ben will share past review etherpads / gerrit dashboards
(Done: see raw etherpad [3] but Tom will work to get these
unified and findable)
- Ganso will create etherpad to collaborate on reviewer/contributor
  checklists
- Tom will check how Cinder fixed log filtering problem

-- Tom Barron

[1] https://etherpad.openstack.org/p/manila-ptg-rocky-retro
[2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan
[3] https://etherpad.openstack.org/p/manila-bug-triage-pad


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] [summit] Forum topic proposal etherpad

2018-03-12 Thread Tom Barron
Please add proposed topics for manila to this etherpad [1] for the 
Vancouver Forum.  In a couple weeks we'll use this list to submit 
abstracts for the next stage of the process [2].


As a reminder, the Forum is the part of the Summit conference 
dedicated to open discourse among operators, developers, users -- all 
who have a vested interest in design and planning of the future of
OpenStack [3]. I've added a few topics to prime the pump.   



[1] https://etherpad.openstack.org/p/YVR-manila-brainstorming
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html

[3] quoting 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128180.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [quotas] [cyborg]Dublin Rocky PTG Summary

2018-03-12 Thread Tom Barron

Just a remark below w.r.t. quota support in some other projects fwiw.

On 09/03/18 15:46 +0800, Zhipeng Huang wrote:

Hi Team,

Thanks to our topic leads' efforts, below is the aggregated summary from
our dublin ptg session discussion. Please check it out and feel free to
feedback any concerns you might have.



< -- snip -- >


Quota and Multi-tenancy Support
Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-quota
Slide:
https://docs.google.com/presentation/d/1DUKWW2vgqUI3Udl4UDvxgJ53Ve5LmyaBpX4u--rVrCc/edit?usp=sharing

1. Provide project and user level quota support
2. Treat all resources as the reserved resource type
3. Add quota engine and quota driver for the quota support
4. Tables: quotas, quota_usage, reservation
5. Transactions operation: reserve, commit, rollback

  - Concerns on rollback


  - Implement a two-stage resevation and rollback


  - reserve - commit - rollback (if failed)



Note that cinder and manila followed the nova implementation of a two-stage
reservation/commit/rollback model but the resulting system has been buggy.
Over time, the quota system's notion of resource usage gets out of sync with
actual resource usage.

Nova has since dropped the reserve/commit/rollback model [0] and cinder and
manila are considering making a similar change.

Currently we create reservation records and update quota usage in
the API service and then remove the reservation records and update
quota usage in another service at commit or rollback time, or on reservation
timeout. Nova now avoids the double bookkeeping of resource usage and 
the need to update these records correctly across separate services by 
directly checking resource counts in the api at the time requests are 
received. If we can do the same thing in cinder and manila a whole 
class of tough, recurrent bugs can be eliminated.
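
Schematically, the two-stage flow looks like this (hedged: the method
names follow the manila/cinder QuotaEngine style, but the surrounding
code and the schedule_share_creation() helper are made up for
illustration):

from manila import quota

QUOTAS = quota.QUOTAS

def schedule_share_creation(context, size):
    # Stand-in for the real work of creating and scheduling the share.
    ...

def create_share(context, size):
    # Stage 1: record a reservation and bump cached usage in the API service.
    reservations = QUOTAS.reserve(context, shares=1, gigabytes=size)
    try:
        share = schedule_share_creation(context, size)
    except Exception:
        # Stage 2a: roll back -- often happens in a different service.
        QUOTAS.rollback(context, reservations)
        raise
    # Stage 2b: commit turns the reservation into recorded usage.
    QUOTAS.commit(context, reservations)
    return share

Any path that misses the commit or rollback step leaves stale
reservations and cached usage behind, which is how the drift described
above creeps in.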


The main concern expressed thus far with this "resource counting"
approach is that there may be some negative performance impact since
the current approach provides cached usage information to the api
service.  As you can see here [1] there probably is not yet agreement on the
degree of performance impact but there does seem to be agreement that we need
first to get a quota system that is correct and reliable, then 
optimize for performance as needed.


Best regards,

-- Tom Barron

[0] 
https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/cells-count-resources-to-check-quota-in-api.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128108.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] no weekly meeting March 8

2018-03-08 Thread Tom Barron

Let's skip the manila weekly meeting March 8 since
people are still catching up after travel delays
or still travelling (/me confesses) and the weekly
agenda shows no new non-recurring additions.

We'll plan on meeting as normal at 1500 UTC March 15
in #openstack-meeting-alt.  Add agenda items here [1].

Cheers,

-- Tom Barron

[1] https://wiki.openstack.org/wiki/Manila/Meetings



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] no meeting March 1

2018-02-28 Thread Tom Barron
Just a quick reminder that there will be *no* weekly manila team 
meeting Thursday March 1 since many folks are busy at PTG.


Cheers,

-- Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] queens retrospective as we kickoff Rocky

2018-02-22 Thread Tom Barron
We'll start the manila meetings Tuesday with a retrospective on the 
Queens cycle.


Whether you'll be at the PTG or not, please add your thoughts to the 
retrospective etherpad [1] so we can discuss them and figure out how 
to continuously improve.  Please tag items with your nick.  We'll make 
sure to follow up post-PTG in our weekly meetings on items with owners 
or stakeholders not actually present at the PTG.


Thanks!

-- Tom

[1] https://etherpad.openstack.org/p/manila-ptg-rocky-retro


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] PTG schedule and social event

2018-02-15 Thread Tom Barron

Manila sessions at the PTG are scheduled for Tuesday and Friday.

Ben Swartzlander and I worked together to distribute topics across the two
days and you can see the results here:

https://etherpad.openstack.org/p/manila-rocky-ptg

We'll of course end up shifting topics and times around as required, but this
will be our starting point.  So look it over and if anything has a time slot
that just won't work for you, let me know and we'll likely be able to adjust.

Also, if you have a topic and it's not on the etherpad, add it under
"Proposed Topics" and give me a ping.

Finally, we're planning a social event for Tuesday evening, so stay tuned for
more details.  If you think you can join us, please add your name to the
etherpad above under the "Team Dinner Planned" section.

-- Tom Barron



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][python3] python3 readiness?

2018-02-14 Thread Tom Barron

On 13/02/18 16:53 -0600, Ben Nemec wrote:



On 02/13/2018 01:57 PM, Tom Barron wrote:
Since python 2.7 will not be maintained past 2020 [1] it is a 
reasonable conjecture that downstream distributions
will drop support for python 2 between now and then, perhaps as 
early as next year.


I'm not sure I agree.  I suspect python 2 support will not go quietly 
into that good night.  Personally I anticipate a lot of kicking and 
screaming right up to the end, especially from change averse 
enterprise users.


But that's neither here nor there.  I think we're all in agreement 
that python 3 support is needed. :-)


Yeah, but you raise a good issue.  How likely is it that EL8 will 
choose -- perhaps under duress -- to support both python 2 and python 
3 in the next big downstream release?  If this is done long enough 
that we can support TripleO deployments on CentOS 8 using python2 
while at the same time testing TripleO deployments on CentOS using 
python3 then TripleO support for Fedora wouldn't be necessary.


Perhaps this question is settled, perhaps it is open.  Let's try to 
nail down which for the record.




In Pike, OpenStack projects, including TripleO, added python 3 unit 
tests.  That effort was a good start, but likely we can agree that 
it is *only* a start to gaining confidence that real life TripleO 
deployments will "just work" running python 3.  As agreed in the 
TripleO community meeting, this email is intended to kick off a 
discussion in advance of PTG on what else needs to be done.


In this regard it is worth observing that TripleO currently only 
supports CentOS deployments and CentOS won't have python 3 support 
until RHEL does, which may be too late to test deploying with 
python3 before support for python2 is dropped.  Fedora does have 
support for python 3 and for this reason RDO has decided [2] to 
begin work to run with *stabilized* Fedora repositories in the Rocky 
cycle, aiming to be ready on time to migrate to Python 3 and support 
its use in downstream and upstream CI pipelines.


So that means we'll never have Python 3 on CentOS 7 and we need to 
start supporting Fedora again in order to do functional testing on 
py3? That's potentially messy.  My recollection of running TripleO CI 
on Fedora is that it was, to put it nicely, a maintenance headache.  
Even with the "stabilized" repos from RDO, TripleO has a knack for 
hitting edge case bugs in a fast-moving distro like Fedora.  I guess 
it's not entirely clear to me what the exact plan is since there's 
some discussion of frozen snapshots and such, which might address the 
fast-moving part.


It also means more CI jobs, unless we're okay with dropping CentOS 
support for some scenarios and switching them to Fedora.  Given the 
amount of changes between CentOS 7 and current Fedora that's a pretty 
big gap in our testing.


I guess if RDO has chosen this path then we don't have much choice.  
As far as next steps, the first thing that would need to be done is to 
get TripleO running on Fedora again.  I suggest starting with https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377 
:-)


-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][python3] python3 readiness?

2018-02-13 Thread Tom Barron
Since python 2.7 will not be maintained past 2020 [1] it is a 
reasonable conjecture that downstream distributions
will drop support for python 2 between now and then, perhaps as early 
as next year. 

In Pike, OpenStack projects, including TripleO, added python 3 unit 
tests.  That effort was a good start, but likely we can agree that it 
is *only* a start to gaining confidence that real life TripleO 
deployments will "just work" running python 3.  As agreed in the 
TripleO community meeting, this email is intended to kick off a 
discussion in advance of PTG on what else needs to be done.


In this regard it is worth observing that TripleO currently only 
supports CentOS deployments and CentOS won't have python 3 support 
until RHEL does, which may be too late to test deploying with python3 
before support for python2 is dropped.  Fedora does have support for 
python 3 and for this reason RDO has decided [2] to begin work to run 
with *stabilized* Fedora repositories in the Rocky cycle, aiming to be 
ready on time to migrate to Python 3 and support its use in downstream 
and upstream CI pipelines.


-- Tom Barron


[1] https://pythonclock.org/
[2] https://lists.rdoproject.org/pipermail/dev/2018-February/008542.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] [ptl] announcing PTL candidacy for manila

2018-02-07 Thread Tom Barron

Friends, Stackers, Community,

I write to announce my candidacy for the Manila PTL position for the
Rocky cycle.

I've worked in OpenStack since Juno and actively in Manila since
Mitaka or so.  I've had more than one employer in that time and think
it's fair to say that I have a reputation for working upstream in the
interests of the community.  I am one of the more active Manila core
reviewers, care about welcoming and engaging new contributors,
encouraging participation, and at the same time preserving code quality
and the integrity of the project.

Ben Swartzlander is moving on to do other cool stuff, including work
as a Manila contributor.  I expect that I share a rather general
perception that no one can fill his shoes as PTL.  That said, I do
think that if we work together to make Manila shine we can make it
truly awesome!

Some areas I'd like us to work on in the near future include:

* python 3 support.  Upstream python 2 support is going away in 2020
 if I understand correctly and between now and then distros are
 likely to drop support for it.  We need to do our part to get manila
 working with python 3 in devstack, and also with python 3 when
 deployed at scale via frameworks like kolla, charms, and TripleO.

* performance and scale. We learned recently that Huawei public cloud
  runs manila with thousands of shares and that CERN is planning to
  move from 83 shares to over 2000 shares.  Let's get more success
 stories with more back ends, build a common understanding of any
 bottlenecks, and work plans to address these.

* side-by-side deployment with kubernetes and other clouds.  Whether
 running kubernetes on OpenStack, deploying OpenStack services with
 kubernetes, or building standalone software defined storage with
 manila and cinder without other OpenStack services, this is a space
  where we need to explore and be actively engaged.

* production quality open source software defined back ends.  Manila
 has great proprietary storage back ends, but shouldn't we have open
 source back ends that work reliably at scale as well?  We could make
 the generic driver great in this regard, or build out distributed
 file system back ends like cephfs with good data path HA and tenant
 separation.  There are perhaps other alternatives that haven't
 surfaced yet.  There's a lot of room here for innovation and
 certainly demand from cloud operators on this front.

* vendor participation: we have a mix of vendors introducing new
 back ends, sustained participation from vendors with existing
 back ends, and some back ends that no longer have attention from
 their vendors even though -- working with a distro -- I see customers
 indicating that they *want* to use those back ends if only the
 vendors were engaged!  Let's welcome new vendors with open arms
 and help all understand the mutual benefit of remaining involved
 with manila as the community evolves and grows.

Those are some of my ideas.  I offer them as much as anything to
stimulate others working on manila to come to PTG and the Rocky cycle
with their own initiatives.

Also, if you haven't been working in manila and any of the above seems
interesting (or just nuts) come on over!  Manila is a great place to
contribute and innovate!

Thanks for listening.

-- Tom Barron (tbarron)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE nfs_ganesha integration

2018-02-04 Thread Tom Barron
Just to follow up, CI is passing for the three patches outstanding
and the last one has a release note for the overall feature.  The trick to
getting CI to pass was to introduce a new variant Controller
role for when we actually deploy with CephNFS and the VIP for the
server on the StorageNFS network.  Using the variant controller role and
'-n'
with network_data_ganesha.yaml (1) enables the new feature to
work correctly while (2) making the new feature entirely optional so
that current CI runs without being affected by it.

I think the three outstanding patches here are ready to merge:

https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha

I want to get them in so they'll show in downstream puddles for QE but
my full attention will immediately turn to upstream TripleO CI and doc for
this
new functionality.  In that regard I *think* we'll need Dan Sneddon's work
here:

https://review.openstack.org/#/c/523638

so that actual deployment of the StorageNFS network doesn't have to
involve copying and editing

network/config/*/{ceph,compute,controller}/.yaml

as done in the DNM patch that I've used for testing actual integration of
the
feature here:

https://review.openstack.org/533767

All said, this one seems to be a good poster child for composable roles +
composable networks!

-- Tom Barron


On Tue, Jan 23, 2018 at 2:48 PM, Emilien Macchi <emil...@redhat.com> wrote:

> I agree this would be a great addition but I'm worried about the
> patches which right now don't pass the check pipeline.
> Also I don't see any release notes explaining the changes to our users
> and it's supposed to improve user experience...
>
> Please add release notes, make CI passing and we'll probably grant it for
> FFE.
>
> On Mon, Jan 22, 2018 at 8:34 AM, Giulio Fidente <gfide...@redhat.com>
> wrote:
> > hi,
> >
> > I would like to request an FFE for the integration of nfs_ganesha, which
> > will provide a better user experience to manila users
> >
> > This work was slowed down by a few factors:
> >
> > - it depended on the migration of tripleo to the newer Ceph version
> > (luminous), which happened during the queens cycle
> >
> > - it depended on some additional functionalities to be implemented in
> > ceph-ansible which were only recently been made available to tripleo/ci
> >
> > - it proposes the addition of on an additional (and optional) network
> > (storagenfs) so that guests don't need connectivity to the ceph frontend
> > network to be able to use the cephfs shares
> >
> > The submissions are on review and partially testable in CI [1]. If
> accepted,
> > I'd like to reassign the blueprint [2] back to the queens cycle, as it
> was
> > initially.
> >
> > Thanks
> >
> > 1. https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha
> > 2. https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
> > --
> > Giulio Fidente
> > GPG KEY: 08D733BA
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-19 Thread Tom Barron
A big plus 1 from me!  Zhong as core is well deserved and exactly what our
project needs!


On Nov 19, 2017 6:31 PM, "Ravi, Goutham" 
wrote:

> Hello Manila developers,
>
>
>
> I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on gerrit)
> to be part of the Manila core team. Zhongjun has been an important member
> of our community since the Kilo release, and has, in the past few releases
> made significant contributions to the constellation of projects related to
> openstack/manila [1]. She is also our ambassador in the APAC
> region/timezones. Her opinion is valued amongst the core team and I think,
> as a core reviewer and maintainer, she would continue to help grow and
> maintain our project.
>
>
>
> Please respond with a +1/-1.
>
>
>
> We will not be having an IRC meeting this Thursday (23rd November 2017),
> so if we have sufficient quorum, PTL extraordinaire, Ben Swartzlander will
> confirm her nomination here.
>
>
>
> [1] http://stackalytics.com/?user_id=jun-zhongjun=all;
> metric=person-day
>
>
>
> Thanks,
>
> Goutham
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] proposal to remove hdfs job from check queue

2017-11-03 Thread Tom Barron
The manila driver for the hdfs back end has no current maintainer but runs
as a non-voting job in the check queue and has been failing 100% of the
time for several months.

We propose to remove it from the check queue via

https://review.openstack.org/#/c/517647/

if there are no objections registered by Monday, November 13.  The job
will remain defined for use in the devstack-hdfs-plugin as long as that
repo remains.

If you have a reason not to do this, please vote in the review with an
explanation.  Best explanation would be that you want to take ownership
of the driver and have a fix in mind!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] proposal to remove glusterfs-native job from check queue

2017-11-03 Thread Tom Barron
The manila driver for the glusterfs native back end has no current
maintainer but runs as a non-voting job in the check queue and has been
failing 100% of the time for several months.

We propose to remove it from the check queue via

https://review.openstack.org/#/c/517663/

if there are no objections registered by Monday, November 13.  The job
will remain defined for use in the devstack-glusterfs-plugin as long as
that repo remains.

If you have a reason not to do this, please vote in the review with an
explanation.  Best explanation would be that you want to take ownership
of the driver and have a fix in mind!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-22 Thread Tom Barron


On 09/22/2017 08:34 PM, Zhipeng Huang wrote:
> Hi Paul,
> 
> Unfortunately I know better on this matter and it is not the matter of
> topic dispute as many people on this thread who has been disturbed and
> annoyed by the padding/trolling.
> 
> So yes I'm sticking with stupid because it hurts the OpenStack community
> as a whole and hurts the reputation of the dev community from my country
> which in large are great people with good hearts and skills.
> 
> I'm not giving even an inch of the benefit of doubt to these padding
> activities and people behind it.
> 

I don't want to be naive here: humans tend to stereotype and generalize
in just the way you are talking about.  Some say it's the inherited
"fight or flight" part of our brains, with its heuristics based on
survival from when we lived in packs and tribes, that causes us to
override the systematic, analytic reasoning parts which, when we use
them, show the statistical invalidity of "reasoning" from a few bad
actors to larger populations.

But I do hope that we in the OpenStack community are building not just
software but a way of doing things so that generalizations about nations
and peoples do not get made because of unhelpful behavior on the part of
a few, however active or prominent they may be.  A big part of why I
like working in this community is that we are learning together not just
how to build better software but also how to work in common purpose
across timezones and cultures based on a willingness to assume good will
as a starting point, to share information, and treat one another fairly.

So I'm still for (1) some published boilerplate that reviewers can point
to without blaming anyone or speculatively attributing motive, and (2)
outreach of the sort that Doug Hellmann advocated in cases where #1
doesn't seem sufficient.  Part of that outreach might involve getting an
understanding of what the parties involved *think* is being gained by
unhelpful patches, making sure that OpenStack does not reward or
reinforce this behavior (like blindly looking at Stackalytics, if that
does indeed happen), and effectively communicating how the unhelpful
behavior does not pay off in our community.

> 
> On Sat, Sep 23, 2017 at 8:16 AM, Paul Belanger  > wrote:
> 
> On Fri, Sep 22, 2017 at 10:26:09AM +0800, Zhipeng Huang wrote:
> > Let's not forget the epic fail earlier on the "contribution.rst fix" 
> that
> > almost melt down the community CI system.
> >
> > For any companies that are doing what Matt mentioned, please be aware 
> that
> > the dev community of the country you belong to is getting hurt by your
> > stupid activity.
> >
> > Stop patch trolling and doing something meaningful.
> >
> Sorry, but I found this comment over the line. Just because you
> disagree with
> the $topic at hand, doesn't mean you should default to calling it
> 'stupid'. Give
> somebody the benefit of not knowing any better.
> 
> This is not a good example of encouraging anybody to contribute to
> the project.
> 
> -Paul
> 
> > On Fri, Sep 22, 2017 at 10:21 AM, Matt Riedemann
> >
> > wrote:
> >
> > > I just wanted to highlight to people that there seems to be a
> series of
> > > garbage patches in various projects [1] which are basically
> doing things
> > > like fixing a single typo in a code comment, or very narrowly
> changing http
> > > to https in links within docs.
> > >
> > > Also +1ing ones own changes.
> > >
> > > I've been trying to snuff these out in nova, but I see it's
> basically a
> > > pattern widespread across several projects.
> > >
> > > This is the boilerplate comment I give with my -1, feel free to
> employ it
> > > yourself.
> > >
> > > "Sorry but this isn't really a useful change. Fixing typos in code
> > > comments when the context is still clear doesn't really help us,
> and mostly
> > > seems like looking for padding stats on stackalytics. It's also
> a drain on
> > > our CI environment.
> > >
> > > If you fixed all of the typos in a single module, or in user-facing
> > > documentation, or error messages, or something in the logs, or
> something
> > > that actually doesn't make sense in code comments, then maybe,
> but this
> > > isn't one of those things."
> > >
> > > I'm not trying to be a jerk here, but this is annoying to the
> point I felt
> > > the need to say something publicly.
> > >
> > > [1] https://review.openstack.org/#/q/author:%255E.*inspur.*
> 
> > >
> > > --
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > >
> __
> > > OpenStack 

Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-22 Thread Tom Barron


On 09/22/2017 09:26 AM, Matt Riedemann wrote:
> On 9/22/2017 7:10 AM, Tom Barron wrote:
>> FWIW I think it is better not to attribute motivation in these cases.
>> Perhaps the code submitter is trying to pad stats, but perhaps they are
>> just a new contributor trying to learn the process with a "harmless"
>> patch, or just a compulsive clean-upper who hasn't thought through the
>> costs in reviewer time and CI resources.
> 
> I agree. However, the one that set me off last night was a person from
> one company who I've repeatedly -1ed the same types of patches in nova
> for weeks, including on stable branches, and within 10 minutes of each
> other across several repos, so it's clearly part of some daily routine.
> That's what prompted me to send something to the mailing list.
> 

Yup :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-22 Thread Tom Barron


On 09/21/2017 10:21 PM, Matt Riedemann wrote:
> I just wanted to highlight to people that there seems to be a series of
> garbage patches in various projects [1] which are basically doing things
> like fixing a single typo in a code comment, or very narrowly changing
> http to https in links within docs.
> 
> Also +1ing ones own changes.
> 
> I've been trying to snuff these out in nova, but I see it's basically a
> pattern widespread across several projects.
> 
> This is the boilerplate comment I give with my -1, feel free to employ
> it yourself.
> 
> "Sorry but this isn't really a useful change. Fixing typos in code
> comments when the context is still clear doesn't really help us, and
> mostly seems like looking for padding stats on stackalytics. It's also a
> drain on our CI environment.
> 
> If you fixed all of the typos in a single module, or in user-facing
> documentation, or error messages, or something in the logs, or something
> that actually doesn't make sense in code comments, then maybe, but this
> isn't one of those things."
> 
> I'm not trying to be a jerk here, but this is annoying to the point I
> felt the need to say something publicly.
> 
> [1] https://review.openstack.org/#/q/author:%255E.*inspur.*
> 

The boilerplate is helpful but have we considered putting something
along these lines in official documentation so that reviewers can just
point to it? It should then be clear to all that negative reviews on
these grounds are not simply a function of the individual reviewer's
judgment or personality.

FWIW I think it is better not to attribute motivation in these cases.
Perhaps the code submitter is trying to pad stats, but perhaps they are
just a new contributor trying to learn the process with a "harmless"
patch, or just a compulsive clean-upper who hasn't thought through the
costs in reviewer time and CI resources.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Proposing TommyLikeHu for Cinder core

2017-07-25 Thread Tom Barron
On 07/25/2017 04:07 AM, Sean McGinnis wrote:
> I am proposing we add TommyLike as a Cinder core.
> 
> DISCLAIMER: We work for the same company.
> 
> I have held back on proposing him for some time because of this conflict. But
> I think from his number of reviews [1] and code contributions [2] it's
> hopefully clear that my motivation does not have anything to do with this.
> 
> TommyLike has consistently done quality code reviews. He has contributed a
> lot of bug fixes and features. And he has been available in the IRC channel
> answering questions and helping out, despite some serious timezone
> challenges.
> 
> I think it would be great to add someone from this region so we can get more
> perspective from the APAC area, as well as having someone around that may
> help as more developers get involved in non-US and non-EU timezones.
> 
> Cinder cores, please respond with your opinion. If no reason is given to do
> otherwise, I will add TommyLike to the core group in one week.
> 
> And absolutely call me out if you see any in bias in my proposal.
> 
> Thanks,
> Sean
> 
> [1] http://stackalytics.com/report/contribution/cinder-group/90
> [2] 
> https://review.openstack.org/#/q/owner:%22TommyLike+%253Ctommylikehu%2540gmail.com%253E%22++status:merged
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Not a cinder core but I work there from time to time and Tommy has
helped out with features and bugs that overlap with manila, where I am
more active these days.

+1

I really like the way Tommy spends time to understand -1s and improve
code submissions rather than getting ego involved or just arguing like a
lawyer for the original code.  Good role model.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Feature proposal freeze exception request for change

2017-07-19 Thread Tom Barron


On 07/18/2017 04:33 PM, Ravi, Goutham wrote:
> Hello Manila reviewers,
> 
>  
> 
> It has been a few days past the feature proposal freeze, but I would
> like to request an extension for an enhancement to the NetApp driver in
> Manila. [1] implements a low-impact blueprint [2] that was approved for
> the Pike release. The code change is contained within the driver and
> would be a worthwhile addition to users of this driver in Manila/Pike.
> 
>  
> 
> [1] https://review.openstack.org/#/c/484933/
> 
> [2] https://blueprints.launchpad.net/openstack/?searchtext=netapp-cdot-qos
> 
>  
> 
> Thanks,
> 
> Goutham
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

+1

As mentioned, the BP was approved already.  The review is up and looks
sound, with effects limited to the NetApp backend.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Tom Barron


On 04/28/2017 05:54 PM, Monty Taylor wrote:
> Hey everybody!
> 
> There isn't a ton more to say past the subject. The "R" release of
> OpenStack shall henceforth be known as "Rocky".
> 
> I believe it's the first time we've managed to name a release after a
> community member - so please everyone buy RockyG a drink if you see her
> in Boston.

Deal!


> 
> For those of you who remember the actual election results, you may
> recall that "Radium" was the top choice. Radium was judged to have legal
> risk, so as per our name selection process, we moved to the next name on
> the list.
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] share server frameworks for DHSS=False case

2017-04-04 Thread Tom Barron
Question below, not just for Ben.  If you know, please respond.

On 04/03/2017 03:00 PM, Ben Swartzlander wrote:
> On 04/03/2017 02:24 PM, Tom Barron wrote:
...  ...

> While Manila doesn't care about 2 backends potentially sharing an IP,
> you do have to consider how the m-shr services interact with the daemon
> to avoid situations where they fight eachother. I haven't looked into
> the potential issues around sharing a Ganesha instance, but I know that
> the LVM driver, which uses nfs-kernel-server, does have some issues in
> this area which need fixing.

Do we know whether the concurrency issues with the LVM driver are due to
multiple backend processes messing with the LVM infra on a single host,
versus concurrency issues due to multiple operations in multiple threads
within a single backend?

Do we have any bugs raised on issues of either type?

Thanks,

-- Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] share server frameworks for DHSS=False case

2017-04-03 Thread Tom Barron
Thanks, Ben.

One somewhat tangential remark inline ...

On 04/03/2017 03:00 PM, Ben Swartzlander wrote:
> On 04/03/2017 02:24 PM, Tom Barron wrote:
>> We're building an NFS frontend onto the CephFS driver and
>> are considering the relative merits, for the DHSS=False case,
>> of following (1) the lvm driver model, and (2) the generic
>> driver model.
>>
>> With #1 export locations use a configured address in the backend,
>> e.g. lvm_share_export_ip and the exporting is done from the
>> host itself.
>>
>> With #2 export locations use address from (typically at least)
>> a floating-IP assigned to a service VM.  The service VM must
>> be started up externally to manila services themselves -
>> e.g. by devstack plugin, tripleo, juju, whatever - prior to
>> configuring the backend in manila.conf.
>>
>> I lean towards #1 because of its relative simplicity, and
>> because of its smaller resource footprint in devstack gate, but want
>> to make sure that I'm not missing some critical limitations.
>> The main limitation that occurs to me is that multiple backends
>> of the same type, both DHSS=False - so think of London and Paris
>> with the lvm jobs in gate - will typically have the same export
>> location IP.  They'll effectively have the same share server
>> as long as they run from a manila-share service on a single
>> host.  Offhand, that doesn't seem a show-stopper.
>>
>> Am I missing something important about that limitation, and are
>> there other issues that I should think about?
> 
> I think either #1 or #2 could work but #1 will be simpler should have a
> smaller resource footprint as you point out.
> 
> There's no rule that says that 2 different backends can't share an IP
> address. Manila intentionally hides all concepts of "servers" from end
> users such that it's impossible to predict the IP address of the server
> that will host the share until after the share is created and the export
> location(s) are filled in by the backend. Typically backends fully own 1
> or more IPs and it's just a question of whether the will be 1 export
> location or more, but we left this up to the implementer for maximum
> flexibility.
> 
> If 2 backends were to share an IP address then they would need to avoid
> conflicting NFS export locations with some kind of namespacing scheme,
> but a simple directory prefix would be good enough.
> 
> While Manila doesn't care about 2 backends potentially sharing an IP,
> you do have to consider how the m-shr services interact with the daemon
> to avoid situations where they fight eachother. I haven't looked into
> the potential issues around sharing a Ganesha instance, but I know that
> the LVM driver, which uses nfs-kernel-server, does have some issues in
> this area which need fixing.

Yeah, and our dsvm-minimal lvm gate job today uses two lvm backends,
London and Paris, each configured with the same value for
lvm_share_export_ip, namely that of the devstack HOST_IP.
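
For concreteness, that kind of two-backend setup looks roughly like the
following in manila.conf.  This is an illustrative sketch only; the
driver path and values are from memory, not copied from the gate job:

    [DEFAULT]
    enabled_share_backends = london,paris

    [london]
    share_driver = manila.share.drivers.lvm.LVMShareDriver
    share_backend_name = LONDON
    driver_handles_share_servers = False
    lvm_share_export_ip = 192.0.2.10

    [paris]
    share_driver = manila.share.drivers.lvm.LVMShareDriver
    share_backend_name = PARIS
    driver_handles_share_servers = False
    lvm_share_export_ip = 192.0.2.10

Both backends deliberately share the export IP here, which is exactly
the situation under discussion.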

> 
>> Thanks,
>>
>> -- Tom
>>
>> p.s. When I asked this question on IRC vponmaryov suggested that I look
>> at the zfsforlinux driver, which allows for serving shares from the host
>> but which also allows for running a share server on a remote host,
>> accessible via ssh.  The remote host could be an appliance or a service
>> VM.  At the moment I'm leaning in this direction as it allows one to run
>> with a simple configuration as in #1 but also allows for deployment with
>> multiple backends, each with their own share servers.
> 
> What Valeriy was referring to is the ability to run the m-shr service on
> a node other than where the NFS daemon resides. ZFS initially
> implemented this because we were interested in managing potentially
> non-Linux-based ZFS servers (like FreeBSD and Illumos) but we never
> pursued those options due to technical challenges, and we later gave up
> on supporting remote ZFS using SSH altogether.
> 
> -Ben
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] share server frameworks for DHSS=False case

2017-04-03 Thread Tom Barron
We're building an NFS frontend onto the CephFS driver and
are considering the relative merits, for the DHSS=False case,
of following (1) the lvm driver model, and (2) the generic
driver model.

With #1 export locations use a configured address in the backend,
e.g. lvm_share_export_ip and the exporting is done from the
host itself.

With #2 export locations use address from (typically at least)
a floating-IP assigned to a service VM.  The service VM must
be started up externally to manila services themselves -
e.g. by devstack plugin, tripleo, juju, whatever - prior to
configuring the backend in manila.conf.

I lean towards #1 because of its relative simplicity, and
because of its smaller resource footprint in devstack gate, but want
to make sure that I'm not missing some critical limitations.
The main limitation that occurs to me is that multiple backends
of the same type, both DHSS=False - so think of London and Paris
with the lvm jobs in gate - will typically have the same export
location IP.  They'll effectively have the same share server
as long as they run from a manila-share service on a single
host.  Offhand, that doesn't seem a show-stopper.

Am I missing something important about that limitation, and are
there other issues that I should think about?

Thanks,

-- Tom

p.s. When I asked this question on IRC vponmaryov suggested that I look
at the zfsforlinux driver, which allows for serving shares from the host
but which also allows for running a share server on a remote host,
accessible via ssh.  The remote host could be an appliance or a service
VM.  At the moment I'm leaning in this direction as it allows one to run
with a simple configuration as in #1 but also allows for deployment with
multiple backends, each with their own share servers.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Translations removal

2017-03-22 Thread Tom Barron


On 03/22/2017 11:44 AM, Sean McGinnis wrote:
> On Wed, Mar 22, 2017 at 08:42:42AM -0500, Kevin L. Mitchell wrote:
>> On Tue, 2017-03-21 at 22:10 +, Taryma, Joanna wrote:
>>> However, pep8 does not accept passing variable to translation
>>> functions,  so this results in ‘H701 Empty localization string’ error.
>>>
>>> Possible options to handle that:
>>>
>>> 1)  Duplicate messages:
>>>
>>> LOG.error("<message>", {<key>: <value>})
>>>
>>> raise Exception(_("<message>") % {<key>: <value>})
>>>
>>> 2)  Ignore this error
>>>
>>> 3)  Talk to hacking people about possible upgrade of this check
>>>
>>> 4)  Pass translated text to LOG in such cases
>>>
>>>  
>>>
>>> I’d personally vote for 2. What are your thoughts?
>>
>> When the translators go to translate, they generally only get to see
>> what's inside _(), so #2 is a no-go for translations, and #3 also is a
>> no-go.
>> -- 
> 
> I think the appropriate thing here is to do something like:
> 
> msg = _('<message>') % {<key>: <value>}
> LOG.error(msg)
> raise Exception(msg)
> 
> This results in a translated string going to the log, but I think that's
> OK.
>

Yeah, that is what we are starting to do going forwards unless
instructed otherwise.
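
For anyone following along, here is a minimal self-contained sketch of
that pattern.  The module imports and the message text are illustrative
assumptions on my part, not ironic's actual code:

    # Translate once, then reuse the same string for both the log message
    # and the exception so nothing gets formatted twice.
    from oslo_i18n import TranslatorFactory
    from oslo_log import log as logging

    _ = TranslatorFactory(domain='example').primary
    LOG = logging.getLogger(__name__)


    def fail_deploy(node_id):
        msg = _('Deployment of node %(node)s failed.') % {'node': node_id}
        LOG.error(msg)
        raise RuntimeError(msg)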


> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Final releases for stable/liberty and liberty-eol

2016-11-22 Thread Tom Barron


On 11/21/2016 06:38 AM, Alan Pevec wrote:
>> 1. Final stable/liberty releases should happen next week, probably by
>> Thursday 11/17.
> 
> I see only Cinder did it https://review.openstack.org/397282
> and Nova is under review https://review.openstack.org/397841
> 
> Is that all or other project are not aware Liberty is EOL now?
> 
> Cheers,
> Alan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


manila 1.0.2 has been released for Liberty EOL.

-- Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][cinder] [api] API and entity naming consistency

2016-11-17 Thread Tom Barron


On 11/17/2016 04:03 AM, Ravi, Goutham wrote:
> 
> 
> On>  11/16/16, 8:22 PM, "Ben Swartzlander"  wrote:
> 
> > > On 11/16/2016 11:28 AM, Ravi, Goutham wrote:
> > > + [api] in the subject to attract API-WG attention.
> > >
> > >
> > >
> > > We already have a guideline in the API-WG around resource names for 
> “_”
> > > vs “-“ -
> > > 
> https://specs.openstack.org/openstack/api-wg/guidelines/naming.html#rest-api-resource-names
> > > . With some exceptions (like share_instances that you mention), I see
> > > that we have implemented – across other resources.
> > >
> > > Body elements however, we prefer underscores, i.e, do not have body
> > > elements that follow CamelCase or mixedCase.
> > >
> > >
> > >
> > > My personal preference would be to retain “share-” in the resource
> > > names. As an application developer that has to integrate with block
> > > storage and shared file systems APIs, I would like the distinction if
> > > possible; because at the end of the day, the typical workflow for me
> > > would be:
> > >
> > > -  Get the endpoint from the catalog for the specific version 
> of
> > > the service API I want
> > >
> > > -  Append resource to endpoint and make my REST calls.
> > >
> > >
> > >
> > > The distinction in the APIs would ensure my code is readable. It would
> > > be interesting to see what the API working group prefers around this. 
> We
> > > have in the past realized that /capabilities could to be uniform 
> across
> > > services because it is expected to spew a bunch of strings to the user
> > > (warning: still under contention, see
> > > https://review.openstack.org/#/c/386555/) . However, there is a 
> mountain
> > > of a difference between the underlying intent of /share-networks and
> > > neutron’s /networks resources.
> > 
> > So you'd be in favor of renaming cinder's /snapshots URL to 
> > /volume-snapshots and manila's /snapshots URL to /share-snapshots?
> > 
> > I agree the explicitness is appealing, but we have to recognize that 
> the 
> > existing API has tons of implicitness in the names, and changing the 
> > existing API will cause pain no matter how well-intentioned the changes 
> are.
> > 
> 
> 
> No, I’m not in favor of renaming existing resources. I support the 
> explicitness
> in some if not all manila resources, share-networks, share-metadata, 
> share-servers,
> share-replicas. Renaming snapshots to /share-snapshots won’t fetch us 
> much but
> frustration. To Valeiry’s original question, I support /share-groups and 
> /share-group-types
> over /groups or /types. The roughly equivalent cinder resources are 
> /groups and 
>/group_types.

Agree.  And I wish that on the cinder review I had asked whether there
might ever be other types of groups needed in cinder besides volume
groups, the way I did in the manila review for /share-groups rather
than just /groups.

-- Tom


> 
> > > However, whatever we decide there, let’s not overload resources within
> > > the project, an explicit API will be appreciated for application
> > > development. share-types and group-types are not ‘types’ unless
> > > everything about these resources (i.e, database representation) are 
> the
> > > same and all HTTP verbs that you are planning to add correspond to 
> both.
> > >
> > >
> > >
> > > --
> > >
> > > Goutham
> > >
> > >
> > >
> > > *From: *Valeriy Ponomaryov 
> > > *Reply-To: *"OpenStack Development Mailing List (not for usage
> > > questions)" 
> > > *Date: *Wednesday, November 16, 2016 at 4:22 PM
> > > *To: *"OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > *Subject: *[openstack-dev] [manila][cinder] API and entity naming
> > > consistency
> > >
> > >
> > >
> > > For the moment Manila project, as well as Cinder, does have
> > > inconsistency between entity and API naming, such as:
> > >
> > > - "share type" ("volume type" in Cinder) entity has "/types/{id}" URL
> > >
> > > - "share snapshot" ("volume snapshot" in Cinder) entity has
> > > "/snapshots/{id}" URL
> > >
> > >
> > >
> > > BUT, Manila has other Manila-specific APIs as following:
> > >
> > >
> > >
> > > - "share network" entity and "/share-networks/{id}" API
> > >
> > > - "share server" entity and "/share-servers/{id}" API
> > >
> > >
> > >
> > > And with implementation of new features [1] it becomes a problem,
> > > because we start having
> > >
> > > "types" and "snapshots" for different things (share and share groups,
> > > share types and share group types).
>  

Re: [openstack-dev] [manila] Access key via the UI?

2016-11-03 Thread Tom Barron


On 11/03/2016 10:33 AM, Ravi, Goutham wrote:
> Ah. I’d let Tom or Ramana weigh in on that. IIRC, the access key is
> plaintext, so, I thought it can be displayed; that’s what the CLI is doing.
> 
> Reading the spec though, this was an intended part of the feature:
> http://specs.openstack.org/openstack/manila-specs/specs/newton/auth-access-keys.html

Thanks.  I could see going either way - download or display.  Interested
in what Victoria and Ramana think.

-- Tom

> 
> 
>  
> 
>  
> 
> *From: *Arne Wiebalck 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, November 3, 2016 at 10:15 AM
> *To: *"OpenStack Development Mailing List (not for usage questions)"
> 
> *Subject: *Re: [openstack-dev] [manila] Access key via the UI?
> 
>  
> 
> Hi Goutham,
> 
>  
> 
> It’s already being returned in the manila access-list API as of
> 2.21, if you’re using the latest python-manilaclient, you should
> have it there. However, it’s missing in the UI:
> 
>  
> 
> Yes, I was comparing what I can do on the CLI and what was offered with
> the UI.
> 
> 
> 
> 
> https://github.com/openstack/manila-ui/blob/c986a100eecf46af4b597cdcf59b5bc1edc2d1b0/manila_ui/dashboards/project/shares/shares/tables.py#L278
> 
>  
> 
> Are you suggesting to simply display it? (I was more thinking of a
> one-time download when giving access,
> 
> similar to what is offered for keys for instance creation.)
> 
> 
> 
> Could you open a bug?
> 
>  
> 
> Just did it:
> 
> https://bugs.launchpad.net/manila-ui/+bug/1638934
> 
>  
> 
> Thanks!
> 
>  Arne
> 
>  
> 
>  
> 
> 
> 
> On 11/3/16, 7:32 AM, "Arne Wiebalck"  > wrote:
> 
>Hi,
> 
>As cephx has been added as an access type in the dashboard and an
> access key can now
>be part of the API's response to access list requests, is it
> planned to have the access key to
>be returned when adding a cephx access rule via the UI (for
> instance similar to what ‘Create
>Key Pair’ in the instance panel does)? Couldn’t find any mention
> of such an activity.
> 
>Thanks!
> Arne
> 
>--
>Arne Wiebalck
>CERN IT
> 
>
> __
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>  
> 
> --
> Arne Wiebalck
> CERN IT
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Tom Barron
I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
manila core team.  This is a clear case where he's already been doing
the review work, excelling both qualitatively and quantitatively, as
well as being a valuable committer to the project.  Goutham deserves to
be core and we need the additional bandwidth for the project.  He's
treated as a de facto core by the community already.  Let's make it
official!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Tom Barron


On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
> Hi Valeriy,
> 
> I wasn’t aware, thanks! 
> 
> So, if each driver exposes the storage_protocols it supports, would it
> be sensible to have
> manila-ui check the extra_specs for this key and limit the protocol
> choice for a given
> share type to the supported protocols (in order to avoid that the user
> tries to create
> incompatible type/protocol combinations)?

Not necessarily tied to share types, but we have this bug open about
showing only the protocols that are actually available from the back
ends present in a given deployment:

https://bugs.launchpad.net/manila-ui/+bug/1622732
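
For what it's worth, wiring a share type to a protocol with that extra
spec looks roughly like this from the CLI (a sketch; the type name is a
placeholder and the protocol string must match what the back end
actually reports):

    manila type-create nfs-only false
    manila type-key nfs-only set storage_protocol=NFS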

-- Tom

> 
> Thanks again!
>  Arne
> 
> 
>> On 02 Nov 2016, at 10:00, Valeriy Ponomaryov > > wrote:
>>
>> Hello, Arne
>>
>> Each share driver has capability called "storage_protocol". So, for
>> case you describe, you should just define such extra spec in your
>> share type that will match value reported by desired backend[s].
>>
>> It is the purpose of extra specs in share types, you (as cloud admin)
>> define its connection yourself, either it is strong or not.
>>
>> Valeriy
>>
>> On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck > > wrote:
>>
>> Hi,
>>
>> We’re preparing the use of Manila in production and noticed that
>> there seems to be no strong connection
>> between share types and share protocols.
>>
>> I would think that not all backends will support all protocols. If
>> that’s true, wouldn’t it be sensible to establish
>> a stronger relation and have supported protocols defined per type,
>> for instance as extra_specs (which, as one
>> example, could then be used by the Manila UI to limit the choice
>> to supported protocols for a given share
>> type, rather than maintaining two independent and hard-coded tuples)?
>>
>> Thanks!
>>  Arne
>>
>> --
>> Arne Wiebalck
>> CERN IT
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> -- 
>> Kind Regards
>> Valeriy Ponomaryov
>> www.mirantis.com 
>> vponomar...@mirantis.com 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> --
> Arne Wiebalck
> CERN IT
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-18 Thread Tom Barron


On 10/18/2016 03:56 PM, Doug Hellmann wrote:
> Excerpts from Doug Wiegley's message of 2016-10-18 12:53:18 -0600:
>>
>>> On Oct 18, 2016, at 12:42 PM, Doug Hellmann  wrote:
>>>
>>> I expect you could take over a corner of the dev lounge or some
>>> other space to hold a BoF to at least start the discussion and get
>>> some interested folks lined up to lead the WG.
>>
>> Sounds good. In prep to sending an ML invite, what projects use service VMs? 
>>   There’s Octavia, Trove, and Tacker.  What else?
> 
> You hit the main ones I was thinking of. Maybe also Sahara and Magnum?

Manila

> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-23 Thread Tom Barron


On 04/23/2016 03:25 PM, Jay Pipes wrote:
> On 04/23/2016 03:18 PM, Mike Perez wrote:
>> On 14:54 Apr 18, Jay Pipes wrote:
>>> On 04/16/2016 05:51 PM, Amrith Kumar wrote:
  - update_resource(id or resource, newsize)
>>>
>>> Resizing resources is a bad idea, IMHO. Resources are easier to deal
>>> with
>>> when they are considered of immutable size and simple (i.e. not
>>> complex or
>>> nested). I think the problem here is in the definition of resource
>>> classes
>>> improperly.
>>>
>>> For example, a "cluster" is not a resource. It is a collection of
>>> resources
>>> of type node. "Resizing" a cluster is a misnomer, because you aren't
>>> resizing a resource at all. Instead, you are creating or destroying
>>> resources inside the cluster (i.e. joining or leaving cluster nodes).
>>>
>>> BTW, this is also why the "resize instance" API in Nova is such a
>>> giant pain
>>> in the ass. It's attempting to "modify" the instance "resource" when the
>>> instance isn't really the resource at all. The VCPU, RAM_MB, DISK_GB,
>>> and
>>> PCI devices are the actual resources. The instance is a convenient
>>> way to
>>> tie those resources together, and doing a "resize" of the instance
>>> behind
>>> the scenes actually performs a *move* operation, which isn't a
>>> *change* of
>>> the original resources. Rather, it is a creation of a new set of
>>> resources
>>> (of the new amounts) and a deletion of the old set of resources.
>>
>> How about extending a volume? A volume is a resource and can be
>> extended in
>> Cinder today.
> 
> Yep, understood :) I recognize some resource amounts can be modified for
> some resource classes. How about *shrinking* a volume. Is that supported?

Manila has APIs for both extending and shrinking shares.

FWIW I very much like the notion that we should be able to check actual,
up-to-date resource usage, use a single source of truth, and reduce the
number of races that have to be handled.  The previous one-sentence
paragraph provides a data point and is not intended as an objection.
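
For reference, those APIs surface in the client roughly as follows
(sketch only; the share name is a placeholder and sizes are in GiB):

    manila extend my-share 3
    manila shrink my-share 1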

-- Tom

> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-08 Thread Tom Barron


On 04/08/2016 05:16 PM, Anita Kuno wrote:
> On 04/08/2016 05:10 PM, gordon chung wrote:
>>
>>
>> On 08/04/2016 1:26 PM, Davanum Srinivas wrote:
>>> Team,
>>>
>>> Steve pointed out to a problem in Stackalytics:
>>> https://twitter.com/stevebot/status/718185667709267969
>>>
>>> It's pretty clear what's happening if you look here:
>>> https://review.openstack.org/#/q/owner:openstack-infra%2540lists.openstack.org+status:open
>>>
>>> Here's the drastic step (i'd like to avoid):
>>> https://review.openstack.org/#/c/303545/
>>>
>>
>> is it actually affecting anything in the community aside from the 
>> reviews being useless. aside from the 'diversity' tags in governance, 
>> does anything else use stackalytics?
>>
>> cheers,
>>
> Some company managers only look as far as stackalytics to calculate
> career decisions for their group.
> 
> Sad but true.
> 

Certainly true, and sad, but "some" is a quantifier that starts at
greater than zero and runs on up from there :-)

IMO we non-managers also have to take responsibility for a system that
uses quantitative measures that "gamify" OpenStack performance.  Humans
respond to this kind of reward system.  They aren't (for that reason at
least) evil and the effect is not all that surprising.  The harder
question is whether there is a better way to set things up, or more to
the point, since many of us are sure there is, exactly what it is.

-- Tom

> Anita.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2016-03-14 Thread Tom Barron


On 03/11/2016 01:41 PM, Sean McGinnis wrote:
> Hey everyone,
> 
> Wow, how six months flies! I'd like to announce my candidacy to continue on as
> Cinder PTL for the Newton release cycle.
> 
> A lot has been accomplished in the Mitaka cycle. After a lot of work by many
> folks, over a couple development cycles, we now have what we consider a "tech
> preview" of rolling upgrades. It just hasn't had enough runtime and testing 
> for
> us to say it's "official". We will likely need to fix a few minor things in 
> the
> Newton timeframe before it's fully baked and reliable. But it has come a long
> way and I'm really happy with the progress that has been made.
> 
> Another priority we had identified for Mitaka was active/active high
> availability of the c-vol service. We were not able to complete that work, but
> many pieces have been put in place to support that in Newton. We fixed several
> API races and added the ability to use something like tooz for locking. These
> are foundation pieces for us to be able to start breaking out things and
> running in a reliable active/active configuration.
> 
> Microversion support has been added and there is now a new v3 API endpoint.
> This was a bit of a controversy as we really had just started to get folks to
> move off of v1 to v2. To be safe though I decided it would protect end users
> better to have a clearly separate new API endpoint for the microversion
> compatibility. And now hopefully it is our last.
> 
> Replication was another slightly controversial feature implemented. Late in
> Liberty we finally agreed on a spec for a v2 version of replication. The v2
> spec was approved so late that no one actually had time to implement it for
> that release. As we started to implement it for Mitaka we found that a lot of
> compromises had crept in during the spec review that it had the risk of being
> too complex and having some of the issues we were trying to get rid of by
> moving away from replication v1. At our midcycle we had a lot of discussion on
> replication and finally decided to change course before it was too late.
> Whether that ends up being the best choice when we look back a year from now 
> or
> not, I'm proud of the team that we were willing to put on the brakes and make
> changes - even though it was more work for us - before we released something
> out to end users that would have caused problems or a poor experience.
> 
> Other than that, there's mostly been a lot of good bug fixes. Eight new 
> drivers
> have been added from (I think) five different vendors. The os-brick library is
> now 1.0 (actually 1.1.0) and is in use by both Cinder and Nova for common
> storage management operations so there is not a duplication and disconnect of
> code between the two projects. We were also able to add a Brick cinder client
> extension to be able to perform storage management on nodes without Nova (bare
> metal, etc.).
> 
> None of this goodness was from me.
> 
> We have a bunch of smart and active members of the Cinder community. They are
> the ones that are making a difference, working across the community, and
> making sure Cinder is a solid component in an OpenStack cloud.
> 
> Being part of the Cinder community has been one of the best and most engaging
> parts of my career. I am lucky enough to have support from my company to be
> able to devote time to being a part of this. I would love the opportunity to
> continue as PTL to not just contribute where I can, but to make sure the folks
> doing the heavy lifting have the support and project organization they need to
> avoid distractions and be able to focus on getting the important stuff done.
> 
> I think in Newton we need to continue the momentum and get Active/Active 
> Cinder
> volume service support implemented. We need to continue to work closely with
> the Nova team to make sure our interaction is correct and solid. But also work
> to make Cinder a useful storage management interface in environments without
> Nova. I will continue to encourage developer involvement and vendor support.
> We need to improve the user experience with better error reporting when things
> go wrong. And last, but definitely not least, we need to continue to expand 
> our
> testing - unit, functional, and tempest - to make sure we can avoid those
> errors and deliver a high quality and solid solution.
> 
> I really feel I'm just getting into the swing of things. I would love the
> opportunity to serve as PTL for the Newton release.

FWIW, you have my support.  Thanks for your service!

-- Tom

> 
> Thank you for your consideration.
> 
> Sean McGinnis (smcginnis)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


Re: [openstack-dev] [Cinder] Status of cinder-list bug delay with 1000's of volumes

2016-03-03 Thread Tom Barron


On 03/03/2016 06:38 PM, Walter A. Boring IV wrote:
> Adam,
>   As the bug shows, it was fixed in the Juno release.  The icehouse
> release is no longer supported.  I would recommend upgrading your
> deployment if possible or looking at the patch and see if it can work
> against your Icehouse codebase.
> 
> https://review.openstack.org/#/c/96548/
> 
> Walt

Actually, it looks like we also backported this one to icehouse [1] [2].

You should be able to pull it directly from upstream, or if you are
using an openstack distribution there is a good chance they will have it
in a maintenance release based on icehouse.

-- Tom

[1] https://review.openstack.org/102476
[2]
https://git.openstack.org/cgit/openstack/cinder/commit/?id=fe37a6ee1d85ce07e672f94e395edee81fac80db


> 
> On 03/03/2016 03:12 PM, Adam Lawson wrote:
>> Hey all (hi John),
>>
>> What's the status of this [1]? We're experiencing this behavior in
>> Icehouse - wondering where it was addressed and if so, when. I always
>> get confused when I look at the launchpad/review portals.
>>
>> [1] https://bugs.launchpad.net/cinder/+bug/1317606
>>
>> */
>> Adam Lawson/*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] NFS mount as cinder user instead of root

2015-10-19 Thread Tom Barron
On 10/14/15 6:31 AM, Francesc Pinyol Margalef wrote:
> Hi,
> Yes, that worked! Thanks! :)
> 
> But the process is very slow (about half an hour to create a volume).
> I think the problem is the execution of "du -sb --apparent-size
> --exclude *snapshot*
> /var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51", as shown in the logs:
> 
> 2015-10-13 19:33:14.127 1311 INFO
> cinder.volume.flows.manager.create_volume
> [req-f52e5048-3155-4d49-92c0-4152b8243fd6
> 26e01a732d9e44d4a98305c6aa11860f 36593fc96ab64bc7959eb9e0ff2f2247 - - -]
> Volume 5230104d-68a3-4dc0-95ec-43f5d8fbc5d3: b
> eing created as raw with specification: {'status': u'creating',
> 'volume_size': 1, 'volume_name':
> u'volume-5230104d-68a3-4dc0-95ec-43f5d8fbc5d3'}
> 2015-10-13 19:33:14.140 1311 INFO cinder.brick.remotefs.remotefs
> [req-f52e5048-3155-4d49-92c0-4152b8243fd6
> 26e01a732d9e44d4a98305c6aa11860f 36593fc96ab64bc7959eb9e0ff2f2247 - - -]
> Already mounted: /var/lib/cinder/mnt/9ae799cf301b19940950
> ae49dd800c51
> 2015-10-13 19:40:27.556 1311 WARNING cinder.openstack.common.loopingcall
> [req-0a4a8e09-f10b-4dc6-96bf-f7e333635f99 - - - - -] task u' method Service.periodic_tasks of  0x2c5c650>>' run outlasted int
> erval by 499.80 sec
> 2015-10-13 19:40:27.564 1311 INFO cinder.volume.manager
> [req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] Updating volume status
> 2015-10-13 19:40:27.577 1311 INFO cinder.brick.remotefs.remotefs
> [req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] Already mounted:
> /var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51
> 2015-10-13 19:51:37.371 1311 WARNING cinder.openstack.common.loopingcall
> [req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] task u' method Service.periodic_tasks of  0x2c5c650>>' run outlasted int
> erval by 609.81 sec
> 2015-10-13 19:51:37.378 1311 INFO cinder.volume.manager
> [req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Updating volume status
> 2015-10-13 19:51:37.391 1311 INFO cinder.brick.remotefs.remotefs
> [req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Already mounted:
> /var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51
> 2015-10-13 19:58:18.585 1311 ERROR cinder.openstack.common.periodic_task
> [req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Error during
> VolumeManager._report_driver_status: Unexpected error while running command.
> Command: None
> Exit code: -
> Stdout: u"Unexpected error while running command.\nCommand: du -sb
> --apparent-size --exclude *snapshot*
> /var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51\nExit code:
> -15\nStdout: u''\nStderr: u''"
> Stderr: None
> 2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
> Traceback (most recent call last):
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py",
> line 224, in run_periodic_tasks
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task task(self, context)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1499,
> in _report_driver_status
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task volume_stats =
> self.driver.get_volume_stats(refresh=True)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in
> wrapper
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task return f(*args, **kwargs)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py",
> line 439, in get_volume_stats
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task self._update_volume_stats()
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py",
> line 458, in _update_volume_stats
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task capacity, free, used =
> self._get_capacity_info(share)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line
> 281, in _get_capacity_info
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task run_as_root=run_as_root)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/cinder/utils.py", line 143, in execute
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task return
> processutils.execute(*cmd, **kwargs)
> 2015-10-13 19:58:18.585 1311 TRACE
> cinder.openstack.common.periodic_task   File
> "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py",
> line 233, in execute
> 2015-10-13 

Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-14 Thread Tom Barron
On 9/14/15 12:15 PM, Mike Perez wrote:
> Hello all,
> 
> I will not be running for Cinder PTL this next cycle.

Thanks for helping make me feel welcome as I started up,
for your example of gutsy and consistent leadership,
and for some really good coffee!

-- Tom




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] L3 low pri review queue starvation

2015-09-02 Thread Tom Barron
On 9/2/15 5:19 AM, Gorka Eguileor wrote:
> On Tue, Sep 01, 2015 at 09:30:26AM -0600, John Griffith wrote:
>> On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <t...@dyncloud.net> wrote:
>>
>>> [Yesterday while discussing the following issue on IRC, jgriffith
>>> suggested that I post to the dev list in preparation for a discussion in
>>> Wednesday's cinder meeting.]
>>>
>>> Please take a look at the 10 "Low" priority reviews in the cinder
>>> Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
>>>
>>> Six of these *never* [2] received a vote from a core reviewer. With the
>>> exception of the first in the list, which has 35 patch sets, none of the
>>> others received a vote before Friday, August 28.  Of these, none had
>>> more than -1s on minor issues, and these have been remedied.
>>>
>>> Review https://review.openstack.org/#/c/213855 "Implement
>>> manage/unmanage snapshot in Pure drivers" is a great example:
>>>
>>>* approved blueprint for a valuable feature
>>>* pristine code
>>>* passes CI and Jenkins (and by the deadline)
>>>* never reviewed
>>>
>>> We have 11 core reviewers, all of whom were very busy doing reviews
>>> during L3, but evidently this set of reviews didn't really have much
>>> chance of making it.  This looks like a classic case where the
>>> individually rational priority decisions of each core reviewer
>>> collectively resulted in starving the Low Priority review queue.
>>>
> 
> I can't speak for other cores, but in my case reviewing was mostly not
> based on my own priorities; I reviewed patches based on the already set
> priority of each patch as well as patches that I was already
> reviewing.
> 
> Some of those medium priority patches took me a lot of time to review,
> since they were not trivial (some needed some serious rework).  As for
> patches I was already reviewing, as you can imagine it wouldn't be fair
> to just ignore a patch that I've been reviewing for some time just when
> it's almost ready and the deadline is closing in.
> 

That's why I said that this situation is an outcome of individually
rational decisions.  It should be clear that none of this is intended as
a complaint about reviewers or reviewers' performance.

> Having said that, I have to agree that those patches didn't have much
> of a chance, and I apologize for my part in that.  While it is no excuse, I
> have to agree with jgriffith when he says that those patches should have
> pushed cores for reviews (even if this is clearly not the "right" way to
> manage it).

No apology required!

> 
>>> One way to remedy would be for the 11 core reviewers to devote a day or
>>> two to cleaning up this backlog of 10 outstanding reviews rather than
>>> punting all of them out to Mitaka.
>>>
>>> Thanks for your time and consideration.
>>>
>>> Respectfully,
>>>
>>> -- Tom Barron
>>>
>>> [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
>>> [2] At the risk of stating the obvious, in this count I ignore purely
>>> procedural votes such as the final -2.
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Thanks Tom, this is sadly an ongoing problem every release.  I think we
>> have a number of things we can talk about at the summit to try and
>> make some of this better.  I honestly think that if people were to
>> actually "use" launchpad instead of creating tracking etherpads
>> everywhere it would help.  What I mean is that there is a ranked
>> targeting of items in Launchpad and we should use it, core team
>> members should know that as the source of truth and things that must
>> get reviewed.
>>
> 
> I agree, we should use Launchpad's functionality to track BPs and Bugs
> targeted for each milestone, and maybe we can discuss a workflow that
> helps us reduce starvation at the same time that helps us keep track of
> code reviewers responsible for each item.
> 
> Just spitballing here, but we could add to BP's work items and Bug's
> comments what core members will be responsible for reviewing related
> patches.  Although this means cores will have to check this on every
> review they do that has a BP or Bug number, so if there

[openstack-dev] [cinder] L3 low pri review queue starvation

2015-09-01 Thread Tom Barron
[Yesterday while discussing the following issue on IRC, jgriffith
suggested that I post to the dev list in preparation for a discussion in
Wednesday's cinder meeting.]

Please take a look at the 10 "Low" priority reviews in the cinder
Liberty 3 etherpad that were punted to Mitaka yesterday. [1]

Six of these *never* [2] received a vote from a core reviewer. With the
exception of the first in the list, which has 35 patch sets, none of the
others received a vote before Friday, August 28.  Of these, none had
more than -1s on minor issues, and these have been remedied.

Review https://review.openstack.org/#/c/213855 "Implement
manage/unmanage snapshot in Pure drivers" is a great example:

   * approved blueprint for a valuable feature
   * pristine code
   * passes CI and Jenkins (and by the deadline)
   * never reviewed

We have 11 core reviewers, all of whom were very busy doing reviews
during L3, but evidently this set of reviews didn't really have much
chance of making it.  This looks like a classic case where the
individually rational priority decisions of each core reviewer
collectively resulted in starving the Low Priority review queue.

One way to remedy would be for the 11 core reviewers to devote a day or
two to cleaning up this backlog of 10 outstanding reviews rather than
punting all of them out to Mitaka.

Thanks for your time and consideration.

Respectfully,

-- Tom Barron

[1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
[2] At the risk of stating the obvious, in this count I ignore purely
procedural votes such as the final -2.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-13 Thread Tom Barron
On 8/13/15 3:13 PM, Mike Perez wrote:
 It gives me great pleasure to nominate Gorka Eguileor for Cinder core.
 
 Gorka's contributions to Cinder core have been much appreciated:
 
 https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11
 
 60/90 day review stats:
 
 http://russellbryant.net/openstack-stats/cinder-reviewers-60.txt
 http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt
 
 Cinder core, please reply with a +1 for approval. This will be left
 open until August 19th. Assuming there are no objections, this will go
 forward after voting is closed.
 

Not a cinder core, but I've found Gorka's reviews helpful and instructive.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] python-cinderclient functional tests

2015-07-06 Thread Tom Barron
On 7/6/15 10:28 PM, sean.mcgin...@gmx.com wrote:
 I support moving it to non-voting from the experimental queue. It will
 be much more visible that way if something breaks.

That makes sense to me.
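For anyone who hasn't looked at the existing tests, a minimal functional
test of this kind looks roughly like this (hypothetical example, not from
the cinderclient tree; credentials assumed to come from a devstack openrc
environment):

    import os
    import unittest

    from cinderclient import client as cinder_client


    class VolumesFunctionalTest(unittest.TestCase):
        """Exercises the real Cinder API through python-cinderclient."""

        def setUp(self):
            # Auth settings are read from the environment of a devstack run.
            self.cinder = cinder_client.Client(
                '2',
                os.environ['OS_USERNAME'],
                os.environ['OS_PASSWORD'],
                os.environ['OS_TENANT_NAME'],
                os.environ['OS_AUTH_URL'])

        def test_create_and_delete_volume(self):
            vol = self.cinder.volumes.create(size=1,
                                             name='cinderclient-func-test')
            self.addCleanup(self.cinder.volumes.delete, vol)
            self.assertEqual(1, vol.size)

A real test would of course poll for the volume to reach 'available'
before cleaning up, but that is the general shape.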

 - Reply message -
 From: Ivan Kolodyazhny e...@e0ne.info
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Cc: Oleksiy Butenko obute...@mirantis.com, Kyrylo Romanenko
 kromane...@mirantis.com
 Subject: [openstack-dev] [Cinder] python-cinderclient functional tests
 Date: Mon, Jul 6, 2015 1:48 PM
 
 Hi all,
 
 As you may know, we've got experimental job [1] to run functional tests [2]
 for python-cinderclient with devstack setup.
 
 Functional tests for python-cinderclient are very important because they're
 almost the only way to test python-cinderclient against the Cinder API. For now,
 we've got only Rally, which uses cinderclient to test Cinder. Tempest uses
 its own client for all APIs.
 
 Current test coverage is very low. That's why I would like to ask
 everyone to contribute to python-cinderclient. I created an etherpad [3] with
 current progress. You can find me (e0ne) or any other core in
 #openstack-cinder on IRC.
 
 
 Also, what do you think about moving the cinderclient functional tests from
 the experimental queue to the non-voting queue, to make them more visible and
 run them with every patch to python-cinderclient?
 
 
 [1] https://review.openstack.org/#/c/182528/
 [2]
 https://github.com/openstack/python-cinderclient/tree/master/cinderclient/tests/functional
 [3] https://etherpad.openstack.org/p/cinder-client-functional-tests
 
 
 Regards,
 Ivan Kolodyazhny
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-03 Thread Tom Barron
On 6/3/15 6:16 PM, Bruns, Curt E wrote:
 
 
 -Original Message-
 From: Eric Harney [mailto:ehar...@redhat.com]
 Sent: Wednesday, June 03, 2015 12:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [cinder] Rebranded Volume Drivers

 On 06/03/2015 01:59 PM, John Griffith wrote:
 On Wed, Jun 3, 2015 at 11:32 AM, Mike Perez thin...@gmail.com wrote:

 There are a couple of cases [1][2] I'm seeing where new Cinder volume
 drivers for Liberty are rebranding other volume drivers. This
 involves inheriting off another volume driver's class(es) and
 providing some config options to set the backend name, etc.

 Two problems:

 1) There is a thought that no CI [3] is needed, since you're using
 another vendor's driver code, which does have a CI.

 2) IMO another way of satisfying a check mark of being OpenStack
 supported and disappearing from the community.

 What gain does OpenStack get from these kind of drivers?

 Discuss.

 [1] - https://review.openstack.org/#/c/187853/
 [2] - https://review.openstack.org/#/c/187707/4
 [3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

 --
 Mike Perez


 This case is interesting mostly because it's the same contractor
 submitting the driver for all the related platforms.  Frankly I find
 the whole rebranding annoying, but there's certainly nothing really
 wrong with it, and well... why not, it's Open Source.

 What I do find annoying is the lack of give back; so this particular
 contributor has submitted a few drivers thus far (SCST, DotHill and
 some others IIRC), and now has three more proposed. This would be
 great except I personally have spent a very significant amount of time
 with this person helping with development, CI and understanding OpenStack
 and Cinder.

 To date, I don't see that he's provided a single code review (good or
 bad) or contributed anything back other than to his specific venture.

 Anyway... I think your point was for input on the two questions:

 For item '1':
 I guess as silly as it seems they should probably have 3rd-party CI.
 There are firmware differences etc. that may actually change behaviors,
 or things may diverge, or maybe their code is screwed up and the
 inheritance doesn't work (doubtful).

 Given that part of the case made for CI was to ensure that Cinder ships drivers
 that work, the case of backend behavior diverging over time from what
 originally worked with Cinder seems like a valid concern.  We lose the 
 ability to
 keep tabs on that for derived drivers without CI.


 Yes, it's just a business venture in this case (good or bad, not for
 me to decide).  The fact is we don't discriminate or place a value on
 peoples contributions, and this shouldn't be any different.  I think
 the best answer is follow same process for any driver and move on.
 This does point out that maybe OpenStack/Cinder has grown to a point
 where there are so many options and choices that it's time to think
 about changing some of the policies and ways we do things.

 In my opinion, OpenStack doesn't gain much in this particular case,
 which brings me back to; remove all drivers except the ref-impl and
 have them pip installable and on a certified list based on CI.

 Thanks,
 John


 The other issue I see with not requiring CI for derived drivers is that,
 inevitably, small changes will be made to the driver code, and we will find
 ourselves having to sort out how much change can happen before CI is then
 required.  I don't know how to define that in a way that would be useful as a
 general policy.

 Eric

 
 I haven't been involved in this project too long, but I have learned that if 
 you want a driver included, you need to provide a CI system.  It's a very 
 clearly documented requirement.  I'm all for inheritance and re-use, but 
 along with what Eric said, at some point the HW/FW in those 
 also-supported/re-branded arrays may change and if it's not being tested, 
 then who knows what kind of end-user experience will occur.  I'd be surprised 
 if someone standing up OpenStack with Lenovo Storage would be okay knowing 
 that it's never been tested on actual HW.
 
 - Curt
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Well said.

-- Tom
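For readers who haven't opened the reviews in question, the sort of
rebranded driver being discussed is typically little more than a thin
subclass of an existing driver. A purely hypothetical sketch, not any of
the actual proposed drivers:

    class AcmeISCSIDriver(object):
        """Stand-in for an existing, CI-tested vendor driver."""
        VENDOR = 'Acme'

        def __init__(self, configuration=None):
            self.configuration = configuration or {}

        def create_volume(self, volume):
            # All of the real array-specific logic lives in the parent.
            return {'provider_location': 'acme-target/%s' % volume['name']}


    class RebrandedISCSIDriver(AcmeISCSIDriver):
        """The 'rebrand': same code paths, different identity strings."""
        VENDOR = 'OtherVendor'

        def __init__(self, configuration=None):
            super(RebrandedISCSIDriver, self).__init__(configuration)
            self.backend_name = self.configuration.get(
                'volume_backend_name', 'othervendor_iscsi')

Which is exactly why the CI question matters: the subclass itself is
trivial, but the hardware and firmware behind it are not necessarily the
same.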

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-06-01 Thread Tom Barron
Well deserved, and hope you had a nice break.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][tempest] Data-driven testing (DDT) samples

2015-05-04 Thread Tom Barron
On 5/4/15 3:05 AM, Salvatore Orlando wrote:
 Among the OpenStack projects of which I have some knowledge, none of them
 uses any DDT library.

FYI, manila uses DDT for unit tests.

 If you think there might be a library from which lbaas, neutron, or any
 other openstack project might take advantage, we should consider it.
 
 Salvatore
 
 On 14 April 2015 at 20:33, Madhusudhan Kandadai
 madhusudhan.openst...@gmail.com
 mailto:madhusudhan.openst...@gmail.com wrote:
 
 Hi,
 
  I would like to start a thread about tempest DDT in the neutron-lbaas
  tree. The problem comes in when we have test cases for both
  admin and non-admin users. (For example, there is an ongoing patch:
  https://review.openstack.org/#/c/171832/). Of course it has
  duplication, and we want to adhere to the tempest guidelines. Just
  wondering whether we are using a DDT library in other projects; if
  so, can someone please point me to sample code that is being
  used currently? It would speed up this DDT activity for neutron-lbaas.


  $ grep -R '@ddt' manila/tests/ | wc -l
198
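For anyone who hasn't used it, the pattern looks roughly like this
(made-up example, not from the manila tree):

    import unittest

    import ddt

    SUPPORTED_PROTOCOLS = ('NFS', 'CIFS')


    @ddt.ddt
    class ShareProtocolTestCase(unittest.TestCase):
        # Each @ddt.data value generates its own test case, so variants
        # (e.g. admin vs. non-admin) don't need duplicated test bodies.
        @ddt.data('NFS', 'CIFS')
        def test_protocol_is_supported(self, proto):
            self.assertIn(proto, SUPPORTED_PROTOCOLS)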

  In the meantime, I am also gathering information and researching this. Should
  I have any update, I shall keep you posted.
 
 Thanks,
 Madhusudhan
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Regards,

-- Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread Tom Barron
On 4/2/15 9:16 AM, Ben Swartzlander wrote:
 Clinton Knight (cknight on IRC) has been working on OpenStack for the
 better part of the year, and starting in February, he shifted his focus
 from Cinder to Manila. I think everyone is already aware of his high
 quality contributions and code reviews. I would like to nominate him to
 join the Manila core reviewer team.
 
 -Ben Swartzlander
 Manila PTL
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] K3 Feature Freeze Exception request for bp/nfs-backup

2015-03-11 Thread Tom Barron
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I hereby solicit a feature freeze exception for the NFS backup review [1].

Although only about 140 lines of non-test code, this review completes
the implementation of the NFS backup blueprint [2].  Most of the
actual work for this blueprint was a refactor of the Swift backup
driver to
abstract the backup/restore business logic from the use of the Swift
object store itself as the backup repository.  With the help of Xing
Yang, Jay Bryant, and Ivan Kolodyazhny, that review [3] merged
yesterday and made the K3 FFE deadline.
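The shape of that refactor, very roughly (illustrative names and a
much-simplified interface, not the merged code):

    import os


    class ChunkedBackupDriver(object):
        """Chunks a volume into objects and tracks backup metadata;
        subclasses only decide where the object bytes actually live."""

        def get_object_writer(self, container, object_name):
            """Return a file-like object for writing one backup chunk."""
            raise NotImplementedError()

        def get_object_reader(self, container, object_name):
            """Return a file-like object for reading one backup chunk."""
            raise NotImplementedError()


    class NFSBackupDriver(ChunkedBackupDriver):
        """Stores chunks as files under an NFS mount rather than as
        Swift objects."""

        def __init__(self, backup_mount_point):
            self.backup_mount_point = backup_mount_point

        def get_object_writer(self, container, object_name):
            path = os.path.join(self.backup_mount_point, container)
            if not os.path.isdir(path):
                os.makedirs(path)
            return open(os.path.join(path, object_name), 'wb')

        def get_object_reader(self, container, object_name):
            return open(os.path.join(self.backup_mount_point, container,
                                     object_name), 'rb')

The review under discussion is essentially the NFS-specific piece of that
picture.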

In evaluating this FFE request, please take into account the following
considerations:

   * Without the second review, the goal of the blueprint remains
 unfulfilled.

   * This code was upstream in January and was essentially complete
 with only superficial changes since then.

   * As of March 5 this review had two core +2s.  Delay since then has
 been entirely due to wait time on the dependent review and need
 for rebase given upstream code churn.

   * No risk is added outside NFS backup service itself since the
 changes to current code are all in the core refactor and that
 is already merged.

If this FFE is granted, I will give the required rebase my immediate
attention.

Thanks.

- --
Tom Barron
t...@dyncloud.net

[1] - https://review.openstack.org/#/c/149726
[2] - https://blueprints.launchpad.net/cinder/+spec/nfs-backup
[3] - https://review.openstack.org/#/c/149725
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBAgAGBQJVAF1YAAoJEGeKBzAeUxEHayUH/2iWuOiKBnRauX40fwcR7+js
lIM+qRIHlg2iJ+cnqap6HHUhBSxwHnuAV41zQmFBKnfhc3sIqS98ZSVlUaJQtct/
YjjInKOpxFOEw1FgoFMsrg0qm76zFMXXVIKNegy2iXgXsKzDTWed5n57N8FAP2+6
q/uASOZNHgxbeZLV7LSKS21/3WUoQpIQiW0+1GtkVtO1C9t8Io+TrjlZj7T60kHJ
UEH5HShKE0U40SKhgwRyEK7HqbMDGv8w5SsUgyUntdgDlQycgyI/erKm5WJqcZsF
F6om6HY3oxtulcjbrWmA6+ENnOYsLchXFT8fZeLj7JWOarv5SF2fBQFTqzc/36U=
=/FVr
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] ratio: created to attached

2014-12-24 Thread Tom Barron
On 12/22/14 4:48 PM, John Griffith wrote:
 On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron t...@dyncloud.net wrote:
 Does anyone have real world experience, even data, to speak to the
 question: in an OpenStack cloud, what is the likely ratio of (created)
 cinder volumes to attached cinder volumes?
 
 Thanks,
 
 Tom Barron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Honestly I think the assumption is and should be 1:1, perhaps not 100%
 duty-cycle, but certainly periods of time when there is a 100% attach
 rate.
 

Certainly peak usage would be 1:1.  But that still allows for lots of
distributions - e.g. 1:1 2% of the time, 10:9 80%, 10:7 95% vs 1:1 90%, etc.

Some of the devs on this list also run clouds, so I'm curious if there
are data available indicating what kind of distribution of this ratio
they see in practice.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] ratio: created to attached

2014-12-20 Thread Tom Barron
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Does anyone have real world experience, even data, to speak to the
question: in an OpenStack cloud, what is the likely ratio of (created)
cinder volumes to attached cinder volumes?

Thanks,

Tom Barron
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBAgAGBQJUlgybAAoJEGeKBzAeUxEHqKwIAJjL5TCP7s+Ev8RNr+5bWARF
zy3I216qejKdlM+a9Vxkl6ZWHMklWEhpMmQiUDMvEitRSlHpIHyhh1RfZbl4W9Fe
GVXn04sXIuoNPgbFkkPIwE/45CJC1kGIBDub/pr9PmNv9mzAf3asLCHje8n3voWh
d30If5SlPiaVoc0QNrq0paK7Yl1hh5jLa2zeV4qu4teRts/GjySJI7bR0k/TW5n4
e2EKxf9MhbxzjQ6QsgvWzxmryVIKRSY9z8Eg/qt7AfXF4Kx++MNo8VbX3AuOu1XV
cnHlmuGqVq71uMjWXCeqK8HyAP8nkn2cKnJXhRYli6qSwf9LxzjC+kMLn364IX4=
=AZ0i
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setuptools 6.0 ruins the world - SOLVED

2014-09-27 Thread Tom Barron
On 09/27/2014 10:20 AM, Sean Dague wrote:
 setuptools 6.0.1 has been released -
 https://pypi.python.org/pypi/setuptools/6.0.1#id1 - hopefully addressing
 this issue.
 
   -Sean

Thanks for taking care of this, Sean!  Though it was kinda interesting
to see my little 8GB stinkpad totally brought to its knees doing cinder
'run_tests.sh -f -u'.  Chasing requirements.txt wasn't getting me very
far.  Working now though.

-- Tom

 
 On 09/27/2014 09:34 AM, Diego Parrilla Santamaría wrote:
 Hi Sean,

 I'm getting memory errors today using devstack. Never happened to me
 before with 8GB of RAM... 

 It seems the mirrors still have the version with the memory leak.

 Thank you for the information, you saved my day!

 Diego

  -- 
 Diego Parrilla, CEO
 www.stackops.com | diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla


 On Sat, Sep 27, 2014 at 3:27 PM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:

 On 09/27/2014 09:23 AM, Sean Dague wrote:
  If anyone is checking in on their patches this weekend and sees that
  they seem to be failing on really odd stuff, like docs, pep8, unit
  tests... it looks like it's because setuptools 6.0 has a terrible 
 memory
  leak when processing requirements - I've filed it upstream
  
 https://bitbucket.org/pypa/setuptools/issue/259/setuptools-60-causes-giant-memory-leaks-in
 
  Setuptools 6.0 was released Friday night. (Side note: as a service to
  others, releasing major software bumps on critical python software on a
  Friday night should be avoided.)
 
  If anyone has a direct line to setuptools devs, please reach out. The
  entire CI pipeline is basically dead until this is addressed.
 
  Because of how deeply embedded setuptools is in our environment... 
 there
  isn't much we can do without an upstream fix.

 Just as I sent this Jason R. Coombs said he pulled the release from
 pypi. No idea if that will automatically get pulled from our mirrors or
 not, but that should make things function again (see update in -
 
 https://bitbucket.org/pypa/setuptools/issue/259/setuptools-60-causes-giant-memory-leaks-in)

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev