[openstack-dev] [cyborg] [weekly-meeting]

2018-10-30 Thread Li Liu
The weekly meeting will be held tomorrow at the usual time: 10 AM
EST / 10 PM Beijing time.

Planned Agenda:

1. Status updates on patches (a small query sketch follows the agenda):
https://review.openstack.org/#/q/status:open%20project:openstack/cyborg
https://review.openstack.org/#/q/project:openstack/cyborg-specs

2. Berlin Summit Planning
Just opened an etherpad for tracking summit related stuff.
https://etherpad.openstack.org/p/cyborg-berlin-summit-2018-plans
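
(For convenience, a minimal sketch of pulling the open-change lists above via
Gerrit's REST API; this is illustrative only, not part of the meeting tooling:)

    # Query Gerrit for changes, mirroring the dashboard URLs above.
    # Gerrit prefixes its JSON responses with ")]}'" to defeat XSSI.
    import json
    import urllib.request

    GERRIT = "https://review.openstack.org"

    def changes(query):
        url = f"{GERRIT}/changes/?q={query}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        return json.loads(body.split("\n", 1)[1])  # drop the )]}' prefix

    for query in ("status:open+project:openstack/cyborg",
                  "project:openstack/cyborg-specs"):
        for change in changes(query):
            print(change["_number"], change["subject"])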

-- 
Thank you

Regards

Li


[openstack-dev] [tc][all] TC office hours have started now on #openstack-tc

2018-10-30 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to
reach out to us with anything you want to discuss or need input, feedback,
or help on from the TC.

-gmann 







[openstack-dev] [Freezer][release] A reminder to release Freezer at Stein-1

2018-10-30 Thread Trinh Nguyen
Hi Geng and team,

This is just a reminder that we are in the probation period for keeping
Freezer as an official project (the deadline is Stein-2). So we need to
release Freezer at Stein-1 this week (it was actually due last week). Even
though a milestone release is no longer required [1], we need to do this to
evaluate our effort to revive Freezer, as we agreed.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html

Bests,

-- 
Trinh Nguyen
www.edlab.xyz


Re: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open

2018-10-30 Thread Tony Breeds
On Tue, Oct 30, 2018 at 11:25:02AM -0700, iain macdonnell wrote:
> I must be losing it. On what planet is "Tiny Town" a single word, and
> "Troublesome" not more than 10 characters?

Sorry for the mistake.  Should either of these names win the popular
vote, it clearly would not be viable.

Yours Tony.




Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Amy Marrich
Ian,

Great job by yourself, your mentees and last but not least your mentors!

Way to go!!!

Amy (spotz)

On Tue, Oct 30, 2018 at 9:10 AM, Ian Y. Choi  wrote:

> Hello,
>
> I have been involved in organizing & mentoring Korean people in OpenStack
> upstream contribution for about the last two months,
> and would like to share the results with community members.
>
> A total of nine mentees started to learn OpenStack, contributed, and
> finally stayed on as volunteers for
>  1) developing an OpenStack mobile app for better mobile user interfaces and
> experiences
> (inspired by https://github.com/stackerz/app which worked on the Juno
> release), and
>  2) translating OpenStack official project artifacts, including documents
> and the Container Whitepaper
> ( https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).
>
> Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin,
> Sungjin Kang, and Andrew Yongjoon Kong)
> all helped to organize a total of 8 offline meetups plus one mini-hackathon
> and mentored the attendees.
>
> The following is a brief summary:
>  - "OpenStack Controller" Android app is available on Play Store
>   : https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
>(GitHub: https://github.com/kosslab-kr/openstack-controller )
>
>  - Most high-priority projects and documents are 100% translated into
> Korean (even though it is not a string freeze period): Horizon,
> OpenStack-Helm, the I18n Guide, and the Container Whitepaper.
>
>  - Total 18,695 words were translated into Korean by four contributors
>   (confirmed through the Zanata API: https://translate.openstack.org/rest/stats/user/[Zanata ID]/2018-08-16..2018-10-25 ):
>
> ++---+-+
> | Zanata ID  | Name  | Number of words |
> ++---+-+
> | ardentpark | Soonyeul Park | 12517   |
> ++---+-+
> | bnitech| Dongbim Im| 693 |
> ++---+-+
> | csucom | Sungwook Choi | 4397|
> ++---+-+
> | jaeho93| Jaeho Cho | 1088|
> ++---+-+
>
>  - The projects translated into Korean are listed below:
>
> +-+-+
> | Project | Number of words |
> +-+-+
> | api-site| 20  |
> +-+-+
> | cinder  | 405 |
> +-+-+
> | designate-dashboard | 4   |
> +-+-+
> | horizon | 3226|
> +-+-+
> | i18n| 434 |
> +-+-+
> | ironic  | 4   |
> +-+-+
> | Leveraging Containers and OpenStack | 5480|
> +-+-+
> | neutron-lbaas-dashboard | 5   |
> +-+-+
> | openstack-helm  | 8835|
> +-+-+
> | trove-dashboard | 89  |
> +-+-+
> | zun-ui  | 193 |
> +-+-+
>
> I would really like to thank all co-mentors and participants in such
> a big event for promoting OpenStack contribution.
> The venue and food were supported by the Korea Open Source Software
> Development Center ( https://kosslab.kr/ ).
>
>
> With many thanks,
>
> /Ian
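
(For reference: a minimal sketch of reproducing the word counts via the Zanata
stats endpoint quoted above. The response layout is an assumption here, a list
of per-activity entries with a hypothetical "wordCount" field, so adjust to
the real payload:)

    import json
    import urllib.request

    BASE = "https://translate.openstack.org/rest/stats/user"

    def words_translated(zanata_id, start="2018-08-16", end="2018-10-25"):
        url = f"{BASE}/{zanata_id}/{start}..{end}"
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:
            entries = json.load(resp)
        # Assumption: each entry carries a per-day/per-project word count.
        return sum(entry.get("wordCount", 0) for entry in entries)

    for user in ("ardentpark", "bnitech", "csucom", "jaeho93"):
        print(user, words_translated(user))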


Re: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Trinh Nguyen
Awesome work, Ian \m/\m/\m/

On Wed, Oct 31, 2018 at 6:19 AM Sean McGinnis  wrote:

> On Tue, Oct 30, 2018 at 11:10:42PM +0900, Ian Y. Choi wrote:
> > Hello,
> >
> > I have been involved in organizing & mentoring Korean people in OpenStack
> > upstream contribution for about the last two months,
> > and would like to share the results with community members.
> >
>
> Very cool! Thanks for organizing this Ian. And thank you to all that
> contributed. Some really fun and useful stuff!
>
> Sean
>

Re: [openstack-dev] [Searchlight][release] Searchlight will release Stein-1

2018-10-30 Thread Trinh Nguyen
Thanks :)

On Wed, Oct 31, 2018 at 4:00 AM Mohammed Naser  wrote:

> Yay!
>
> Congratulations on the first Stein release, well done with your work
> in looking after Searchlight so far.
> On Tue, Oct 30, 2018 at 6:37 AM Trinh Nguyen  wrote:
> >
> > Hi team,
> >
> > I'm doing a release for Searchlight projects (searchlight,
> > searchlight-ui, python-searchlightclient) [1]. Please help to review and
> > make sure everything is ok.
> >
> > [1] https://review.openstack.org/#/c/614066/
> >
> > Finally \m/ :D
> >
> > Bests,
> >
> > --
> > Trinh Nguyen
> > www.edlab.xyz
> >
> >
>
>
>
> --
> Mohammed Naser — vexxhost
> -
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mna...@vexxhost.com
> W. http://vexxhost.com
>


-- 
Trinh Nguyen
www.edlab.xyz


[openstack-dev] Ironic integration CI jobs

2018-10-30 Thread Julia Kreger
With all the discussion of CI jobs, and the fact that I have been finding
myself checking job status several times a day this early in the cycle, I
think it is time for ironic to revisit many of our CI jobs.

The bottom line is that ironic is very resource-intensive to test. A lot of
that is because of the underlying way we enroll/manage nodes and then execute
the integration scenarios emulating bare metal. I think we can improve that
with some Ansible.

In the meantime, I created a quick chart [1] to try to make sense of our
overall integration coverage, and I think it makes sense to remove three of
the jobs.

ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This
job is essentially the same as our grenade multinode job, the only
difference being the grenade upgrade itself.
ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job
essentially just duplicates the functionality already covered in other
jobs, including the grenade job.
ironic-tempest-dsvm-bfv - This presently non-voting job validates that the
iPXE mode of the 'pxe' boot interface supports boot from volume. It was
superseded by ironic-tempest-dsvm-ipxe-bfv which focuses on the use of the
'ipxe' boot interface. The underlying code is all the same deep down in all
of the helper methods.

I'll go ahead and put this up as a topic for our weekly meeting next week
so we can discuss it.

Thanks,

-Julia

[1]: https://ethercalc.openstack.org/ces0z3xjb1ir


Re: [openstack-dev] Zuul Queue backlogs and resource usage

2018-10-30 Thread Matt Riedemann

On 10/30/2018 11:03 AM, Clark Boylan wrote:
> If you find any of this interesting and would like to help feel free to
> reach out to myself or the infra team.


I find this interesting and thanks for providing the update to the 
mailing list. That's mostly what I wanted to say.


FWIW I've still got https://review.openstack.org/#/c/606981/ and the 
related changes to drop the nova-multiattach job and enable the 
multiattach volume tests in the integrated gate, but am hung up on some 
test failures in the multi-node tempest job as a result of that (the 
nova-multiattach job is single-node). There must be something weird that 
tickles those tests in a multi-node configuration and I just haven't dug 
into it yet, but maybe one of our intrepid contributors can lend a hand 
and debug it.


--

Thanks,

Matt



Re: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Sean McGinnis
On Tue, Oct 30, 2018 at 11:10:42PM +0900, Ian Y. Choi wrote:
> Hello,
> 
> I have been involved in organizing & mentoring Korean people in OpenStack
> upstream contribution for about the last two months,
> and would like to share the results with community members.
> 

Very cool! Thanks for organizing this Ian. And thank you to all that
contributed. Some really fun and useful stuff!

Sean




Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Clark Boylan
On Tue, Oct 30, 2018, at 1:01 PM, Ben Nemec wrote:
> 
> 
> On 10/30/18 1:25 PM, Clark Boylan wrote:
> > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
> >> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
> >>>
> >>> Tagging with tripleo since my suggestion below is specific to that 
> >>> project.
> >>>
> >>> On 10/30/18 11:03 AM, Clark Boylan wrote:
> >>>> Hello everyone,
> >>>>
> >>>> A little while back I sent email explaining how the gate queues work and 
> >>>> how fixing bugs helps us test and merge more code. All of this is
> >>>> still true and we should keep pushing to improve our testing to avoid 
> >>>> gate resets.
> >>>>
> >>>> Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In 
> >>>> the process of doing this we had to restart Zuul which brought in a new 
> >>>> logging feature that exposes node resource usage by jobs. Using this 
> >>>> data I've been able to generate some report information on where our 
> >>>> node demand is going. This change [0] produces this report [1].
> >>>>
> >>>> As with optimizing software we want to identify which changes will have 
> >>>> the biggest impact and to be able to measure whether or not changes have 
> >>>> had an impact once we have made them. Hopefully this information is a 
> >>>> start at doing that. Currently we can only look back to the point Zuul 
> >>>> was restarted, but we have a thirty day log rotation for this service 
> >>>> and should be able to look at a month's worth of data going forward.
> >>>>
> >>>> Looking at the data you might notice that Tripleo is using many more 
> >>>> node resources than our other projects. They are aware of this and have 
> >>>> a plan [2] to reduce their resource consumption. We'll likely be using 
> >>>> this report generator to check progress of this plan over time.
> >>>
> >>> I know at one point we had discussed reducing the concurrency of the
> >>> tripleo gate to help with this. Since tripleo is still using >50% of the
> >>> resources it seems like maybe we should revisit that, at least for the
> >>> short-term until the more major changes can be made? Looking through the
> >>> merge history for tripleo projects I don't see a lot of cases (any, in
> >>> fact) where more than a dozen patches made it through anyway*, so I
> >>> suspect it wouldn't have a significant impact on gate throughput, but it
> >>> would free up quite a few nodes for other uses.
> >>>
> >>
> >> It's the failures in gate and resets.  At this point I think it would
> >> be a good idea to turn down the concurrency of the tripleo queue in
> >> the gate if possible. As of late it's been timeouts but we've been
> >> unable to track down why it's timing out specifically.  I personally
> >> have a feeling it's the container download times since we do not have
> >> a local registry available and are only able to leverage the mirrors
> >> for some levels of caching. Unfortunately we don't get the best
> >> information about this out of docker (or the mirrors) and it's really
> >> hard to determine what exactly makes things run a bit slower.
> > 
> > We actually tried this not too long ago 
> > https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
> >  but decided to revert it because it didn't decrease the check queue 
> > backlog significantly. We were still running at several hours behind most 
> > of the time.
> 
> I'm surprised to hear that. Counting the tripleo jobs in the gate at 
> positions 11-20 right now, I see around 84 nodes tied up in long-running 
> jobs and another 32 for shorter unit test jobs. The latter probably 
> don't have much impact, but the former is a non-trivial amount. It may 
> not erase the entire 2300+ job queue that we have right now, but it 
> seems like it should help.
> 
> > 
> > If we want to set up better monitoring and measuring and try it again we 
> > can do that. But we probably want to measure queue sizes with and without 
> > the change like that to better understand if it helps.
> 
> This seems like good information to start capturing, otherwise we are 
> kind of just guessing. Is there something in infra already that we could
> use or would it need to be new tooling?

Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Alex Schultz
On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan  wrote:
>
> On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
> > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
> > >
> > > Tagging with tripleo since my suggestion below is specific to that 
> > > project.
> > >
> > > On 10/30/18 11:03 AM, Clark Boylan wrote:
> > > > Hello everyone,
> > > >
> > > > A little while back I sent email explaining how the gate queues work 
> > > > and how fixing bugs helps us test and merge more code. All of this 
> > > > is still true and we should keep pushing to improve our testing
> > > > to avoid gate resets.
> > > >
> > > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In 
> > > > the process of doing this we had to restart Zuul which brought in a new 
> > > > logging feature that exposes node resource usage by jobs. Using this 
> > > > data I've been able to generate some report information on where our 
> > > > node demand is going. This change [0] produces this report [1].
> > > >
> > > > As with optimizing software we want to identify which changes will have 
> > > > the biggest impact and to be able to measure whether or not changes 
> > > > have had an impact once we have made them. Hopefully this information 
> > > > is a start at doing that. Currently we can only look back to the point 
> > > > Zuul was restarted, but we have a thirty day log rotation for this 
> > > > service and should be able to look at a month's worth of data going 
> > > > forward.
> > > >
> > > > Looking at the data you might notice that Tripleo is using many more 
> > > > node resources than our other projects. They are aware of this and have 
> > > > a plan [2] to reduce their resource consumption. We'll likely be using 
> > > > this report generator to check progress of this plan over time.
> > >
> > > I know at one point we had discussed reducing the concurrency of the
> > > tripleo gate to help with this. Since tripleo is still using >50% of the
> > > resources it seems like maybe we should revisit that, at least for the
> > > short-term until the more major changes can be made? Looking through the
> > > merge history for tripleo projects I don't see a lot of cases (any, in
> > > fact) where more than a dozen patches made it through anyway*, so I
> > > suspect it wouldn't have a significant impact on gate throughput, but it
> > > would free up quite a few nodes for other uses.
> > >
> >
> > It's the failures in gate and resets.  At this point I think it would
> > be a good idea to turn down the concurrency of the tripleo queue in
> > the gate if possible. As of late it's been timeouts but we've been
> > unable to track down why it's timing out specifically.  I personally
> > have a feeling it's the container download times since we do not have
> > a local registry available and are only able to leverage the mirrors
> > for some levels of caching. Unfortunately we don't get the best
> > information about this out of docker (or the mirrors) and it's really
> > hard to determine what exactly makes things run a bit slower.
>
> We actually tried this not too long ago 
> https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
>  but decided to revert it because it didn't decrease the check queue backlog 
> significantly. We were still running at several hours behind most of the time.
>
> If we want to set up better monitoring and measuring and try it again we can 
> do that. But we probably want to measure queue sizes with and without the 
> change like that to better understand if it helps.
>
> As for container image download times can we quantify that via docker logs? 
> Basically sum up the amount of time spent by a job downloading images so that 
> we can see what the impact is but also measure if changes improve that? As 
> for other ideas improving things seems like many of the images that tripleo 
> use are quite large. I recall seeing a > 600MB image just for rsyslog. 
> Wouldn't it be advantageous for both the gate and tripleo in the real world 
> to trim the size of those images (which should improve download times). In 
> any case quantifying the size of the downloads and trimming those if possible 
> is likely also worthwhile.
>

So it's not that simple: we don't just download all the images in a
distinct task, and there isn't any information provided around
size/speed AFAIK.  Additionally, we aren't doing anything special with
the images (it's mostly Kolla-built containers with a handful of
tweaks), so that's just the size of the containers.  I am currently
working on reducing any tripleo-specific dependencies (i.e. removal of
instack-undercloud, etc.) in hopes that we'll shave off some of the
dependencies, but it seems that there's a larger (bloat) issue around
containers in general.  I have no idea why the rsyslog container would
be 600M, but yeah, that does seem excessive.

> Clark
>
> 

Re: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade

2018-10-30 Thread Emilien Macchi
A bit of an update here:

- We merged the patch in openstack/paunch that stops the Docker container if
we try to start a Podman container.
- We switched the undercloud upgrade job to test upgrades from Docker to
Podman (for now, containers are stopped in Docker and then started in
Podman).
- We are now looking at how and where to remove the Docker containers once
the upgrade is finished. For that work, I started with the Undercloud and
patched tripleoclient to run the post_upgrade_tasks, which to me is a good
place to run docker rm.

Please look:
- tripleoclient / run post_upgrade_tasks when upgrading
standalone/undercloud: https://review.openstack.org/614349
- THT: a prototype of how we would remove the Docker containers:
https://review.openstack.org/611092

Note: for now we assume that Docker is still available on the host after
the upgrade, as we are testing things under CentOS 7. I'm aware that this
assumption can change in the future, but we'll iterate when it does.
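
(A rough sketch of the stop-in-Docker / start-in-Podman / docker rm flow
described above, using plain CLI calls. The container name is hypothetical;
in TripleO the real logic lives in paunch and in THT upgrade_tasks /
post_upgrade_tasks:)

    import subprocess

    def docker_has(name):
        # List Docker containers matching the name filter.
        out = subprocess.run(
            ["docker", "ps", "-a", "--filter", f"name={name}",
             "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True).stdout.split()
        return name in out

    def migrate(name):
        if docker_has(name):
            # upgrade_tasks: stop the Docker-managed container.
            subprocess.run(["docker", "stop", name], check=True)
        # Start the Podman replacement (assumes it was already created,
        # e.g. by paunch generating the container and its systemd unit).
        subprocess.run(["podman", "start", name], check=True)
        if docker_has(name):
            # post_upgrade_tasks: remove the stopped Docker container.
            subprocess.run(["docker", "rm", name], check=True)

    migrate("haproxy")  # hypothetical container name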

What I need from the upgrade team is feedback on this workflow, and to see
if we can re-use these bits, originally tested on the Undercloud /
Standalone, for the Overcloud as well.

Thanks for the feedback,


On Fri, Oct 19, 2018 at 8:00 AM Emilien Macchi  wrote:

> On Fri, Oct 19, 2018 at 4:24 AM Giulio Fidente 
> wrote:
>
>> 1) create the podman systemd unit
>> 2) delete the docker container
>>
>
> We finally went with "stop the docker container"
>
> 3) start the podman container
>>
>
> and 4) delete the docker container later in THT upgrade_tasks.
>
> And yes +1 to do the same in ceph-ansible if possible.
> --
> Emilien Macchi
>


-- 
Emilien Macchi


Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Ben Nemec



On 10/30/18 1:25 PM, Clark Boylan wrote:
> On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
>> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
>>>
>>> Tagging with tripleo since my suggestion below is specific to that project.
>>>
>>> On 10/30/18 11:03 AM, Clark Boylan wrote:
>>>> Hello everyone,
>>>>
>>>> A little while back I sent email explaining how the gate queues work and
>>>> how fixing bugs helps us test and merge more code. All of this is still
>>>> true and we should keep pushing to improve our testing to avoid gate
>>>> resets.
>>>>
>>>> Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the
>>>> process of doing this we had to restart Zuul which brought in a new logging
>>>> feature that exposes node resource usage by jobs. Using this data I've been
>>>> able to generate some report information on where our node demand is going.
>>>> This change [0] produces this report [1].
>>>>
>>>> As with optimizing software we want to identify which changes will have the
>>>> biggest impact and to be able to measure whether or not changes have had an
>>>> impact once we have made them. Hopefully this information is a start at
>>>> doing that. Currently we can only look back to the point Zuul was restarted,
>>>> but we have a thirty day log rotation for this service and should be able to
>>>> look at a month's worth of data going forward.
>>>>
>>>> Looking at the data you might notice that Tripleo is using many more node
>>>> resources than our other projects. They are aware of this and have a plan [2]
>>>> to reduce their resource consumption. We'll likely be using this report
>>>> generator to check progress of this plan over time.
>>>
>>> I know at one point we had discussed reducing the concurrency of the
>>> tripleo gate to help with this. Since tripleo is still using >50% of the
>>> resources it seems like maybe we should revisit that, at least for the
>>> short-term until the more major changes can be made? Looking through the
>>> merge history for tripleo projects I don't see a lot of cases (any, in
>>> fact) where more than a dozen patches made it through anyway*, so I
>>> suspect it wouldn't have a significant impact on gate throughput, but it
>>> would free up quite a few nodes for other uses.
>>
>> It's the failures in gate and resets.  At this point I think it would
>> be a good idea to turn down the concurrency of the tripleo queue in
>> the gate if possible. As of late it's been timeouts but we've been
>> unable to track down why it's timing out specifically.  I personally
>> have a feeling it's the container download times since we do not have
>> a local registry available and are only able to leverage the mirrors
>> for some levels of caching. Unfortunately we don't get the best
>> information about this out of docker (or the mirrors) and it's really
>> hard to determine what exactly makes things run a bit slower.
>
> We actually tried this not too long ago
> https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
> but decided to revert it because it didn't decrease the check queue backlog
> significantly. We were still running at several hours behind most of the time.

I'm surprised to hear that. Counting the tripleo jobs in the gate at
positions 11-20 right now, I see around 84 nodes tied up in long-running
jobs and another 32 for shorter unit test jobs. The latter probably
don't have much impact, but the former is a non-trivial amount. It may
not erase the entire 2300+ job queue that we have right now, but it
seems like it should help.

> If we want to set up better monitoring and measuring and try it again we can
> do that. But we probably want to measure queue sizes with and without the
> change like that to better understand if it helps.

This seems like good information to start capturing, otherwise we are
kind of just guessing. Is there something in infra already that we could
use or would it need to be new tooling?

> As for container image download times can we quantify that via docker logs?
> Basically sum up the amount of time spent by a job downloading images so that
> we can see what the impact is but also measure if changes improve that? As for
> other ideas improving things seems like many of the images that tripleo use
> are quite large. I recall seeing a > 600MB image just for rsyslog. Wouldn't it
> be advantageous for both the gate and tripleo in the real world to trim the
> size of those images (which should improve download times). In any case
> quantifying the size of the downloads and trimming those if possible is likely
> also worthwhile.
>
> Clark

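
(On the question of existing tooling: a minimal sketch that polls Zuul's
public status feed and records per-pipeline queue sizes over time. The
status.json layout, pipelines containing change_queues containing heads, is
the commonly published one, but treat it as an assumption:)

    import json
    import time
    import urllib.request

    STATUS_URL = "http://zuul.openstack.org/status.json"

    def queue_sizes():
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.load(resp)
        sizes = {}
        for pipeline in status.get("pipelines", []):
            total = 0
            for queue in pipeline.get("change_queues", []):
                for head in queue.get("heads", []):
                    total += len(head)  # items queued in this head
            sizes[pipeline["name"]] = total
        return sizes

    while True:
        print(int(time.time()), queue_sizes())  # or append to a CSV
        time.sleep(300)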




Re: [openstack-dev] [Searchlight][release] Searchlight will release Stein-1

2018-10-30 Thread Mohammed Naser
Yay!

Congratulations on the first Stein release, well done with your work
in looking after Searchlight so far.
On Tue, Oct 30, 2018 at 6:37 AM Trinh Nguyen  wrote:
>
> Hi team,
>
> I'm doing a release for Searchlight projects (searchlight, searchlight-ui, 
> python-searchlightclient) [1]. Please help to review and make sure everything 
> is ok.
>
> [1] https://review.openstack.org/#/c/614066/
>
> Finally \m/ :D
>
> Bests,
>
> --
> Trinh Nguyen
> www.edlab.xyz
>



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling

2018-10-30 Thread Chris Dent

On Tue, 30 Oct 2018, Mohammed Naser wrote:

> We spoke about this today in the OpenStack Ansible meeting, we've come
> up with the following steps:

Great! Thank you, Guilherme, and Lee very much.

> 1) Create a role for placement which will be called `os_placement`
> located in `openstack/openstack-ansible-os_placement`
> 2) Integrate that role with the OSA master and stop using the built-in
> placement service
> 3) Update the playbooks to handle upgrades and verify using our
> periodic upgrade jobs

Makes sense.

> The difficult task really comes in the upgrade jobs. I really hope
> that we can get some help on this, as this probably puts a bit of a
> load already on Guilherme. So, anyone up to look into that part when
> the first 2 are completed? :)


The upgrade-nova script in https://review.openstack.org/#/c/604454/
has been written to make it pretty clear what each of the steps
mean. With luck those steps can translate to both the ansible and
tripleo environments.

Please feel free to add me to any of the reviews and come calling in
#openstack-placement with questions if there are any.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent                 tw: @anticdent


Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Mohammed Naser
Echoing the words of everyone: it takes a tremendous amount of work and
patience to lead an effort like this.

THANK YOU!

Sent from my iPhone

> On Oct 30, 2018, at 6:14 PM, Doug Hellmann  wrote:
> 
> "Ian Y. Choi"  writes:
> 
>> Hello,
>> 
>> I have been involved in organizing & mentoring Korean people in OpenStack
>> upstream contribution for about the last two months,
>> and would like to share the results with community members.
> 
> This is an excellent success story, Ian, thank you for sharing it and
> for leading the effort.
> 
> Doug

Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling

2018-10-30 Thread Emilien Macchi
On the TripleO side, it sounds like Lee Yarwood is taking the lead with a
first commit in puppet-placement:
https://review.openstack.org/#/c/604182/

Lee, can you confirm that you and your team are working on it for Stein
cycle?

On Thu, Oct 25, 2018 at 1:34 PM Matt Riedemann  wrote:

> Hello OSA/TripleO people,
>
> A plan/checklist was put in place at the Stein PTG for extracting
> placement from nova [1]. The first item in that list is done in grenade
> [2], which is the devstack-based upgrade project in the integrated gate.
> That should serve as a template for the necessary upgrade steps in
> deployment projects. The related devstack change for extracted placement
> on the master branch (Stein) is [3]. Note that change has some
> dependencies.
>
> The second point in the plan from the PTG was getting extracted
> placement upgrade tooling support in a deployment project, notably
> TripleO (and/or OpenStackAnsible).
>
> Given the grenade change is done and passing tests, TripleO/OSA should
> be able to start coding up and testing an upgrade step when going from
> Rocky to Stein. My question is who can we name as an owner in either
> project to start this work? Because we really need to be starting this
> as soon as possible to flush out any issues before they are too late to
> correct in Stein.
>
> So if we have volunteers or better yet potential patches that I'm just
> not aware of, please speak up here so we know who to contact about
> status updates and if there are any questions with the upgrade.
>
> [1]
>
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html
> [2] https://review.openstack.org/#/c/604454/
> [3] https://review.openstack.org/#/c/600162/
>
> --
>
> Thanks,
>
> Matt
>


-- 
Emilien Macchi
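
(For context, a very rough sketch of the data-copy step such upgrade tooling
has to perform: in Rocky, placement data lives in the nova_api database, so
an upgrade to extracted placement copies those tables into a new placement
database. The table list and database names below are illustrative; the
authoritative steps are in the grenade change referenced above:)

    import subprocess

    # Assumption: the placement-related tables in nova_api (illustrative).
    TABLES = [
        "resource_providers", "resource_provider_aggregates",
        "resource_provider_traits", "resource_classes", "traits",
        "inventories", "allocations", "consumers", "projects", "users",
        "placement_aggregates",
    ]

    # Dump the tables from nova_api and load them into placement.
    dump = subprocess.run(
        ["mysqldump", "nova_api", *TABLES],
        capture_output=True, check=True).stdout
    subprocess.run(["mysql", "placement"], input=dump, check=True)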


Re: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open

2018-10-30 Thread iain macdonnell
I must be losing it. On what planet is "Tiny Town" a single word, and
"Troublesome" not more than 10 characters?

~iain


On Mon, Oct 29, 2018 at 10:41 PM Tony Breeds  wrote:
>
> Hi folks,
>
> It is time again to cast your vote for the naming of the T Release.
> As with last time we'll use a public polling option over per user private URLs
> for voting.  This means, everybody should proceed to use the following URL to
> cast their vote:
>
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df=b9e448b340787f0e
>
> We've selected a public poll to ensure that the whole community, not just
> gerrit change owners, get a vote.  Also the size of our community has grown
> such that we can overwhelm CIVS if using private URLs.  A public poll can
> mean that users behind NAT, proxy servers or firewalls may receive a message
> saying that their vote has already been lodged; if this happens please try
> another IP.
>
> Because this is a public poll, results will currently be viewable only by
> myself until the poll closes. Once closed, I'll post the URL making the
> results viewable to everybody. This was done to avoid everybody seeing the
> results while the public poll is running.
>
> The poll will officially end on 2018-11-08 00:00:00+00:00 [1], and results
> will be posted shortly after.
>
> [1] https://governance.openstack.org/tc/reference/release-naming.html
> ---
>
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the T release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
>
> Release Name Criteria
> -
>
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
>
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
>
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
>
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
>
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
>
> Exact Geographic Region
> ---
>
> The Geographic Region from which names for the T release will come is Colorado
>
> Proposed Names
> --
>
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas : the Tank Engine
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria (accepted by the TC)
> -
>
> * Train : Many attendees of the first Denver PTG have a story to tell about
>   the trains near the PTG hotel.  We could celebrate those stories with this
>   name.
>
> Yours Tony.
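
(A quick sketch of the name criteria quoted above, single word, at most 10
characters, ISO basic Latin letters, starting with "T", which shows why
"Tiny Town" and "Troublesome" both fail, per the correction earlier in the
thread:)

    import re

    def is_viable(name, letter="T"):
        # One word, starts with the release letter, max 10 Latin letters.
        return bool(re.fullmatch(f"{letter}[A-Za-z]{{0,9}}", name))

    for name in ("Tarryall", "Tiny Town", "Troublesome", "Telluride"):
        print(f"{name!r:>15} viable: {is_viable(name)}")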



Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Clark Boylan
On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
> >
> > Tagging with tripleo since my suggestion below is specific to that project.
> >
> > On 10/30/18 11:03 AM, Clark Boylan wrote:
> > > Hello everyone,
> > >
> > > A little while back I sent email explaining how the gate queues work and 
> > > how fixing bugs helps us test and merge more code. All of this still is 
> > > still true and we should keep pushing to improve our testing to avoid 
> > > gate resets.
> > >
> > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In 
> > > the process of doing this we had to restart Zuul which brought in a new 
> > > logging feature that exposes node resource usage by jobs. Using this data 
> > > I've been able to generate some report information on where our node 
> > > demand is going. This change [0] produces this report [1].
> > >
> > > As with optimizing software we want to identify which changes will have 
> > > the biggest impact and to be able to measure whether or not changes have 
> > > had an impact once we have made them. Hopefully this information is a 
> > > start at doing that. Currently we can only look back to the point Zuul 
> > > was restarted, but we have a thirty day log rotation for this service and 
> > > should be able to look at a month's worth of data going forward.
> > >
> > > Looking at the data you might notice that Tripleo is using many more node 
> > > resources than our other projects. They are aware of this and have a plan 
> > > [2] to reduce their resource consumption. We'll likely be using this 
> > > report generator to check progress of this plan over time.
> >
> > I know at one point we had discussed reducing the concurrency of the
> > tripleo gate to help with this. Since tripleo is still using >50% of the
> > resources it seems like maybe we should revisit that, at least for the
> > short-term until the more major changes can be made? Looking through the
> > merge history for tripleo projects I don't see a lot of cases (any, in
> > fact) where more than a dozen patches made it through anyway*, so I
> > suspect it wouldn't have a significant impact on gate throughput, but it
> > would free up quite a few nodes for other uses.
> >
> 
> It's the failures in gate and resets.  At this point I think it would
> be a good idea to turn down the concurrency of the tripleo queue in
> the gate if possible. As of late it's been timeouts but we've been
> unable to track down why it's timing out specifically.  I personally
> have a feeling it's the container download times since we do not have
> a local registry available and are only able to leverage the mirrors
> for some levels of caching. Unfortunately we don't get the best
> information about this out of docker (or the mirrors) and it's really
> hard to determine what exactly makes things run a bit slower.

We actually tried this not too long ago 
https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
 but decided to revert it because it didn't decrease the check queue backlog 
significantly. We were still running at several hours behind most of the time.

If we want to set up better monitoring and measuring and try it again we can do 
that. But we probably want to measure queue sizes with and without the change 
like that to better understand if it helps.

As for container image download times can we quantify that via docker logs? 
Basically sum up the amount of time spent by a job downloading images so that 
we can see what the impact is but also measure if changes improve that? As for 
other ideas improving things seems like many of the images that tripleo use are 
quite large. I recall seeing a > 600MB image just for rsyslog. Wouldn't it be 
advantageous for both the gate and tripleo in the real world to trim the size 
of those images (which should improve download times). In any case quantifying 
the size of the downloads and trimming those if possible is likely also 
worthwhile.

Clark
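
(To make the measurement idea above concrete: a minimal sketch, not existing
tooling, that times each docker pull and reports the stored image sizes; the
image list is illustrative:)

    import subprocess
    import time

    IMAGES = ["docker.io/library/centos:7"]  # illustrative image list

    total = 0.0
    for image in IMAGES:
        start = time.monotonic()
        subprocess.run(["docker", "pull", image], check=True)
        elapsed = time.monotonic() - start
        total += elapsed
        size = subprocess.run(
            ["docker", "image", "inspect", "--format", "{{.Size}}", image],
            capture_output=True, text=True, check=True).stdout.strip()
        print(f"{image}: {elapsed:.1f}s, {int(size) / 1e6:.0f} MB")

    print(f"total pull time: {total:.1f}s")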



Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling

2018-10-30 Thread Mohammed Naser
Hi there:

We spoke about this today in the OpenStack Ansible meeting, we've come
up with the following steps:

1) Create a role for placement which will be called `os_placement`
located in `openstack/openstack-ansible-os_placement`
2) Integrate that role with the OSA master and stop using the built-in
placement service
3) Update the playbooks to handle upgrades and verify using our
periodic upgrade jobs

For #1, Guilherme from the OSA team will be taking care of creating
the role initially; I'm hoping that maybe we can get it done sometime
this week.  I think it'll probably take another week to integrate it
into the main repo.

The difficult task really comes in the upgrade jobs. I really hope
that we can get some help on this, as this probably puts a bit of a
load already on Guilherme. So, anyone up to look into that part when
the first 2 are completed? :)

Thanks,
Mohammed



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Alex Schultz
On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
>
> Tagging with tripleo since my suggestion below is specific to that project.
>
> On 10/30/18 11:03 AM, Clark Boylan wrote:
> > Hello everyone,
> >
> > A little while back I sent email explaining how the gate queues work and 
> > how fixing bugs helps us test and merge more code. All of this is
> > still true and we should keep pushing to improve our testing to avoid gate 
> > resets.
> >
> > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the 
> > process of doing this we had to restart Zuul which brought in a new logging 
> > feature that exposes node resource usage by jobs. Using this data I've been 
> > able to generate some report information on where our node demand is going. 
> > This change [0] produces this report [1].
> >
> > As with optimizing software we want to identify which changes will have the 
> > biggest impact and to be able to measure whether or not changes have had an 
> > impact once we have made them. Hopefully this information is a start at 
> > doing that. Currently we can only look back to the point Zuul was 
> > restarted, but we have a thirty day log rotation for this service and 
> > should be able to look at a month's worth of data going forward.
> >
> > Looking at the data you might notice that Tripleo is using many more node 
> > resources than our other projects. They are aware of this and have a plan 
> > [2] to reduce their resource consumption. We'll likely be using this report 
> > generator to check progress of this plan over time.
>
> I know at one point we had discussed reducing the concurrency of the
> tripleo gate to help with this. Since tripleo is still using >50% of the
> resources it seems like maybe we should revisit that, at least for the
> short-term until the more major changes can be made? Looking through the
> merge history for tripleo projects I don't see a lot of cases (any, in
> fact) where more than a dozen patches made it through anyway*, so I
> suspect it wouldn't have a significant impact on gate throughput, but it
> would free up quite a few nodes for other uses.
>

It's the failures in gate and resets.  At this point I think it would
be a good idea to turn down the concurrency of the tripleo queue in
the gate if possible. As of late it's been timeouts but we've been
unable to track down why it's timing out specifically.  I personally
have a feeling it's the container download times since we do not have
a local registry available and are only able to leverage the mirrors
for some levels of caching. Unfortunately we don't get the best
information about this out of docker (or the mirrors) and it's really
hard to determine what exactly makes things run a bit slower.

I've asked about the status of moving the scenarios off of multinode
to standalone, which would halve the number of systems being run for
these jobs. It's currently next on the list of things to tackle after
we get a single Fedora 28 job up and running.

Thanks,
-Alex

> *: I have no actual stats to back that up, I'm just looking through the
> IRC backlog for merge bot messages. If such stats do exist somewhere we
> should look at them instead. :-)
>
> >
> > Also related to the long queue backlogs is this proposal [3] to change how 
> > Zuul prioritizes resource allocations to try to be more fair.
> >
> > [0] https://review.openstack.org/#/c/613674/
> > [1] http://paste.openstack.org/show/733644/
> > [2] 
> > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
> > [3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html
> >
> > If you find any of this interesting and would like to help feel free to 
> > reach out to myself or the infra team.
> >
> > Thank you,
> > Clark
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ops Meetups team meeting 2018-10-30

2018-10-30 Thread Chris Morgan
Brief meeting today on #openstack-operators, minutes below.

If you are attending Berlin, please start contributing to the Forum by
selecting sessions of interest and then adding to the etherpads (see
https://wiki.openstack.org/wiki/Forum/Berlin2018). I hear there's going to
be a really great one about Ceph, for example.

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.log.html

Chris

-- 
Chris Morgan 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] agenda for TC meeting 1 Nov 1400 UTC

2018-10-30 Thread Doug Hellmann
TC members,

The TC will be meeting on 1 Nov at 1400 UTC in #openstack-tc to discuss
some of our ongoing initiatives. Here is the agenda for this week.

* meeting procedures

* discussion of topics for joint leadership meeting at Summit in
  Berlin

* completing TC liaison assignments
** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams

* documenting chair responsibilities
** https://etherpad.openstack.org/p/tc-chair-responsibilities

* reviewing the health-check check list
** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Health_check_list

* deciding next steps on technical vision statement
** https://review.openstack.org/592205

* deciding next steps on python 3 and distro versions for PTI
** https://review.openstack.org/610708 Add optional python3.7 unit test 
enablement to python3-first
** https://review.openstack.org/611010 Make Python 3 testing requirement less 
specific
** https://review.openstack.org/611080 Explicitly declare Stein supported 
runtimes
** https://review.openstack.org/613145 Resolution on keeping up with Python 3 
releases

* reviews needing attention
** https://review.openstack.org/613268 Indicate relmgt style for each 
deliverable
** https://review.openstack.org/613856 Remove Dragonflow from the official 
projects list

If you have suggestions for topics for the next meeting (6 Dec), please
add them to the wiki at
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Ben Nemec

Tagging with tripleo since my suggestion below is specific to that project.

On 10/30/18 11:03 AM, Clark Boylan wrote:

Hello everyone,

A little while back I sent an email explaining how the gate queues work and how 
fixing bugs helps us test and merge more code. All of this is still true 
and we should keep pushing to improve our testing to avoid gate resets.

Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the 
process of doing this we had to restart Zuul which brought in a new logging 
feature that exposes node resource usage by jobs. Using this data I've been 
able to generate some report information on where our node demand is going. 
This change [0] produces this report [1].

As with optimizing software, we want to identify which changes will have the 
biggest impact and to be able to measure whether or not changes have had an 
impact once we have made them. Hopefully this information is a start at doing 
that. Currently we can only look back to the point Zuul was restarted, but we 
have a thirty day log rotation for this service and should be able to look at a 
month's worth of data going forward.

Looking at the data, you might notice that TripleO is using many more node 
resources than our other projects. They are aware of this and have a plan [2] 
to reduce their resource consumption. We'll likely be using this report 
generator to check progress of this plan over time.


I know at one point we had discussed reducing the concurrency of the 
tripleo gate to help with this. Since tripleo is still using >50% of the 
resources it seems like maybe we should revisit that, at least for the 
short-term until the more major changes can be made? Looking through the 
merge history for tripleo projects I don't see a lot of cases (any, in 
fact) where more than a dozen patches made it through anyway*, so I 
suspect it wouldn't have a significant impact on gate throughput, but it 
would free up quite a few nodes for other uses.


*: I have no actual stats to back that up, I'm just looking through the 
IRC backlog for merge bot messages. If such stats do exist somewhere we 
should look at them instead. :-)




Also related to the long queue backlogs is this proposal [3] to change how Zuul 
prioritizes resource allocations to try to be more fair.

[0] https://review.openstack.org/#/c/613674/
[1] http://paste.openstack.org/show/733644/
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
[3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html

If you find any of this interesting and would like to help feel free to reach 
out to myself or the infra team.

Thank you,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Doug Hellmann
"Ian Y. Choi"  writes:

> Hello,
>
> I got involved in organizing & mentoring Korean people for OpenStack
> upstream contribution over roughly the last two months,
> and would like to share the results with community members.
>
> In total, nine mentees started learning OpenStack, contributed, and
> ultimately stayed on as volunteers for
>   1) developing an OpenStack mobile app for better mobile user interfaces
> and experiences
>      (inspired by https://github.com/stackerz/app which worked on the Juno
> release), and
>   2) translating official OpenStack project artifacts including documents,
>   and the Container Whitepaper (
> https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).
>
> Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin,
> Sungjin Kang, and Andrew Yongjoon Kong)
> all helped organize a total of 8 offline meetups plus one mini-hackathon and
> mentored attendees.
>
> The following is a brief summary:
>   - The "OpenStack Controller" Android app is available on the Play Store
>    :
> https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
>     (GitHub: https://github.com/kosslab-kr/openstack-controller )
>
>   - Most high-priority projects (though it is not currently a string freeze
> period) and documents are
>     100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide,
> and the Container Whitepaper.
>
>   - A total of 18,695 words were translated into Korean by four contributors
>    (confirmed through the Zanata API:
> https://translate.openstack.org/rest/stats/user/[Zanata
> ID]/2018-08-16..2018-10-25 ):
>
> +------------+---------------+-----------------+
> | Zanata ID  | Name          | Number of words |
> +------------+---------------+-----------------+
> | ardentpark | Soonyeul Park | 12517           |
> +------------+---------------+-----------------+
> | bnitech    | Dongbim Im    | 693             |
> +------------+---------------+-----------------+
> | csucom     | Sungwook Choi | 4397            |
> +------------+---------------+-----------------+
> | jaeho93    | Jaeho Cho     | 1088            |
> +------------+---------------+-----------------+
>
>   - The projects translated into Korean are listed below:
>
> +-------------------------------------+-----------------+
> | Project                             | Number of words |
> +-------------------------------------+-----------------+
> | api-site                            | 20              |
> +-------------------------------------+-----------------+
> | cinder                              | 405             |
> +-------------------------------------+-----------------+
> | designate-dashboard                 | 4               |
> +-------------------------------------+-----------------+
> | horizon                             | 3226            |
> +-------------------------------------+-----------------+
> | i18n                                | 434             |
> +-------------------------------------+-----------------+
> | ironic                              | 4               |
> +-------------------------------------+-----------------+
> | Leveraging Containers and OpenStack | 5480            |
> +-------------------------------------+-----------------+
> | neutron-lbaas-dashboard             | 5               |
> +-------------------------------------+-----------------+
> | openstack-helm                      | 8835            |
> +-------------------------------------+-----------------+
> | trove-dashboard                     | 89              |
> +-------------------------------------+-----------------+
> | zun-ui                              | 193             |
> +-------------------------------------+-----------------+
>
> I would really like to thank all co-mentors and participants in
> such a big event promoting OpenStack contribution.
> The venue and food were supported by Korea Open Source Software 
> Development Center ( https://kosslab.kr/ ).
>
>
> With many thanks,
>
> /Ian
>
> ___
> Community mailing list
> commun...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community

This is an excellent success story, Ian, thank you for sharing it and
for leading the effort.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-30 Thread Thomas Goirand
On 10/26/18 7:11 PM, Zane Bitter wrote:
> On 26/10/18 5:09 AM, Thomas Goirand wrote:
>> On 10/22/18 9:12 PM, Zane Bitter wrote:
>>> On 22/10/18 10:33 AM, Thomas Goirand wrote:
 This can only happen if we have supporting distribution packages for
 it.
 IMO, this is a call for using Debian Testing or even Sid in the gate.
>>>
>>> It depends on which versions we choose to support, but if necessary yes.
>>
>> If what we want is to have early detection of problems with latest
>> versions of Python, then there's not so many alternatives.
> 
> I think a lot depends on the relative timing of the Python release, the
> various distro release cycles, and the OpenStack release cycle. We
> established that for 3.7 that's the only way we could have done it in
> Rocky; for 3.8, who knows.

No need for a crystal ball...

Python 3.8 is scheduled to be released in summer 2019. As Buster is to
be frozen early the same year, Buster should be out before Python 3.8. So
there's a good chance that Python 3.8 will be in Debian Sid/Bullseye before
anywhere else again, probably just after the OpenStack T release, meaning
OpenStack most likely will be broken again in Debian Sid.

> I agree that bugs with future versions of Python are always worth fixing
> ASAP, whether or not we are able to test them in the gate.

:)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Michael Johnson
This is awesome Ian.  Thanks for all of the work on this!

Michael

On Tue, Oct 30, 2018 at 8:28 AM Frank Kloeker  wrote:
>
> Hi Ian,
>
> thanks for sharing. What a great user story about community work and
> contributing to OpenStack. I think you did a great job as a mentor and
> organizer. I want to keep you with us.
>
> Welcome, new contributors, and many thanks for the translation and
> programming work. Hopefully you feel comfortable and have enough energy and
> fun to keep working on OpenStack.
>
> kind regards
> Frank
>
> Am 2018-10-30 15:10, schrieb Ian Y. Choi:
> > Hello,
> >
> > I got involved in organizing & mentoring Korean people for OpenStack
> > upstream contribution over roughly the last two months,
> > and would like to share the results with community members.
> >
> > In total, nine mentees started learning OpenStack, contributed, and
> > ultimately stayed on as volunteers for
> >  1) developing an OpenStack mobile app for better mobile user interfaces
> > and experiences
> >     (inspired by https://github.com/stackerz/app which worked on
> > the Juno release), and
> >  2) translating official OpenStack project artifacts including
> > documents,
> >  and the Container Whitepaper (
> > https://www.openstack.org/containers/leveraging-containers-and-openstack/
> > ).
> >
> > Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin,
> > Sungjin Kang, and Andrew Yongjoon Kong)
> > all helped organize a total of 8 offline meetups plus one mini-hackathon
> > and mentored attendees.
> >
> > The following is a brief summary:
> >  - The "OpenStack Controller" Android app is available on the Play Store
> >   :
> > https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
> >    (GitHub: https://github.com/kosslab-kr/openstack-controller )
> >
> >  - Most high-priority projects (though it is not currently a string
> > freeze period) and documents are
> >    100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide,
> > and the Container Whitepaper.
> >
> >  - A total of 18,695 words were translated into Korean by four contributors
> >   (confirmed through the Zanata API:
> > https://translate.openstack.org/rest/stats/user/[Zanata
> > ID]/2018-08-16..2018-10-25 ):
> >
> > +------------+---------------+-----------------+
> > | Zanata ID  | Name          | Number of words |
> > +------------+---------------+-----------------+
> > | ardentpark | Soonyeul Park | 12517           |
> > +------------+---------------+-----------------+
> > | bnitech    | Dongbim Im    | 693             |
> > +------------+---------------+-----------------+
> > | csucom     | Sungwook Choi | 4397            |
> > +------------+---------------+-----------------+
> > | jaeho93    | Jaeho Cho     | 1088            |
> > +------------+---------------+-----------------+
> >
> >  - The projects translated into Korean are listed below:
> >
> > +-------------------------------------+-----------------+
> > | Project                             | Number of words |
> > +-------------------------------------+-----------------+
> > | api-site                            | 20              |
> > +-------------------------------------+-----------------+
> > | cinder                              | 405             |
> > +-------------------------------------+-----------------+
> > | designate-dashboard                 | 4               |
> > +-------------------------------------+-----------------+
> > | horizon                             | 3226            |
> > +-------------------------------------+-----------------+
> > | i18n                                | 434             |
> > +-------------------------------------+-----------------+
> > | ironic                              | 4               |
> > +-------------------------------------+-----------------+
> > | Leveraging Containers and OpenStack | 5480            |
> > +-------------------------------------+-----------------+
> > | neutron-lbaas-dashboard             | 5               |
> > +-------------------------------------+-----------------+
> > | openstack-helm                      | 8835            |
> > +-------------------------------------+-----------------+
> > | trove-dashboard                     | 89              |
> > +-------------------------------------+-----------------+
> > | zun-ui                              | 193             |
> > +-------------------------------------+-----------------+
> >
> > I would really like to thank all co-mentors and participants in
> > such a big event promoting OpenStack contribution.
> > The venue and food were supported by Korea Open Source Software
> > Development Center ( https://kosslab.kr/ ).
> >
> >
> > With many thanks,
> >
> > /Ian
> >
> > ___
> > Community mailing list
> > commun...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>
>
> __
> OpenStack Development Mailing 

[openstack-dev] Zuul Queue backlogs and resource usage

2018-10-30 Thread Clark Boylan
Hello everyone,

A little while back I sent an email explaining how the gate queues work and how 
fixing bugs helps us test and merge more code. All of this is still true 
and we should keep pushing to improve our testing to avoid gate resets.

Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the 
process of doing this we had to restart Zuul which brought in a new logging 
feature that exposes node resource usage by jobs. Using this data I've been 
able to generate some report information on where our node demand is going. 
This change [0] produces this report [1].

As with optimizing software, we want to identify which changes will have the 
biggest impact and to be able to measure whether or not changes have had an 
impact once we have made them. Hopefully this information is a start at doing 
that. Currently we can only look back to the point Zuul was restarted, but we 
have a thirty day log rotation for this service and should be able to look at a 
month's worth of data going forward.

Looking at the data, you might notice that TripleO is using many more node 
resources than our other projects. They are aware of this and have a plan [2] 
to reduce their resource consumption. We'll likely be using this report 
generator to check progress of this plan over time.

Also related to the long queue backlogs is this proposal [3] to change how Zuul 
prioritizes resource allocations to try to be more fair.

[0] https://review.openstack.org/#/c/613674/
[1] http://paste.openstack.org/show/733644/
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
[3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html

If you find any of this interesting and would like to help feel free to reach 
out to myself or the infra team.

Thank you,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-30 Thread Doug Hellmann
Zane Bitter  writes:

> On 19/10/18 11:17 AM, Zane Bitter wrote:
>> I'd like to propose that we handle this by setting up a unit test 
>> template in openstack-zuul-jobs for each release. So for Stein we'd have 
>> openstack-python3-stein-jobs. This template would contain:
>> 
>> * A voting gate job for the highest minor version of py3 we want to 
>> support in that release.
>> * A voting gate job for the lowest minor version of py3 we want to 
>> support in that release.
>> * A periodic job for any interim minor releases.
>> * (Starting late in the cycle) a non-voting check job for the highest 
>> minor version of py3 we want to support in the *next* release (if 
>> different), on the master branch only.
>> 
>> So, for example, (and this is still under active debate) for Stein we 
>> might have gating jobs for py35 and py37, with a periodic job for py36. 
>> The T jobs might only have voting py36 and py37 jobs, but late in the T 
>> cycle we might add a non-voting py38 job on master so that people who 
>> haven't switched to the U template yet can see what, if anything, 
>> they'll need to fix.
>
> Just to make it easier to visualise, here is an example for how the Zuul 
> config _might_ look now if we had adopted this proposal during Rocky:
>
> https://review.openstack.org/611947
>
> And instead of having a project-wide goal in Stein to add 
> `openstack-python36-jobs` to the list that currently includes 
> `openstack-python35-jobs` in each project's Zuul config[1], we'd have 
> had a goal to change `openstack-python3-rocky-jobs` to 
> `openstack-python3-stein-jobs` in each project's Zuul config.

If we set up the template before we branch stein for T, we could
generate a patch as part of the branching process.

Doug

>
> - ZB
>
>
> [1] 
> https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Project Update Etherpad

2018-10-30 Thread Ben Nemec
Good news! The Foundation found space for us to do a project update 
session, so now we need to figure out what to talk about. I've started 
an etherpad at 
https://etherpad.openstack.org/p/oslo-project-update-stein to list the 
possible topics. Please add or expand on the ones I've pre-populated if 
there's something you want to have covered. The current list is a five 
minute off-the-top-of-my-head thing, so don't assume it's complete. :-)


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Frank Kloeker

Hi Ian,

thanks for sharing. What a great user story about community work and 
contributing to OpenStack. I think you did a great job as a mentor and 
organizer. I want to keep you with us.


Welcome, new contributors, and many thanks for the translation and 
programming work. Hopefully you feel comfortable and have enough energy and 
fun to keep working on OpenStack.


kind regards
Frank

Am 2018-10-30 15:10, schrieb Ian Y. Choi:

Hello,

I got involved in organizing & mentoring Korean people for OpenStack
upstream contribution over roughly the last two months,
and would like to share the results with community members.

In total, nine mentees started learning OpenStack, contributed, and
ultimately stayed on as volunteers for
 1) developing an OpenStack mobile app for better mobile user interfaces
and experiences
    (inspired by https://github.com/stackerz/app which worked on
the Juno release), and
 2) translating official OpenStack project artifacts including 
documents,

 and the Container Whitepaper (
https://www.openstack.org/containers/leveraging-containers-and-openstack/
).

Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin,
Sungjin Kang, and Andrew Yongjoon Kong)
all helped organize a total of 8 offline meetups plus one mini-hackathon
and mentored attendees.

The following is a brief summary:
 - The "OpenStack Controller" Android app is available on the Play Store
  :
https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
   (GitHub: https://github.com/kosslab-kr/openstack-controller )

 - Most high-priority projects (though it is not currently a string
freeze period) and documents are
   100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide,
and the Container Whitepaper.

 - A total of 18,695 words were translated into Korean by four contributors
  (confirmed through the Zanata API:
https://translate.openstack.org/rest/stats/user/[Zanata
ID]/2018-08-16..2018-10-25 ):

+------------+---------------+-----------------+
| Zanata ID  | Name          | Number of words |
+------------+---------------+-----------------+
| ardentpark | Soonyeul Park | 12517           |
+------------+---------------+-----------------+
| bnitech    | Dongbim Im    | 693             |
+------------+---------------+-----------------+
| csucom     | Sungwook Choi | 4397            |
+------------+---------------+-----------------+
| jaeho93    | Jaeho Cho     | 1088            |
+------------+---------------+-----------------+

 - The projects translated into Korean are listed below:

+-------------------------------------+-----------------+
| Project                             | Number of words |
+-------------------------------------+-----------------+
| api-site                            | 20              |
+-------------------------------------+-----------------+
| cinder                              | 405             |
+-------------------------------------+-----------------+
| designate-dashboard                 | 4               |
+-------------------------------------+-----------------+
| horizon                             | 3226            |
+-------------------------------------+-----------------+
| i18n                                | 434             |
+-------------------------------------+-----------------+
| ironic                              | 4               |
+-------------------------------------+-----------------+
| Leveraging Containers and OpenStack | 5480            |
+-------------------------------------+-----------------+
| neutron-lbaas-dashboard             | 5               |
+-------------------------------------+-----------------+
| openstack-helm                      | 8835            |
+-------------------------------------+-----------------+
| trove-dashboard                     | 89              |
+-------------------------------------+-----------------+
| zun-ui                              | 193             |
+-------------------------------------+-----------------+

I would really like to thank all co-mentors and participants in
such a big event promoting OpenStack contribution.
The venue and food were supported by Korea Open Source Software
Development Center ( https://kosslab.kr/ ).


With many thanks,

/Ian

___
Community mailing list
commun...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-30 Thread John Garbutt
Hi,

Basically we should kill quota classes.

It required out-of-tree stuff that was never implemented, AFAIK.

When I checked with Kevin about this, my memory says the idea was that an
out-of-tree authorization plugin would populate context.quota_class with
something like "i_have_big_credit_limit" or
"i_have_prepaid_loads_limit", falling back to the default if blank. I
don't believe anyone ever used that system. It gives you groups of
pre-defined quota limits, rather than per-project overrides.

Either way, it should die, and now it's keystone's problem.
I subscribe to the idea that downstream operational scripting is the
currently preferred solution.
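
For reference, the two mechanisms being compared look roughly like this
through python-novaclient (a sketch from memory; the auth values are
placeholders):

  # Sketch only: per-project overrides vs. the quota-class template.
  from keystoneauth1 import loading, session
  from novaclient import client

  loader = loading.get_plugin_loader('password')
  auth = loader.load_from_options(
      auth_url='http://controller:5000/v3',        # placeholder endpoint
      username='admin', password='secret', project_name='admin',
      user_domain_id='default', project_domain_id='default')
  nova = client.Client('2.1', session=session.Session(auth=auth))

  # Per-project override via the os-quota-sets API:
  nova.quotas.update('PROJECT_ID', instances=20, cores=40)

  # The quota-classes "defaults template" discussed above; 'default' is
  # the only class that is actually applied anywhere:
  nova.quota_classes.update('default', instances=10, cores=20)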

Thanks,
johnthetubaguy

PS:
Sorry, I've been busy on SKA architecture for the last month or so; slowly
getting back up to speed.
On Fri, 26 Oct 2018 at 14:55, Jay Pipes  wrote:
>
> On 10/25/2018 02:44 PM, melanie witt wrote:
> > On Thu, 25 Oct 2018 14:00:08 -0400, Jay Pipes wrote:
> >> On 10/25/2018 01:38 PM, Chris Friesen wrote:
> >>> On 10/24/2018 9:10 AM, Jay Pipes wrote:
>  Nova's API has the ability to create "quota classes", which are
>  basically limits for a set of resource types. There is something
>  called the "default quota class" which corresponds to the limits in
>  the CONF.quota section. Quota classes are basically templates of
>  limits to be applied if the calling project doesn't have any stored
>  project-specific limits.
> 
>  Has anyone ever created a quota class that is different from "default"?
> >>>
> >>> The Compute API specifically says:
> >>>
> >>> "Only ‘default’ quota class is valid and used to set the default quotas,
> >>> all other quota class would not be used anywhere."
> >>>
> >>> What this API does provide is the ability to set new default quotas for
> >>> *all* projects at once rather than individually specifying new defaults
> >>> for each project.
> >>
> >> It's a "defaults template", yes.
> >>
> >> The alternative is, you know, to just set the default values in
> >> CONF.quota, which is what I said above. Or, if you want project X to
> >> have different quota limits from those CONF-driven defaults, then set
> >> the quotas for the project to some different values via the
> >> os-quota-sets API (or better yet, just use Keystone's /limits API when
> >> we write the "limits driver" into Nova). The issue is that the
> >> os-quota-classes API is currently blocking *me* writing that "limits
> >> driver" in Nova because I don't want to port nova-specific functionality
> >> (like quota classes) to a limits driver when the Keystone /limits
> >> endpoint doesn't have that functionality and nobody I know of has ever
> >> used it.
> >
> > When you say it's blocking you from writing the "limits driver" in nova,
> > are you saying you're picking up John's unified limits spec [1]? It's
> > been in -W mode and hasn't been updated in 4 weeks. In the spec,
> > migration from quota classes => registered limits and deprecation of the
> > existing quota API and quota classes is described.
> >
> > Cheers,
> > -melanie
> >
> > [1] https://review.openstack.org/602201
>
> Actually, I wasn't familiar with John's spec. I'll review it today.
>
> I was referring to my own attempts to clean up the quota system and
> remove all the limits-related methods from the QuotaDriver class...
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] linter jobs testing packaging data

2018-10-30 Thread Doug Hellmann
Earlier today I learned that we have a few repositories with linter jobs
failing (or at least reporting warnings) because they are running
"python setup.py check" to test that the packaging meta-data is OK.

This method of testing has been deprecated in favor of using the command
"twine check", which requires a bit of extra setup but performs multiple
checks on the built packages. Luckily, the test-release-openstack-python3
job already runs "twine check".
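
For anyone who wants to reproduce the check locally before pushing, a
rough sketch (assuming setuptools, wheel, and twine are installed in the
environment; the job's exact steps may differ):

  # Build the artifacts that would be uploaded, then validate their
  # packaging metadata the same way the job does.
  import glob
  import subprocess

  subprocess.run(["python", "setup.py", "sdist", "bdist_wheel"], check=True)
  subprocess.run(["twine", "check", *glob.glob("dist/*")], check=True)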

Since it is part of the publish-to-pypi-python3 template, any
python-based projects that are releasing using the new job template
(which should be all official projects now) have
test-release-openstack-python3 configured to run when any files related
to packaging are modified.

Therefore, rather than updating the failing linter jobs to perform the
steps necessary to run twine, teams should simply remove the check and
allow the existing test job to perform that check. In addition to
avoiding redundancy, this means we will be able update the job in one
place instead of having to touch every repo when twine inevitably
changes in the future.

Sean is working on a set of patches to fix up some of the repos that
have issues, so please approve those quickly when they come in.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Sharing upstream contribution mentoring result with Korea user group

2018-10-30 Thread Ian Y. Choi

Hello,

I got involved in organizing & mentoring Korean people for OpenStack 
upstream contribution over roughly the last two months,

and would like to share the results with community members.

In total, nine mentees started learning OpenStack, contributed, and 
ultimately stayed on as volunteers for
 1) developing an OpenStack mobile app for better mobile user interfaces 
and experiences
    (inspired by https://github.com/stackerz/app which worked on the Juno 
release), and

 2) translating official OpenStack project artifacts including documents,
 and the Container Whitepaper ( 
https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).


Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, 
Sungjin Kang, and Andrew Yongjoon Kong)
all helped organize a total of 8 offline meetups plus one mini-hackathon and 
mentored attendees.


The following is a brief summary:
 - The "OpenStack Controller" Android app is available on the Play Store
  : 
https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller

   (GitHub: https://github.com/kosslab-kr/openstack-controller )

 - Most high-priority projects (though it is not currently a string freeze 
period) and documents are
   100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, 
and the Container Whitepaper.


 - A total of 18,695 words were translated into Korean by four contributors
  (confirmed through the Zanata API, sketched below after the tables: 
https://translate.openstack.org/rest/stats/user/[Zanata 
ID]/2018-08-16..2018-10-25 ):


+------------+---------------+-----------------+
| Zanata ID  | Name          | Number of words |
+------------+---------------+-----------------+
| ardentpark | Soonyeul Park | 12517           |
+------------+---------------+-----------------+
| bnitech    | Dongbim Im    | 693             |
+------------+---------------+-----------------+
| csucom     | Sungwook Choi | 4397            |
+------------+---------------+-----------------+
| jaeho93    | Jaeho Cho     | 1088            |
+------------+---------------+-----------------+

 - The projects translated into Korean are listed below:

+-------------------------------------+-----------------+
| Project                             | Number of words |
+-------------------------------------+-----------------+
| api-site                            | 20              |
+-------------------------------------+-----------------+
| cinder                              | 405             |
+-------------------------------------+-----------------+
| designate-dashboard                 | 4               |
+-------------------------------------+-----------------+
| horizon                             | 3226            |
+-------------------------------------+-----------------+
| i18n                                | 434             |
+-------------------------------------+-----------------+
| ironic                              | 4               |
+-------------------------------------+-----------------+
| Leveraging Containers and OpenStack | 5480            |
+-------------------------------------+-----------------+
| neutron-lbaas-dashboard             | 5               |
+-------------------------------------+-----------------+
| openstack-helm                      | 8835            |
+-------------------------------------+-----------------+
| trove-dashboard                     | 89              |
+-------------------------------------+-----------------+
| zun-ui                              | 193             |
+-------------------------------------+-----------------+

I would really like to thank all co-mentors and participants in 
such a big event promoting OpenStack contribution.
The venue and food were supported by Korea Open Source Software 
Development Center ( https://kosslab.kr/ ).



With many thanks,

/Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services

2018-10-30 Thread AKHIL Jain
Thanks, Doug, for the quick response. I will start working accordingly.

Akhil


From: Doug Hellmann 
Sent: Tuesday, October 30, 2018 6:03:24 PM
To: AKHIL Jain; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade 
checks for telemetry services

AKHIL Jain  writes:

> Hi Matt and Telemetry Team,
>
> I was going through the remaining projects to be implemented with the
> upgrade-checkers placeholder framework. I would like to know about the
> projects that should implement it under the telemetry tab.
>
> According to my understanding from the link below, multiple projects come
> under telemetry:
> https://wiki.openstack.org/wiki/Telemetry#Managed
>
> Aodh, being the alarming service, triggers alarms when collected data
> breaks the configured rules. Also, Aodh works as a standalone project
> using any backend (Ceilometer, Gnocchi, etc.),
> so changes are expected between releases.
>
> Ceilometer, being the data collection service (which helps with customer
> billing, resource tracking, and alarming), is involved in polling data
> from other projects, so there may be opportunities to perform upgrade checks.
>
> Panko, being the indexing service, provides the ability to store and query
> event data; related changes to indexed objects can be checked while upgrading.
>
> So, should we add an upgrade-check command to each project, or should a
> single upgrade-checks command report the upgrade status of each service?
>
> Thanks,
> Akhil
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Each of those services has its own configuration file and database, and
the code is in separate repositories, so it seems like we would want a
separate upgrade check command for each one.

Doug
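
For illustration, a minimal per-service placeholder following the
oslo.upgradecheck pattern used for this goal (the 'aodh' project name is
just an example; each repo would carry its own copy):

  # Skeleton of a <service>-status upgrade check command.
  import sys

  from oslo_config import cfg
  from oslo_upgradecheck import upgradecheck


  class Checks(upgradecheck.UpgradeCommands):
      """Upgrade checks for one telemetry service."""

      def _check_placeholder(self):
          # Real checks would inspect this service's config/database.
          return upgradecheck.Result(upgradecheck.Code.SUCCESS)

      _upgrade_checks = (
          ('Placeholder', _check_placeholder),
      )


  def main():
      return upgradecheck.main(
          cfg.CONF, project='aodh', upgrade_command=Checks())


  if __name__ == '__main__':
      sys.exit(main())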


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Adjust weekly meeting time for US DST

2018-10-30 Thread Douglas Mendizabal
Hi openstack-dev@,

During the weekly meeting today, the topic of moving the weekly meeting
forward by an hour to adjust for the end of US Daylight Saving Time was
brought up.  All contributors in attendance unanimously voted for the
move. [1]

If you would like to participate in the meetings and didn't have a
chance to attend today, or are unable to make the new proposed time of
Tuesdays at 1300 UTC, please respond to this thread and we can try to
find a time that works for everyone.  Otherwise, we'll be meeting at the
new proposed time next week.

Thanks,
- Douglas Mendizábal

[1] 
http://eavesdrop.openstack.org/meetings/barbican/2018/barbican.2018-10-30-12.01.txt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services

2018-10-30 Thread Doug Hellmann
AKHIL Jain  writes:

> Hi Matt and Telemetry Team,
>
> I was going through the remaining projects to be implemented with the
> upgrade-checkers placeholder framework. I would like to know about the
> projects that should implement it under the telemetry tab.
>
> According to my understanding from the link below, multiple projects come
> under telemetry:
> https://wiki.openstack.org/wiki/Telemetry#Managed
>
> Aodh, being the alarming service, triggers alarms when collected data
> breaks the configured rules. Also, Aodh works as a standalone project
> using any backend (Ceilometer, Gnocchi, etc.),
> so changes are expected between releases.
>
> Ceilometer, being the data collection service (which helps with customer
> billing, resource tracking, and alarming), is involved in polling data
> from other projects, so there may be opportunities to perform upgrade checks.
>
> Panko, being the indexing service, provides the ability to store and query
> event data; related changes to indexed objects can be checked while upgrading.
>
> So, should we add an upgrade-check command to each project, or should a
> single upgrade-checks command report the upgrade status of each service?
>
> Thanks,
> Akhil
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Each of those services has its own configuration file and database, and
the code is in separate repositories, so it seems like we would want a
separate upgrade check command for each one.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CFP] FOSDEM 2019 IaaS and Virt DevRoom

2018-10-30 Thread Kashyap Chamarthy
Dear OpenStack community, 

FOSDEM 2019 will feature a Virtualization & IaaS DevRoom again.  Here is
the call for proposals.  Please check it out if you would like to submit
a talk.

Regards,
Kashyap

---
We are excited to announce that the call for proposals is now open for
the Virtualization & IaaS devroom at the upcoming FOSDEM 2019, to be
hosted on February 2nd 2019.

This year will mark FOSDEM’s 19th anniversary as one of the
longest-running free and open source software developer events,
attracting thousands of developers and users from all over the world.
FOSDEM will be held once again in Brussels, Belgium, on February 2nd &
3rd, 2019.

This devroom is a collaborative effort, and is organized by dedicated
folks from projects such as OpenStack, Xen Project, oVirt, QEMU, KVM,
and Foreman. We would like to invite all those who are involved in these
fields to submit your proposals by December 1st, 2018.

Important Dates
---

Submission deadline: 1 December 2018
Acceptance notifications: 14 December 2018
Final schedule announcement: 21 December 2018
Devroom: 2nd February 2019

About the Devroom
-

The Virtualization & IaaS devroom will feature session topics such as
open source hypervisors and virtual machine managers such as Xen
Project, KVM, bhyve, and VirtualBox, and Infrastructure-as-a-Service
projects such as KubeVirt, Apache CloudStack, OpenStack, oVirt, QEMU and
OpenNebula.

This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with
multiple hypervisors; hyperconverged deployments; and scaling across
hundreds or thousands of servers.

Presentations in this devroom will be aimed at developers working on
these platforms who are looking to collaborate and improve shared
infrastructure or solve common problems. We seek topics that encourage
dialog between projects and continued work post-FOSDEM.

Submit Your Proposal


All submissions must be made via the Pentabarf event planning site[1].
If you have not used Pentabarf before, you will need to create an
account. If you submitted proposals for FOSDEM in previous years, you
can use your existing account.

After creating the account, select Create Event to start the submission
process. Make sure to select Virtualization and IaaS devroom from the
Track list. Please fill out all the required fields, and provide a
meaningful abstract and description of your proposed session.

Submission Guidelines
-

We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.

All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.

All presentations will be recorded and made available under Creative
Commons licenses. In the Submission notes field, please indicate that
you agree that your presentation will be licensed under the CC-By-SA-4.0
or CC-By-4.0 license and that you agree to have your presentation
recorded.

For example:

"If my presentation is accepted for FOSDEM, I hereby agree to license
all recordings, slides, and other associated materials under the
Creative Commons Attribution Share-Alike 4.0 International License.
Sincerely, [NAME]."

In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend FOSDEM and deliver your
presentation.  We will not consider proposals from prospective speakers
who are unsure whether they will be able to secure funds for travel and
lodging to attend FOSDEM. (Sadly, we are not able to offer travel
funding for prospective speakers.)

Speaker Mentoring Program
-------------------------

As a part of the rising efforts to grow our communities and encourage a
diverse and inclusive conference ecosystem, we're happy to announce that
we'll be offering mentoring for new speakers. Our mentors can help you
with tasks such as reviewing your abstract, reviewing your presentation
outline or slides, or practicing your talk with you.

You may apply to the mentoring program as a newcomer speaker if you:

Never presented before or
Presented only lightning talks or
Presented full-length talks at small meetups (<50 ppl)

Submission Guidelines
-

Mentored presentations will have 25-minute slots, where 20 minutes will
include the presentation and 5 minutes will be reserved for questions.
The number of newcomer session slots is limited, so we will probably not
be able to accept all applications.

You must submit your talk and abstract to apply for the mentoring
program; our mentors are volunteering their time and will happily
provide feedback but won't write your presentation for you!

If you are experiencing problems with Pentabarf, the proposal submission
interface, or 

[openstack-dev] [openstack-sigs][all] Berlin Forum for `expose SIGs and WGs`

2018-10-30 Thread Rico Lin
Hi all

To continue our discussion in Denver, we will have a forum [1] in Berlin on
*Wednesday, November 14, 11:50am-12:30pm CityCube Berlin - Level 3 -
M-Räume 8*
We will host the forum in an open discussion format and try to capture
actions from the forum to make sure we keep pushing on what people need. So
if you have any feedback or ideas, please join us.
I created an etherpad for this forum so we can collect information, get
feedback, and mark actions.
*https://etherpad.openstack.org/p/expose-sigs-and-wgs
 *

*For those who don't know what `expose SIGs and WGs` is*
There is some earlier discussion on the ML [2] and in a PTG session [3]. The
basic concept is to give users/ops a single window for turning important
scenarios/use cases or issues into traceable tasks in a single story/place,
and to ask developers to be responsible (by changing the mission or
governance policy) for co-working on those tasks. SIGs/WGs are eager to get
feedback and use cases, as are project teams (not going to speak for all
projects/SIGs/WGs, but we would certainly like to collect more ideas). And
project teams would get a central place to develop against specific user
requirements, or to provide documentation for more general OpenStack
information. So we would like to have more discussion on how we can reach
the goal through concrete actions: how can we change TC, UC, project, SIG,
and WG policy to bridge from users/ops to developers?


[1]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22750/expose-sigs-and-wgs
[2]
http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000453.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html
-- 
May The Force of OpenStack Be With You,

*Rico Lin*
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services

2018-10-30 Thread AKHIL Jain
Hi Matt and Telemetry Team,

I was going through the remaining projects to be implemented with the 
upgrade-checkers placeholder framework. I would like to know about the projects 
that should implement it under the telemetry tab.

According to my understanding from the link below, multiple projects come under 
telemetry:
https://wiki.openstack.org/wiki/Telemetry#Managed

Aodh, being the alarming service, triggers alarms when collected data breaks the 
configured rules. Also, Aodh works as a standalone project using any backend 
(Ceilometer, Gnocchi, etc.),
so changes are expected between releases.

Ceilometer, being the data collection service (which helps with customer billing, 
resource tracking, and alarming), is involved in polling data 
from other projects, so there may be opportunities to perform upgrade checks.

Panko, being the indexing service, provides the ability to store and query 
event data; related changes to indexed objects can be checked while upgrading.

So, should we add an upgrade-check command to each project, or should a 
single upgrade-checks command report the upgrade status of each service?

Thanks,
Akhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

