[openstack-dev] [all][requirements] Freeze is coming next week

2018-01-16 Thread Matthew Thode
So get your changes in or get left behind :D

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Ironic][API] Service Management API Design

2018-01-16 Thread Kumari, Madhuri
Hi Nova Developers,

I am working on adding a service management API in Ironic [1][2]. This spec
adds a new /conductors API to list and enable/disable ironic-conductor services.
I am struggling to understand the difference between shutting down a service
manually and disabling it.

So my question is: what happens to the VMs and to any operation in progress
on a nova-compute service that we disable? What is the difference between
shutting down the service and disabling it?
My understanding is that both actions stop new requests from being scheduled
to the compute service, and that its workloads are taken over by other
nova-compute services.

Please help me understand the design in Nova.

[1] https://review.openstack.org/#/c/471217/
[2] https://bugs.launchpad.net/ironic/+bug/1526759
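One way to picture the question above is with a toy model (this is an
illustrative sketch under my own assumptions, not Nova's implementation;
the `Service` type and its flags are hypothetical): in both cases the host
stops receiving new workloads, but only a shutdown also stops the service
process itself.

```python
# Toy model of the distinction being discussed -- NOT Nova's actual code.
# A "disabled" service is still running (its VMs keep running and it can
# finish in-flight operations) but is skipped when placing new workloads.
# A service that has been shut down additionally stops reporting as alive.

from dataclasses import dataclass

@dataclass
class Service:
    host: str
    disabled: bool = False  # operator intent: exclude from scheduling
    alive: bool = True      # process liveness (heartbeat still seen)

def schedulable(services):
    """Hosts eligible to receive new workloads."""
    return [s.host for s in services if s.alive and not s.disabled]

services = [
    Service("node1"),                 # normal: schedulable
    Service("node2", disabled=True),  # disabled: running, but skipped
    Service("node3", alive=False),    # shut down: no heartbeat
]
print(schedulable(services))  # -> ['node1']
```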


Regards,
Madhuri



Re: [openstack-dev] [os-vif]

2018-01-16 Thread Sriharsha Basavapatna
On Wed, Jan 10, 2018 at 11:24 AM, Sriharsha Basavapatna <
sriharsha.basavapa...@broadcom.com> wrote:

> On Tue, Jan 9, 2018 at 10:46 PM, Stephen Finucane 
> wrote:
> > On Tue, 2018-01-09 at 16:30 +0530, Sriharsha Basavapatna wrote:
> >> On Tue, Jan 9, 2018 at 2:20 PM, Sriharsha Basavapatna
> >>  wrote:
> >> > Hi Andreas,
> >> >
> >> > On Tue, Jan 9, 2018 at 12:04 PM, Andreas Jaeger 
> >> > wrote:
> >> > > On 2018-01-09 07:00, Sriharsha Basavapatna wrote:
> >> > > > Hi,
> >> > > >
> >> > > > I've uploaded a patch for review:
> >> > > > https://review.openstack.org/#/c/531674/
> >> > > >
> >> > > > This is the first time I'm submitting a patch on openstack. I'd
> >> > > > like
> >> > >
> >> > > Welcome to OpenStack, Harsha.
> >> >
> >> > Thank you.
> >> >
> >> > > Please read
> >> > > https://docs.openstack.org/infra/manual/developers.html if you
> >> > > haven't.
> >> >
> >> > OK, I'll read it.
> >> > >
> >> > > I see that your change fails the basic tests, you can run these
> >> > > locally
> >> > > as follows to check that your fixes will pass:
> >> > >
> >> > > tox -e pep8
> >> > > tox -e py27
> >> >
> >> > I was wondering if there's a way to catch these errors without
> >> > having
> >> > to submit it for gerrit review.  I fixed the ones that were
> >> > reported
> >> > in patch-set-1; looks like there's some new ones in the second
> >> > patch-set. I'll run the above commands to verify the fix locally.
> >> >
> >> > Thanks,
> >> > -Harsha
> >>
> >> I installed python-pip and tox.  But when I run "tox -e pep8", I'm
> >> seeing some errors:
> >>
> >> building 'netifaces' extension
> >> gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
> >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> >> -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall
> >> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> >> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> >> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.6
> >> -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1
> >> -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1
> >> -DHAVE_NETECONET_EC_H=1
> >> -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1
> >> -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1
> >> -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1
> >> -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1
> >> -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1
> >> -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1
> >> -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1
> >> -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.7 -c
> >> netifaces.c -o build/temp.linux-x86_64-2.7/netifaces.o
> >> netifaces.c:1:20: fatal error: Python.h: No such file or directory
> >>  #include <Python.h>
> >> ^
> >> compilation terminated.
> >> error: command 'gcc' failed with exit status 1
> >>
> >> 
> >> Command "/home/harshab/os-vif/.tox/pep8/bin/python2 -u -c "import
> >> setuptools, tokenize;__file__='/tmp/pip-build-
> >> OibnHO/netifaces/setup.py';f=getattr(tokenize,
> >> 'open', open)(__file__);code=f.read().replace('\r\n',
> >> '\n');f.close();exec(compile(code, __file__, 'exec'))" install
> >> --record /tmp/pip-3Hu__1-record/install-record.txt
> >> --single-version-externally-managed --compile --install-headers
> >> /home/harshab/os-vif/.tox/pep8/include/site/python2.7/netifaces"
> >> failed with error code 1 in /tmp/pip-build-OibnHO/netifaces/
> >>
> >> ERROR: could not install deps
> >> [-r/home/harshab/os-vif/requirements.txt,
> >> -r/home/harshab/os-vif/test-requirements.txt]; v =
> >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> >> -r/home/harshab/os-vif/requirements.txt
> >> -r/home/harshab/os-vif/test-requirements.txt (see
> >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
> >> ___ summary
> >> 
> >> ERROR:   pep8: could not install deps
> >> [-r/home/harshab/os-vif/requirements.txt,
> >> -r/home/harshab/os-vif/test-requirements.txt]; v =
> >> InvocationError('/home/harshab/os-vif/.tox/pep8/bin/pip install -U
> >> -r/home/harshab/os-vif/requirements.txt
> >> -r/home/harshab/os-vif/test-requirements.txt (see
> >> /home/harshab/os-vif/.tox/pep8/log/pep8-1.log)', 1)
> >>
> >> Thanks,
> >> -Harsha
> >
> > That's happening because the 'pep8' target is installing all the
> > requirements for the project in a virtualenv, and one of them needs
> > Python development headers. What Linux distro are you using? On Fedora
> > you can fix this like so:
> >
> > sudo dnf install python-devel
>
> Thanks Stephen, I'm using RHEL and 'yum install python-devel' resolved it.
> 

Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-16 Thread ChangBo Guo
2018-01-17 0:29 GMT+08:00 Emilien Macchi :

> Here's an update so we can hopefully, as a community, take a decision
> in the next days or so.
>
>
> * Migration to StoryBoard
>
> Champion: Kendall Nelson
> https://review.openstack.org/#/c/513875/
> Some projects have already migrated and some will migrate soon, but
> there is still a set of blockers that prevents some projects from
> migrating.
> See https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration
> For that reason, we are postponing this goal, but work needs to keep
> going to make it happen one day.
>
>
> * Remove mox
>
> Champion: Sean McGinnis (unless someone else steps up)
> https://review.openstack.org/#/c/532361/
> This goal is to clean up some technical debt in the code.
> It remains a good candidate for Queens.
>
>
> * Ensure pagination links
>
> Champion: Monty Taylor
> https://review.openstack.org/#/c/532627/
> This one would improve the API user experience.
> It remains a good candidate for Queens.
>
>
> * Enable mutable configuration
> Champion: ChangBo Guo
> Nothing was proposed in governance so far, and we already have enough
> proposals now, so I guess it could be a candidate for a future cycle. This
> one would make our operators happy.
>
>This is the review in governance https://review.openstack.org/534605
   This change would really benefit users; I hope it can be finished in Rocky.



>
> * Cold upgrades capabilities
> Champion: Masayuki Igawa
> https://review.openstack.org/#/c/533544/
> This one would be appreciated by our operators who always need
> improvements on upgrades experience - I believe it would be a good
> candidate.
>
>
> Note: some projects requested having fewer goals so they have more
> time to work on their backlogs. While I agree with that, I would like
> to know exactly who asked, and whether they would be affected by the
> goals or not.
> That will help us decide which ones we take.
>
> So now, it's really a good time to speak up and say if:
> - your project could commit to 2 of these goals or not (and why? backlog?
> etc)
> - which ones you couldn't commit to
> - the ones you prefer
>
> We need to take a decision as a community, not just TC members, so
> please bring feedback.
>
> Thanks,
>
>
> On Fri, Jan 12, 2018 at 2:19 PM, Lance Bragstad 
> wrote:
> >
> >
> > On 01/12/2018 11:09 AM, Tim Bell wrote:
> >> I was reading a tweet from Jean-Daniel and wondering if there would be
> an appropriate community goal regarding support of some of the later API
> versions or whether this would be more of a per-project goal.
> >>
> >> https://twitter.com/pilgrimstack/status/951860289141641217
> >>
> >> Interesting numbers about customers tools used to talk to our
> @OpenStack APIs and the Keystone v3 compatibility:
> >> - 10% are not KeystoneV3 compatible
> >> - 16% are compatible
> >> - for the rest, the tools documentation has no info
> >>
> >> I think Keystone V3 and Glance V2 are the ones with APIs which have
> moved on significantly from the initial implementations and not all
> projects have been keeping up.
> > Yeah, I'm super interested in this, too. I'll be honest I'm not quite
> > sure where to start. If the tools are open source we can start
> > contributing to them directly.
> >>
> >> Tim
> >>
> >> -Original Message-
> >> From: Emilien Macchi 
> >> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> >> Date: Friday, 12 January 2018 at 16:51
> >> To: OpenStack Development Mailing List  openstack.org>
> >> Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky
> >>
> >> Here's a quick update before the weekend:
> >>
> >> 2 goals were proposed to governance:
> >>
> >> Remove mox
> >> https://review.openstack.org/#/c/532361/
> >> Champion: Sean McGinnis (unless someone else steps up)
> >>
> >> Ensure pagination links
> >> https://review.openstack.org/#/c/532627/
> >> Champion: Monty Taylor
> >>
> >> 2 more goals are about to be proposed:
> >>
> >> Enable mutable configuration
> >> Champion: ChangBo Guo
> >>
> >> Cold upgrades capabilities
> >> Champion: Masayuki Igawa
> >>
> >>
> >> Thanks everyone for your participation,
> >> We hope to make a vote within the next 2 weeks so we can prepare the
> >> PTG accordingly.
> >>
> >> On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi 
> wrote:
> >> > As promised, let's continue the discussion and move things
> forward.
> >> >
> >> > This morning Thierry brought the discussion during the TC office
> hour
> >> > (that I couldn't attend due to timezone):
> >> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33
> >> >
> >> > Some outputs:
> >> >
> >> > - One goal has been proposed so far.
> >> >
> >> > Right now, we only 

Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread Ed Leafe
On Jan 16, 2018, at 7:21 PM, Zhenyu Zheng  wrote:

> Thanks for the info, so it seems we are not going to implement aggregate 
> overcommit ratio in placement at least in the near future?

I would go so far as to say that we are not going to implement aggregate 
overcommit ratios in placement at all. Placement has the Resource Provider 
as its base unit, and aggregates simply don't fit that model.

If you need that sort of grouping, perhaps a tool that would assign a single 
ratio to all the members of an aggregate would be a good way to convert to the 
new paradigm.
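The conversion tool suggested above could be sketched as follows. This is a
hypothetical illustration: the data shapes and the `per_host_ratios` helper
are my assumptions; a real tool would read aggregate membership from the
Nova API and push the result into each node's configuration.

```python
# Hypothetical sketch of a conversion tool: take an overcommit ratio
# defined per aggregate and turn it into a per-compute-node value, which
# is what placement-based scheduling honors.

def per_host_ratios(aggregates, aggregate_ratios):
    """Map each host to a single ratio derived from its aggregate(s).

    aggregates: {aggregate_name: [hostname, ...]}
    aggregate_ratios: {aggregate_name: float}
    """
    ratios = {}
    for name, hosts in aggregates.items():
        ratio = aggregate_ratios[name]
        for host in hosts:
            # A host can belong to several aggregates; keep the most
            # conservative (lowest) ratio -- one way to resolve the
            # ambiguity discussed elsewhere in this thread.
            ratios[host] = min(ratio, ratios.get(host, ratio))
    return ratios

aggs = {"ssd": ["cn1", "cn2"], "hdd": ["cn2", "cn3"]}
print(per_host_ratios(aggs, {"ssd": 4.0, "hdd": 2.0}))
# -> {'cn1': 4.0, 'cn2': 2.0, 'cn3': 2.0}
```

Taking the minimum is only one possible policy; an operator might equally
choose the maximum or refuse to convert overlapping aggregates at all.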


-- Ed Leafe









Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread Zhenyu Zheng
Thanks for the info, so it seems we are not going to implement aggregate
overcommit ratio in placement at least in the near future?

On Wed, Jan 17, 2018 at 9:19 AM, Zhenyu Zheng 
wrote:

> Thanks for the info, so it seems we are not going to implement aggregate
> overcommit ratio in placement at least in the near future?
>
> On Wed, Jan 17, 2018 at 5:24 AM, melanie witt  wrote:
>
>> Hello Stackers,
>>
>> This is a heads up to any of you using the AggregateCoreFilter,
>> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
>> These filters have effectively allowed operators to set overcommit ratios
>> per aggregate rather than per compute node in <= Newton.
>>
>> Beginning in Ocata, there is a behavior change where aggregate-based
>> overcommit ratios will no longer be honored during scheduling. Instead,
>> overcommit values must be set on a per compute node basis in nova.conf.
>>
>> Details: as of Ocata, instead of considering all compute nodes at the
>> start of scheduler filtering, an optimization has been added to query
>> resource capacity from placement and prune the compute node list with the
>> result *before* any filters are applied. Placement tracks resource capacity
>> and usage and does *not* track aggregate metadata [1]. Because of this,
>> placement cannot consider aggregate-based overcommit and will exclude
>> compute nodes that do not have capacity based on per compute node
>> overcommit.
>>
>> How to prepare: if you have been relying on per aggregate overcommit,
>> during your upgrade to Ocata, you must change to using per compute node
>> overcommit ratios in order for your scheduling behavior to stay consistent.
>> Otherwise, you may notice increased NoValidHost scheduling failures as the
>> aggregate-based overcommit is no longer being considered. You can safely
>> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter
>> from your enabled_filters and you do not need to replace them with any
>> other core/ram/disk filters. The placement query takes care of the
>> core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter
>> are redundant.
>>
>> Thanks,
>> -melanie
>>
> >> [1] Placement has been a clean slate for resource management and prior to
>> placement, there were conflicts between the different methods for setting
>> overcommit ratios that were never addressed, such as, "which value to take
>> if a compute node has overcommit set AND the aggregate has it set? Which
>> takes precedence?" And, "if a compute node is in more than one aggregate,
>> which overcommit value should be taken?" So, the ambiguities were not
>> something that was desirable to bring forward into placement.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>




[openstack-dev] Merging feature/zuulv3 into master

2018-01-16 Thread James E. Blair
Hi,

On Thursday, January 18, 2018, we will merge the feature/zuulv3 branches
of both Zuul and Nodepool into master.

If you continuously deploy Zuul or Nodepool from master, you should make
sure you are prepared for this.

The current version of the single_node_ci pattern in puppet-openstackci
should, by default, install the latest released versions of Zuul and
Nodepool.  However, if you are running Zuul continuously deployed from a
version of puppet-openstackci which is not continuously deployed, or
using some other method, you may find that your system has automatically
been upgraded if you have not taken action before the branch is merged.

Regardless of how you deploy Zuul, if you find that your system has been
upgraded, simply re-install the most current releases of Zuul and
Nodepool, either from PyPI or from a git tag.  They are:

Nodepool: 0.5.0
Zuul: 2.6.0

Note that the final version of Zuul v3 has not been released yet.  We
hope to do so soon, but until we do, our recommendation is to continue
using the current releases.

Finally, if you find this message relevant, please subscribe to the new
zuul-annou...@lists.zuul-ci.org mailing list:

http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-announce

Thanks,

Jim



[openstack-dev] [tripleo] Rocky PTG planning

2018-01-16 Thread Emilien Macchi
Hey,

I kicked-off an etherpad for PTG planning:
https://etherpad.openstack.org/p/tripleo-ptg-rocky

It's basic now but feel free to add your ideas of topics.
We'll work on the agenda during the following weeks.

Thanks!
-- 
Emilien Macchi



Re: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard

2018-01-16 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-01-12 15:37:42 -0500:
> Since we are discussing goals for the Rocky cycle, I would like to
> propose a change to the way we track progress on the goals.
> 
> We've started to see lots and lots of changes to the goal documents,
> more than anticipated when we designed the system originally. That
> leads to code review churn within the governance repo, and it means
> the goal champions have to wait for the TC to review changes before
> they have complete tracking information published somewhere. We've
> talked about moving the tracking out of git and using an etherpad
> or a wiki page, but I propose that we use storyboard.
> 
> Specifically, I think we should create 1 story for each goal, and
> one task for each project within the goal. We can then use a board
> to track progress, with lanes like "New", "Acknowledged", "In
> Progress", "Completed", and "Not Applicable". It would be the
> responsibility of the goal champion to create the board, story, and
> tasks and provide links to the board and story in the goal document
> (so we only need 1 edit after the goal is approved). From that point
> on, teams and goal champions could collaborate on keeping the board
> up to date.
> 
> Not all projects are registered in storyboard, yet. Since that
> migration is itself a goal under discussion, I think for now we can
> just associate all tasks with the governance repository.
> 
> It doesn't look like changes to a board trigger any sort of
> notifications for the tasks or stories involved, but that's probably
> OK. If we really want notifications we can look at adding them as
> a feature of Storyboard at the board level.
> 
> How does this sound as an approach? Does anyone have any reservations
> about using storyboard this way?
> 
> Doug

Since the feedback has been positive, I wrote up the policy changes to
go along with this. Please continue any discussion of the idea over
there.

https://review.openstack.org/534443

Doug
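The proposal above (one story per goal, one task per project, a board with
status lanes) can be modeled with plain data. The sketch below uses made-up
project names and is not the StoryBoard API; it only shows the shape of the
tracking structure being proposed.

```python
# Sketch of the proposed tracking model -- plain data, not StoryBoard's
# API. One story per goal, one task per project; a board groups the
# tasks into the lanes named in the proposal.

LANES = ["New", "Acknowledged", "In Progress", "Completed", "Not Applicable"]

def build_board(tasks):
    """Group {project: status} tasks into ordered board lanes."""
    lanes = {lane: [] for lane in LANES}
    for project in sorted(tasks):
        lanes[tasks[project]].append(project)
    return lanes

# Story for one goal (e.g. "Remove mox"); one task per project.
story_tasks = {"nova": "In Progress", "cinder": "Completed", "ironic": "New"}
board = build_board(story_tasks)
print(board["Completed"])  # -> ['cinder']
```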



[openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread melanie witt

Hello Stackers,

This is a heads up to any of you using the AggregateCoreFilter, 
AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. 
These filters have effectively allowed operators to set overcommit 
ratios per aggregate rather than per compute node in <= Newton.


Beginning in Ocata, there is a behavior change where aggregate-based 
overcommit ratios will no longer be honored during scheduling. Instead, 
overcommit values must be set on a per compute node basis in nova.conf.


Details: as of Ocata, instead of considering all compute nodes at the 
start of scheduler filtering, an optimization has been added to query 
resource capacity from placement and prune the compute node list with 
the result *before* any filters are applied. Placement tracks resource 
capacity and usage and does *not* track aggregate metadata [1]. Because 
of this, placement cannot consider aggregate-based overcommit and will 
exclude compute nodes that do not have capacity based on per compute 
node overcommit.


How to prepare: if you have been relying on per aggregate overcommit, 
during your upgrade to Ocata, you must change to using per compute node 
overcommit ratios in order for your scheduling behavior to stay 
consistent. Otherwise, you may notice increased NoValidHost scheduling 
failures as the aggregate-based overcommit is no longer being 
considered. You can safely remove the AggregateCoreFilter, 
AggregateRamFilter, and AggregateDiskFilter from your enabled_filters 
and you do not need to replace them with any other core/ram/disk 
filters. The placement query takes care of the core/ram/disk filtering 
instead, so CoreFilter, RamFilter, and DiskFilter are redundant.


Thanks,
-melanie

[1] Placement has been a clean slate for resource management and prior to 
placement, there were conflicts between the different methods for 
setting overcommit ratios that were never addressed, such as, "which 
value to take if a compute node has overcommit set AND the aggregate has 
it set? Which takes precedence?" And, "if a compute node is in more than 
one aggregate, which overcommit value should be taken?" So, the 
ambiguities were not something that was desirable to bring forward into 
placement.
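For reference, per-compute-node overcommit is set with the allocation-ratio
options in each node's nova.conf; the values below are illustrative examples
only, not recommendations:

```ini
[DEFAULT]
# Example per-compute-node overcommit ratios (illustrative values).
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
```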




Re: [openstack-dev] [keystone] adding Gage Hugo to keystone core

2018-01-16 Thread Harry Rybacki
+100 -- congratulations, Gage!

On Tue, Jan 16, 2018 at 2:24 PM, Raildo Mascena de Sousa Filho <
rmasc...@redhat.com> wrote:

> +1
>
> Congrats Gage, very well deserved!
>
> Cheers,
>
> On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad 
> wrote:
>
>> Hey folks,
>>
>> In today's keystone meeting we made the announcement to add Gage Hugo
>> (gagehugo) as a keystone core reviewer [0]! Gage has been actively
>> involved in keystone over the last several cycles. Not only does he
>> provide thorough reviews, but he's really stepped up to help move the
>> project forward by keeping a handle on bugs, fielding questions in the
>> channel, and being diligent about documentation (especially during
>> in-person meet ups).
>>
>> Thanks for all the hard work, Gage!
>>
>> [0]
>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
>
> Raildo mascena
>
> Software Engineer, Identity Management
>
> Red Hat
>
> 
> 
> TRIED. TESTED. TRUSTED. 
>


Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Emilien Macchi
On Tue, Jan 16, 2018 at 9:01 AM, James E. Blair  wrote:
[...]
> We're currently using tags like "zuulv3.0" and "zuulv3.1" to make this
> automatic board:
>
> https://storyboard.openstack.org/#!/board/53

Yeah I guess using worklists & tags would probably be our best bet now.

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [keystone] adding Gage Hugo to keystone core

2018-01-16 Thread Raildo Mascena de Sousa Filho
+1

Congrats Gage, very well deserved!

Cheers,

On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad  wrote:

> Hey folks,
>
> In today's keystone meeting we made the announcement to add Gage Hugo
> (gagehugo) as a keystone core reviewer [0]! Gage has been actively
> involved in keystone over the last several cycles. Not only does he
> provide thorough reviews, but he's really stepped up to help move the
> project forward by keeping a handle on bugs, fielding questions in the
> channel, and being diligent about documentation (especially during
> in-person meet ups).
>
> Thanks for all the hard work, Gage!
>
> [0]
>
> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html
>
>
>


-- 

Raildo mascena

Software Engineer, Identity Management

Red Hat



TRIED. TESTED. TRUSTED. 


Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Emilien Macchi
On Tue, Jan 16, 2018 at 8:29 AM, Jeremy Stanley  wrote:
> On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote:
> [...]
>> I created a dev instance of storyboard and imported all bugs from
>> TripleO so we could have a look at how it would be if we were using
>> the tool:
> [...]
>
> Awesome! We do also have a https://storyboard-dev.openstack.org/
> deployment we can do test migrations into if you'd prefer something
> more central with which to play around.

Well, I wanted root access so I could hack on it myself. If I need the -dev
instance, I'll ask.

>
>> - how do we deal with milestones in stories, and how can we have a
>> dashboard with an overview per milestone (useful for the PTL and TripleO
>> release managers).
>
> So far, the general suggestion for stuff like this is to settle on a
> consistent set of story tags to apply. It really depends on whether
> you're trying to track this at a story or task level (there is no
> per-task tagging implemented yet at any rate). I could imagine, for
> example, setting something like tripleo-r2 as a tag on stories whose
> TripleO deliverable tasks are targeting Rocky milestone #2, and then
> you could have an automatic board with stories matching that tag and
> lanes based on the story status.

Does this kind of board exist already?
Something like https://launchpad.net/tripleo/+milestone/queens-3 maybe.
If the answer is "no but we can do it", fine but keep in mind this is
a blocker for us now.

I created a story: https://storyboard.openstack.org/#!/story/2001479

>> - how to update old Launchpad bugs with the new link in storyboard
>> (probably by hacking the migration script during the import).
>
> We've debated this... unfortunately mass bug updates are a challenge
> with LP due to the slowness and instability of its API. We might be
> able to get away with leaving a comment on each open LP bug for a
> project with a link to its corresponding story, but it would take a
> long time and may need many retries for some bugs with large numbers
> of subscribers. Switching the status of all the LP bugtasks en masse
> is almost guaranteed to be a dead end since bugtask status changes
> trigger API timeouts far more often based on our prior experience
> with LP integration, though I suppose we could just live with the
> idea that some of them might be uncloseable and ignore that
> fraction. If the migration script is to do any of this, it will also
> need to be extended to support LP authentication (since it currently
> only performs anonymous queries it doesn't need to authenticate).
> Further, that tool is currently designed to support being rerun
> against the same set of projects for iterative imports in the case
> of failure or to pick up newer comments/bugs so would need to know
> to filter out its own comments for purposes of sanity.

I'm fine with not updating closed bugs, but we should update ongoing
bugs before closing them.
We don't want to leave our users in a situation where their bugs are
closed and they don't know what to do.
Not everyone reads this mailing list, so having a clear message posted
on Launchpad will be a requirement.

I created a story: https://storyboard.openstack.org/#!/story/2001480

>> - how do we want the import to be like:
>>   * all bugs into a single project?
>
> Remember that the model for StoryBoard is like LP in that stories
> (analogous to bugs in LP) are themselves projectless. It's their
> tasks (similar to bugtasks in LP) which map to specific projects and
> a story can have tasks related to multiple projects. In our
> deployment of SB we create an SB project for each Git repository so
> over time you would expect the distribution of tasks to cover many
> "projects" (repositories) maintained by your team. The piece you may
> be missing here is that you can also define SB projects as belonging
> to one or more project groups, and in most cases by convention
> we've defined groups corresponding to official project teams (the
> governance concept of a "project") for ease of management.

ack

>>   * closing launchpad/tripleo/bugs access? if so we lose web search
>> on popular bugs
>
> They don't disappear from bugs.launchpad.net, and in fact you can't
> really even prevent people from updating those bugs or adding
> bugtasks for your project to other bug reports. What you have
> control over is disabling the ability to file new bugs and list
> existing bugs from your project page in LP. I would also recommend
> updating the project description on LP to prominently feature the
> URL to a closely corresponding project or group in SB.

ack

> Separately, I notice https://storyboard.openstack.org/robots.txt has
> been disallowing indexing by search engines... I think this is
> probably an oversight we should correct ASAP and I've just now added
> it to the agenda to discuss at today's Infra team meeting.

It would be great.

>> - (more a todo) update the elastic-recheck bugs
>
> This should hopefully be more of a (trivial?) feature add to ER,
> since the imported stories keep the same story numbers as the bugs
> from which they originated.

[openstack-dev] [keystone] core adjustments

2018-01-16 Thread Lance Bragstad
Hey all,

I've been in touch with a few folks from our core team that are no
longer as active as they used to be. We've made a mutual decision to
remove Steve Martinelli and Brant Knudson from keystone core and Brad
Topol from keystone specification core.

I'd like to express my gratitude for all the work they have done to make
keystone better. It's been an absolute pleasure working with each of
them and if they do see their involvement in keystone increase, we can
expedite their path to core if they choose to pursue it.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-03

2018-01-16 Thread Chris Dent


(Blog version: )

If a common theme exists in TC activity in the past week, it is the
cloudy nature of leadership and governance and how this relates to
what the TC should be doing, as a body, and how TC members, as
individuals, should identify what they are doing ("I'm doing this with
my TC hat on", "I am not doing this with my TC hat on").

It's a bit of a strange business, to me, because I think much of what
a TC member can do is related to the relative freedom being elected
allows them to achieve. I feel I can budget the time to write this
newsletter because I'm a TC member, but I would be doing a _bad thing_
if I declared that this document was an official utterance of
OpenStack governance™.

Other TC members probably have a much different experience.

## Entropy and Governance

The theme started with [a
discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T13:27:51)
about driving some cleanup of stale repos on
 and whether that was an activity that
should be associated with the TC role. It is clear there are some
conflicts:

* Because many repositories on `git.openstack.org` are not _official_
  OpenStack projects it would be inappropriate to manage them out of
  existence. In this case, using OpenStack infra does not indicate
  volunteering oneself to be governed by the TC. Only being _official_
  does that.
* On the other hand, if being on the TC represents a kind of
  leadership and presents a form of freedom-to-do, then such cleanups
  represent an opportunity to, as [Sean put
  
it](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T13:37:29),
  improve things: "Governed or not, I care about OpenStack and would
  like to see it not weighed down by entropy." In some sense, the role
  of the TC is to exercise that caring for OpenStack and what that
  caring is is context-dependent.

These issues are further complicated by the changing shape of the
OpenStack Foundation where there will be things which are officially
part of the Foundation (such as Kata), and may use OpenStack infra,
but have little to no relationship with the TC. Expect this to get
more complicated before it gets less.

That was before office-hours. By the time office hours started, the
conversation abstracted (as it often does) into more of a discussion
about the [role of the
TC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-11.log.html#t2018-01-11T15:21:03)
with me saying:


What I'm upset (mildly) about is our continued effort to sort [of]
_not_ have the TC fill the leadership void that I think exists in
OpenStack. The details of this particular case are a stimulus for
that conversation, but not necessar[il]y relevant.


(I did get a bit cranky in the discussion, my apologies to those who
were there. This is one of the issues that I'm most passionate about in
OpenStack and I let myself run away a bit. My personal feeling has
always been that we need an activist _and_ responsive TC if we
expect to steward an environment that improves and adapts to change.)

The log is worth reading if this is a topic of interest to you. We
delineated some problems that have been left on the floor in the past,
some meta-problems with how we identify problems, and even had some
agreement on things to try.

## OpenStack-wide Goals

Mixed in with the above discussions—and a good example of where the
TC does provide some coordination and leadership to help guide all the
boats in a similar direction—were efforts to establish sufficient
proposals for [OpenStack-wide
goals](https://governance.openstack.org/tc/goals/index.html) to make a
fair choice. There are now four reviews pending:

* [Pagination Links](https://review.openstack.org/#/c/532627/)
* [Asserting Cold Upgrade
  Capabilities](https://review.openstack.org/#/c/533544/)
* [Migrating to StoryBoard](https://review.openstack.org/#/c/513875/)
* [Removing mox](https://review.openstack.org/#/c/532361/)

It's quite likely that the StoryBoard goal will move to later, to get
increased experience with it (such as by using it for [tracking rocky
goals](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126189.html)).
That leaves the other three. They provide a nice balance between
improving the user experience, improving the operator experience, and
dealing with some technical debt.

If you have thoughts on these goals you should comment on the reviews.
There is also a [mailing list
thread](http://lists.openstack.org/pipermail/openstack-dev/2018-January/126090.html)
in progress.

Late in the day today, there was discussion of perhaps [limiting the
number of
goals](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-01-16.log.html#t2018-01-16T13:05:31).
Some projects are still trying to complete queens goals and were
delayed for various reasons, 

[openstack-dev] [keystone] adding Gage Hugo to keystone core

2018-01-16 Thread Lance Bragstad
Hey folks,

In today's keystone meeting we made the announcement to add Gage Hugo
(gagehugo) as a keystone core reviewer [0]! Gage has been actively
involved in keystone over the last several cycles. Not only does he
provide thorough reviews, but he's really stepped up to help move the
project forward by keeping a handle on bugs, fielding questions in the
channel, and being diligent about documentation (especially during
in-person meet ups).

Thanks for all the hard work, Gage!

[0]
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html






Re: [openstack-dev] PTL Election Season

2018-01-16 Thread Kendall Nelson
Thanks for the reminder, Luke!

I will update my patch to remove Security from the directories setup.

-Kendall (diablo_rojo)

On Mon, Jan 15, 2018 at 9:11 AM Luke Hinds  wrote:

> On Mon, Jan 15, 2018 at 5:04 PM, Kendall Nelson 
> wrote:
>
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will only be a
>> poll if there is more than one candidate stepping forward for a program's
>> PTL position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for
>> information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>>
>> If you have any questions that you wish to discuss in private please
>> email any of the election judges[1] so that we may address your concerns.
>>
>>  Thank you,
>>
>> -Kendall Nelson (diablo_rojo)
>>
>> [1] https://governance.openstack.org/election/#election-officials
>>
>
> Keep in mind there will be no Security PTL election for rocky as we will
> be changing to a SIG and will no longer be a project.


[openstack-dev] [ironic] this week's priorities and subteam reports

2018-01-16 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. ironic-lib patches to finish before the freeze
1.1. fix waiting for partition: https://review.openstack.org/#/c/529325/
2. Classic drivers deprecation
2.1. upgrade: Patch to be posted early this week
3. Traits
3.1. RPC https://review.openstack.org/#/c/532268/
3.2. API https://review.openstack.org/#/c/532269/
4. ironicclient version negotiation
4.1. expose negotiated latest: https://review.openstack.org/531029
4.2. accept list of versions: https://review.openstack.org/#/c/531271/
5. Rescue:
5.1. RPC https://review.openstack.org/#/c/509336/
5.2. network interface update: https://review.openstack.org/#/c/509342
6. Fix for non-x86 architectures: https://review.openstack.org/#/c/501799/

Vendor priorities
-
cisco-ucs:
Patches in works for SDK update, but not posted yet, currently rebuilding 
third party CI infra after a disaster...
idrac:
RFE and first several patches for adding UEFI support will be posted by 
Tuesday, 1/9
ilo:
https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5
irmc:
None

oneview:
Introduce hpOneView and ilorest to OneView -  
https://review.openstack.org/#/c/523943/

Subproject priorities
-
bifrost:
(TheJulia): Fedora support fixes -  https://review.openstack.org/#/c/471750/
ironic-inspector (or its client):
(dtantsur) keystoneauth adapters https://review.openstack.org/#/c/515787/
networking-baremetal:
neutron baremetal agent https://review.openstack.org/#/c/456235/
sushy and the redfish driver:
(dtantsur) implement redfish sessions: 
https://review.openstack.org/#/c/471942/

Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between  08 Jan 2018 and 15 Jan 2018)
- Ironic: 216 bugs (-3) + 260 wishlist items. 1 new (-1), 156 in progress (-2), 
0 critical, 33 high (-1) and 27 incomplete (-1)
- Inspector: 14 bugs (-1) + 28 wishlist items. 0 new, 10 in progress, 0 
critical, 2 high (-1) and 6 incomplete (+1)
- Nova bugs with Ironic tag: 13. 1 new, 0 critical, 0 high
- via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/
- HIGH bugs with patches to review:
- Clean steps are not tested in gate 
https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic 
standalone test https://review.openstack.org/#/c/429770/15
- Needs to be reproposed to the ironic tempest plugin repository.
- prepare_instance() is not called for whole disk images with 'agent' deploy 
interface https://bugs.launchpad.net/ironic/+bug/1713916:
- Fix ``agent`` deploy interface to call ``boot.prepare_instance`` 
https://review.openstack.org/#/c/499050/
- (TheJulia) Currently WF-1, as revision is required for deprecation.
- If provisioning network is changed, Ironic conductor does not behave 
correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor 
works correctly on changes of networks: https://review.openstack.org/#/c/462931/
- (rloo) needs some direction
- may be fixed as part of https://review.openstack.org/#/c/460564/
- IPA may not find partition created by conductor 
https://bugs.launchpad.net/ironic-lib/+bug/1739421
- Fix proposed: https://review.openstack.org/#/c/529325/

CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- localboot with partitioned image patches:
- Ironic - add localboot partitioned image test: 
https://review.openstack.org/#/c/502886/
- when previous are merged TODO (vsaienko)
- Upload tinycore partitioned image to tarballs.openstack.org
- Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO
- node take over
- resource classes integration tests: 
https://review.openstack.org/#/c/443628/
- radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Essential Priorities


Ironic client API version negotiation (TheJulia, dtantsur)
--
- RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145
- Nova bug https://bugs.launchpad.net/nova/+bug/1739440
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145
- status as of 15 Jan 2018:
- Nova request was accepted as a bug for now: 

Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread James E. Blair
Emmet Hikory  writes:

> Emilien Macchi wrote:
>
>> What we need to investigate:
>> - how do we deal milestones in stories and also how can we have a
>> dashboard with an overview per milestone (useful for PTL + TripleO
>> release managers).
>
>     While the storyboard API supports milestones, they don’t work very
> similarly to “milestones” in launchpad, so are probably confusing to
> adopt (and have no UI support).  Some folk use tags for this (perhaps
> with an automatic worklist that selects all the stories with the tag,
> for overview).

We're currently using tags like "zuulv3.0" and "zuulv3.1" to make this
automatic board:

https://storyboard.openstack.org/#!/board/53

-Jim



Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-01-16 16:29:32 +:
> On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote:
> [...]
> > I created a dev instance of storyboard and imported all bugs from
> > TripleO so we could have a look at how it would be if we were using
> > the tool:
> [...]
> 
> Awesome! We do also have a https://storyboard-dev.openstack.org/
> deployment we can do test migrations into if you'd prefer something
> more central with which to play around.
> 
> > - how do we deal milestones in stories and also how can we have a
> > dashboard with an overview per milestone (useful for PTL + TripleO
> > release managers).
> 
> So far, the general suggestion for stuff like this is to settle on a
> consistent set of story tags to apply. It really depends on whether
> you're trying to track this at a story or task level (there is no
> per-task tagging implemented yet at any rate). I could imagine, for
> example, setting something like tripleo-r2 as a tag on stories whose
> TripleO deliverable tasks are targeting Rocky milestone #2, and then
> you could have an automatic board with stories matching that tag and
> lanes based on the story status.

That sounds like it might also be a useful way to approach the goal
tracking. Can someone point me to an example of how to set up an
automatic board like that?

Doug
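For anyone who wants to experiment before such a board exists, the tag query
underneath it can be sketched with a small helper that builds a StoryBoard
story-search URL. The `/api/v1/stories` endpoint and its `tags`/`status`
query parameters are assumptions inferred from how the webclient talks to
the API, so verify them against a live instance before relying on them.

```python
from urllib.parse import urlencode


def story_search_url(tags, status="active",
                     base="https://storyboard.openstack.org/api/v1"):
    """Build a StoryBoard story-search URL for the given tags.

    Endpoint and parameter names are assumed from the webclient's
    API calls, not taken from official documentation.
    """
    params = [("tags", tag) for tag in tags] + [("status", status)]
    return "%s/stories?%s" % (base, urlencode(params))


# A board lane for "active stories tagged tripleo-r2" would poll:
print(story_search_url(["tripleo-r2"]))
# -> https://storyboard.openstack.org/api/v1/stories?tags=tripleo-r2&status=active
```

Fetching that URL with any HTTP client returns JSON story records that a
dashboard lane can then group by story status.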



Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Jeremy Stanley
On 2018-01-16 16:29:32 + (+), Jeremy Stanley wrote:
[...]
> Thankfully, SB was designed from the very beginning in an API-first
> manner with the WebUI merely one possible API client (there are also
> other clients like the boartty console client and a .
[...]

Oops, to complete my thought there: ...and a CLI but that's seen
less development activity since boartty came into being.
-- 
Jeremy Stanley




[openstack-dev] [tripleo] The Weekly Owl - 5th Edition

2018-01-16 Thread Emilien Macchi
Note: this is the fifth edition of a weekly update of what happens in
TripleO, with a little touch of fun.
The goal is to provide a short reading (less than 5 minutes) to learn
where we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126091.html

+-+
| General announcements |
+-+

+--> Focus is on Queens-m3 (next week): stabilization and getting RDO
promotion before the milestone.
+--> No new contributor this week.
+--> The team should be planning for Rocky, and prepare the specs /
blueprints if needed.
+--> Storyboard is being evaluated, take a look how it would be!
http://storyboard.macchi.pro:9000

+--+
| Continuous Integration |
+--+

+--> Rover is Gabriele and ruck is Arx. Please let them know any new CI issue.
+--> Master promotion is 11 days, Pike is 10 days and Ocata is 10 days.
+--> The team is working hard to get a promotion asap.
+--> Sprint 6 is still ongoing, major focus on TripleO CI data
collection in grafana.
+--> We're re-enabling voting on all scenarios on both pike & master
(jobs are passing now).
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
and https://goo.gl/D4WuBP

+-+
| Upgrades |
+-+

+--> As usual, reviews are needed on FFU, Backups, Upgrades to Pike &
Queens; please check the etherpads
+--> RDO-cloud upgrade retrospective was done, good feedback and will
hopefully make our upgrades stronger.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting

+---+
| Containers |
+---+

+--> Kubernetes: dealing with networking during OpenShift deployment.
+--> Containerized undercloud: undercloud-passwords.conf is now
generated to mimic instack-undercloud behavior.
+--> Containerized overcloud: "container prepare" workflow looks good,
need review now.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| Integration |
+--+

+--> Some ongoing work to run Manila & Sahara tempest tests in the gate!
+--> Need reviews on Manila/CephNFS.
+--> Multiple Ceph clusters is still work in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> Currently having CI issues with a javascript error possibly
caused by prettier.
+--> Working to upgrade npm to avoid future deps issues.
+--> Enforcing NPM versions.
+--> Roles and Network UI work continuing.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> Configuring VFs in the overcloud for non-tenant networking use (for queens)
+--> Foundation routed networks support
+--> Octavia LBaaS configuration (for queens)
+--> TLS support for ODL
+--> OVN parity feature support (OVN Metadata just merged!) (for queens)
+--> Configuration support for OVS SR-IOV offload feature (for queens)
+--> Jinja2 rendered network templates (for queens?)
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> Preparing the PTG:
https://etherpad.openstack.org/p/tripleo-workflows-squad-ptg
+--> Work on API design & documentation
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-+
| Owl facts |
+-+

The Golden Masked Owl is a relatively small barn owl with no
ear-tufts. It is also known as the New Britain Masked Owl or New
Britain Barn Owl.
This one is uncommon to rare and vulnerable! You can find them on the
island of New Britain in Papua New Guinea.
(source: https://www.owlpages.com/owls/species.php?s=130)

Stay tuned!
--
Your fellow reporter, Emilien Macchi



Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-16 Thread Sean McGinnis
> 
> > Champion: Sean McGinnis (unless someone else steps up)
> > https://review.openstack.org/#/c/532361/
> > This goal is to clean some technical debt in the code.
> > It remains a good candidate for Queens.
> >
> 
> May I step up for this goal for Rocky release?
> I am currently involved with Tempest plugin split goal in Queens
> Release. I wanted to help on this one.
> 
> Thanks,
> 
> Chandan Kumar
> 

Excellent, thanks Chandan. There are a few updates to the proposal that
I've been waiting to make. I will probably do that in the next couple of
days, and when I do I will put you down as the champion.

I plan to help see it through as well, but great to have someone like you
working on this. Thanks a lot for stepping up and for the current work you've
been doing on the tempest goal.

Sean




[openstack-dev] Facing issues with Openstack subnet static routes

2018-01-16 Thread Akshay Kapoor
Hello everyone,


I am working on a task whose requirement is to share services/VMs
provisioned inside tenant X (say network X-n, subnet X-s) with another
tenant Y (say network Y-n, subnet Y-s).

Did the following steps:

1) Shared the network X-n using 'openstack network rbac...' with tenant Y

2) Created a port X-p on Subnet X-s and added the port as an interface to
the default router in tenant Y

3) Created a static route in subnet X-s to forward all traffic intended for
CIDR range of tenant Y (Y-s) subnet to port X-p.
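
For reference, those three steps map roughly onto the CLI sequence below.
Every name and CIDR is a placeholder, and the flags should be double-checked
against your python-openstackclient version; this is a sketch of the setup
being described, not a verified configuration.

```shell
# 1) Share network X-n with tenant Y via RBAC
openstack network rbac create \
    --target-project tenant-Y \
    --action access_as_shared \
    --type network \
    X-n

# 2) Create port X-p on subnet X-s, then attach it to tenant Y's router
openstack port create --network X-n --fixed-ip subnet=X-s X-p
openstack router add port default-router-Y X-p

# 3) Add a host route on subnet X-s pointing tenant Y's CIDR at X-p's address
#    (10.20.0.0/24 stands in for Y-s, 10.10.0.5 for the address of X-p)
openstack subnet set \
    --host-route destination=10.20.0.0/24,gateway=10.10.0.5 \
    X-s
```

One thing worth noting about step 3: subnet host routes are delivered to
instances via DHCP, so existing VMs typically only pick up a new route when
their lease renews (or on reboot), which is one common reason a router route
takes effect where a subnet host route appears not to.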

VMs provisioned in X-s are not reachable from tenant Y's VMs. However,
if I remove the static route from the subnet X-s settings and add the
route to the default router in tenant X instead, it works.

Can someone please help with why this could happen ?


Regards,

Akshay


Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-16 Thread Chandan kumar
Hello Emilien,

On Tue, Jan 16, 2018 at 9:59 PM, Emilien Macchi  wrote:
> Here's an update so we can hopefully, as a community, take a decision
> in the next days or so.
>
>
> * Migration to StoryBoard
>
> Champion: Kendall Nelson
> https://review.openstack.org/#/c/513875/
> Some projects have already migrated and some will migrate soon, but
> there are still gaps that prevent some projects from migrating.
> See 
> https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration
> For that reason, we are postponing this goal to later but work needs
> to keep going to make that happen one day.
>
>
> * Remove mox
>

> Champion: Sean McGinnis (unless someone else steps up)
> https://review.openstack.org/#/c/532361/
> This goal is to clean some technical debt in the code.
> It remains a good candidate for Queens.
>

May I step up for this goal for Rocky release?
I am currently involved with Tempest plugin split goal in Queens
Release. I wanted to help on this one.

Thanks,

Chandan Kumar



Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Jeremy Stanley
On 2018-01-16 06:55:03 -0800 (-0800), Emilien Macchi wrote:
[...]
> I created a dev instance of storyboard and imported all bugs from
> TripleO so we could have a look at how it would be if we were using
> the tool:
[...]

Awesome! We do also have a https://storyboard-dev.openstack.org/
deployment we can do test migrations into if you'd prefer something
more central with which to play around.

> - how do we deal milestones in stories and also how can we have a
> dashboard with an overview per milestone (useful for PTL + TripleO
> release managers).

So far, the general suggestion for stuff like this is to settle on a
consistent set of story tags to apply. It really depends on whether
you're trying to track this at a story or task level (there is no
per-task tagging implemented yet at any rate). I could imagine, for
example, setting something like tripleo-r2 as a tag on stories whose
TripleO deliverable tasks are targeting Rocky milestone #2, and then
you could have an automatic board with stories matching that tag and
lanes based on the story status.

> - how to update old Launchpad bugs with the new link in storyboard
> (probably by hacking the migration script during the import).

We've debated this... unfortunately mass bug updates are a challenge
with LP due to the slowness and instability of its API. We might be
able to get away with leaving a comment on each open LP bug for a
project with a link to its corresponding story, but it would take a
long time and may need many retries for some bugs with large numbers
of subscribers. Switching the status of all the LP bugtasks en masse
is almost guaranteed to be a dead end since bugtask status changes
trigger API timeouts far more often based on our prior experience
with LP integration, though I suppose we could just live with the
idea that some of them might be uncloseable and ignore that
fraction. If the migration script is to do any of this, it will also
need to be extended to support LP authentication (since it currently
only performs anonymous queries it doesn't need to authenticate).
Further, that tool is currently designed to support being rerun
against the same set of projects for iterative imports in the case
of failure or to pick up newer comments/bugs so would need to know
to filter out its own comments for purposes of sanity.
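
(As an illustrative aside, the retry-tolerant mass-update pass described
above could be sketched like this. The launchpadlib calls in the trailing
comment follow its documented `login_with`/`bugs`/`newMessage` API, but the
helper, names, and bug number are hypothetical, not part of any existing
migration tooling.)

```python
import time


def with_retries(call, attempts=5, delay=1.0, backoff=2.0):
    """Run a flaky API call, retrying with exponential backoff.

    Launchpad's API times out often under mass updates, so every
    per-bug operation gets wrapped before being given up on.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:  # launchpadlib surfaces timeouts as HTTP errors
            last_exc = exc
            time.sleep(delay * (backoff ** attempt))
    raise last_exc


# Hypothetical use with launchpadlib (requires authenticated access):
#   from launchpadlib.launchpad import Launchpad
#   lp = Launchpad.login_with("sb-migration", "production")
#   bug = lp.bugs[1526759]
#   with_retries(lambda: bug.newMessage(
#       content="This bug is now tracked at https://storyboard.openstack.org/..."))
```

Bugs that still fail after all attempts would be logged and skipped, which
matches the "live with some being uncloseable" approach above.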

> - how do we want the import to be like:
>   * all bugs into a single project?

Remember that the model for StoryBoard is like LP in that stories
(analogous to bugs in LP) are themselves projectless. It's their
tasks (similar to bugtasks in LP) which map to specific projects and
a story can have tasks related to multiple projects. In our
deployment of SB we create an SB project for each Git repository so
over time you would expect the distribution of tasks to cover many
"projects" (repositories) maintained by your team. The piece you may
be missing here is that you can also define SB projects as belonging
to one or more project groups, and in most cases by convention
we've defined groups corresponding to official project teams (the
governance concept of a "project") for ease of management.

>   * closing launchpad/tripleo/bugs access? if so we lose web search
> on popular bugs

They don't disappear from bugs.launchpad.net, and in fact you can't
really even prevent people from updating those bugs or adding
bugtasks for your project to other bug reports. What you have
control over is disabling the ability to file new bugs and list
existing bugs from your project page in LP. I would also recommend
updating the project description on LP to prominently feature the
URL to a closely corresponding project or group in SB.

Separately, I notice https://storyboard.openstack.org/robots.txt has
been disallowing indexing by search engines... I think this is
probably an oversight we should correct ASAP and I've just now added
it to the agenda to discuss at today's Infra team meeting.

> - (more a todo) update the elastic-recheck bugs

This should hopefully be more of a (trivial?) feature add to ER,
since the imported stories keep the same story numbers as the bugs
from which they originated.

> - investigate our TripleO Alerts (probably will have to use Storyboard
> API instead of Launchpad).
[...]

Thankfully, SB was designed from the very beginning in an API-first
manner with the WebUI merely one possible API client (there are also
other clients like the boartty console client and a CLI). In theory
pretty much anything you can do through the WebUI can also be done
through the API, as opposed to LP where the API is sort of
bolted-on.
-- 
Jeremy Stanley




Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-16 Thread Emilien Macchi
Here's an update so we can hopefully, as a community, take a decision
in the next days or so.


* Migration to StoryBoard

Champion: Kendall Nelson
https://review.openstack.org/#/c/513875/
Some projects have already migrated and some will migrate soon, but
there are still gaps that prevent some projects from migrating.
See 
https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration
For that reason, we are postponing this goal to later but work needs
to keep going to make that happen one day.


* Remove mox

Champion: Sean McGinnis (unless someone else steps up)
https://review.openstack.org/#/c/532361/
This goal is to clean up some technical debt in the code.
It remains a good candidate for Queens.


* Ensure pagination links

Champion: Monty Taylor
https://review.openstack.org/#/c/532627/
This one would improve the API user experience.
It remains a good candidate for Queens.


* Enable mutable configuration
Champion: ChangBo Guo
Nothing has been proposed in governance so far, and we have enough proposals
now; I guess it could be a candidate for a future cycle, though. This
one would make our operators happy.


* Cold upgrades capabilities
Champion: Masayuki Igawa
https://review.openstack.org/#/c/533544/
This one would be appreciated by our operators, who always want
improvements to the upgrade experience - I believe it would be a good
candidate.


Note: some projects have asked for fewer goals so they have
more time to work on their backlogs. While I agree with that, I would
like to know exactly who asked, and whether they would be affected by the
goals or not.
It will help us decide which ones we take.

So now it's really a good time to speak up and say:
- whether your project could commit to 2 of these goals (and if not, why? backlog? etc.)
- which ones you couldn't commit to
- which ones you prefer

We need to take a decision as a community, not just TC members, so
please bring feedback.

Thanks,


On Fri, Jan 12, 2018 at 2:19 PM, Lance Bragstad  wrote:
>
>
> On 01/12/2018 11:09 AM, Tim Bell wrote:
>> I was reading a tweet from Jean-Daniel and wondering if there would be an 
>> appropriate community goal regarding support of some of the later API 
>> versions or whether this would be more of a per-project goal.
>>
>> https://twitter.com/pilgrimstack/status/951860289141641217
>>
>> Interesting numbers about customers tools used to talk to our @OpenStack 
>> APIs and the Keystone v3 compatibility:
>> - 10% are not KeystoneV3 compatible
>> - 16% are compatible
>> - for the rest, the tools documentation has no info
>>
>> I think Keystone V3 and Glance V2 are the ones with APIs which have moved on 
>> significantly from the initial implementations and not all projects have 
>> been keeping up.
> Yeah, I'm super interested in this, too. I'll be honest I'm not quite
> sure where to start. If the tools are open source we can start
> contributing to them directly.
>>
>> Tim
>>
>> -Original Message-
>> From: Emilien Macchi 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Friday, 12 January 2018 at 16:51
>> To: OpenStack Development Mailing List 
>> Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky
>>
>> Here's a quick update before the weekend:
>>
>> 2 goals were proposed to governance:
>>
>> Remove mox
>> https://review.openstack.org/#/c/532361/
>> Champion: Sean McGinnis (unless someone else steps up)
>>
>> Ensure pagination links
>> https://review.openstack.org/#/c/532627/
>> Champion: Monty Taylor
>>
>> 2 more goals are about to be proposed:
>>
>> Enable mutable configuration
>> Champion: ChangBo Guo
>>
>> Cold upgrades capabilities
>> Champion: Masayuki Igawa
>>
>>
>> Thanks everyone for your participation,
>> We hope to make a vote within the next 2 weeks so we can prepare the
>> PTG accordingly.
>>
>> On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi  
>> wrote:
>> > As promised, let's continue the discussion and move things forward.
>> >
>> > This morning Thierry brought the discussion during the TC office hour
>> > (that I couldn't attend due to timezone):
>> > 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33
>> >
>> > Some outputs:
>> >
>> > - One goal has been proposed so far.
>> >
>> > Right now, we only have one goal proposal: Storyboard Migration. There
>> > are some concerns about the ability to achieve this goal in 6 months.
>> > At that point, we think it would be great to postpone the goal to the S
>> > cycle, continue the progress (kudos to Kendall) and find other goals
>> > for Rocky.
>> >
>> >
>> > - We still have a good backlog of goals, we're just missing champions.
>> >
>> > 

[openstack-dev] ping between 2 instances using an ovs in the middle

2018-01-16 Thread David Gabriel
Dears,

I am writing to ask for your help with a problem I have been facing
for a while, related to creating two Ubuntu instances in OpenStack
(Fuel 9.2 for Mitaka) and setting up an OVS bridge in
each VM.
Here is the problem description:
I have defined two instances, called VM1 and VM2, and an OVS bridge; each of
them is deployed in its own virtual machine (VM), based on this simple topology:
*VM1* ---LAN1--- *OVS* ---LAN2--- *VM2*

I used the following commands, taken from some tutorial, for OVS:

ovs-vsctl add-br mybridge1
ifconfig mybridge1 up
ovs-vsctl add-port eth1 mybridge1
ifconfig eth1 0
ovs-vsctl add-port eth1 mybridge1
ovs-vsctl set-controller mybridge tcp:AddressOfController:6633

Then I tried to ping between the two VMs, but it fails!
Could you please tell/guide me how to fix this problem?

Thanks in advance.
Best regards.
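
P.S. Re-reading the ovs-vsctl man page, I suspect the argument order above is
reversed: add-port expects the bridge first (add-port BRIDGE PORT), and the
controller target should name the bridge that was actually created. Here is
the sequence I believe the tutorial intended (a sketch only; eth1/eth2 and the
controller address are placeholders for my setup):

```shell
# Sketch of the intended OVS setup (interface names are placeholders).
ovs-vsctl add-br mybridge1                 # create the bridge
ifconfig mybridge1 up                      # bring the bridge interface up
ovs-vsctl add-port mybridge1 eth1          # attach the LAN1-facing port
ifconfig eth1 0                            # clear the IP from the enslaved port
ovs-vsctl add-port mybridge1 eth2          # attach the LAN2-facing port
ovs-vsctl set-controller mybridge1 tcp:AddressOfController:6633
# If pings still fail, inspect the result with `ovs-vsctl show` and check
# whether the controller is reachable (the bridge fail_mode matters here).
```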
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][fwaas] YouTube video demoing FWaaS V2 L2

2018-01-16 Thread German Eichberger
All,

With great pleasure I am sharing this link to Chandan’s excellent video 
showing the new L2 functionality: 
https://www.youtube.com/watch?v=gBYJIZ4tUaw

Psyched and many thanks to Chandan --

German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Ben Nemec



On 01/16/2018 08:55 AM, Emilien Macchi wrote:

Hey folks,

Alex and I spent a bit of time to look at Storyboard, as it seems that
some OpenStack projects are migrating from Launchpad to Storyboard.
I created a dev instance of storyboard and imported all bugs from
TripleO so we could have a look at how it would be if we were using
the tool:

http://storyboard.macchi.pro:9000/

So far, I liked:

- the import went just... fine. Really good work! Title, status,
descriptions, tags, and comments were successfully imported.
- the simplicity of bug statuses, which are clearer than Launchpad's:
Active, Merged, Invalid.
- the UI is really good and it works fine on mobile.
- if we manage to make the migration good, each TripleO squad would
have their own backlogs and own boards / worklists / ways to manage
todos.

What we need to investigate:
- how do we deal with milestones in stories, and how can we have a
dashboard with an overview per milestone (useful for PTL + TripleO
release managers).
- how to update old Launchpad bugs with the new link in storyboard
(probably by hacking the migration script during the import).
- what do we want the import to look like:
   * all bugs into a single project?
   * closing launchpad/tripleo/bugs access? if so we lose web search
on popular bugs
- (more a todo) update the elastic-recheck bugs
- investigate our TripleO Alerts (probably will have to use the Storyboard
API instead of Launchpad).


We discussed this in the last Designate meeting too, and it was noted 
that there are some stories tracking the migration blocking issues: 
https://storyboard.openstack.org/#!/search?tags=blocking-storyboard-migration


It might be good to add stories for any of these issues that aren't 
already covered.




Anyway, as a short conclusion: I think the project is great, but until
we figure out what we need to investigate we can't migrate easily.
If you would like to be involved on that topic, please let us know,
any help is welcome.

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance

2018-01-16 Thread Matthew Thode
On 18-01-16 19:12:16, ChangBo Guo wrote:
> What's the issue for Glance? Any bug link?
> 
> 2018-01-16 0:12 GMT+08:00 Matthew Thode :
> 
> > On 18-01-13 00:41:28, Matthew Thode wrote:
> > > https://review.openstack.org/531788 is the review we are seeing it in,
> > > but 2.22.0 failed as well.
> > >
> > > I'm guessing it was introduced in either
> > >
> > > https://github.com/openstack/oslo.serialization/commit/
> > c1a7079c26d27a2e46cca26963d3d9aa040bdbe8
> > > or
> > > https://github.com/openstack/oslo.serialization/commit/
> > cdb2f60d26e3b65b6370f87b2e9864045651c117
> >
> > bump
> >

The best bug for this is
https://bugs.launchpad.net/oslo.serialization/+bug/1728368 and we are
currently getting test fails in https://review.openstack.org/531788

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Emmet Hikory
Emilien Macchi wrote:

> What we need to investigate:
> - how do we deal milestones in stories and also how can we have a
> dashboard with an overview per milestone (useful for PTL + TripleO
> release managers).

    While the storyboard API supports milestones, they don’t work much 
like “milestones” in Launchpad, so they would probably be confusing to adopt (and 
have no UI support).  Some folk use tags for this (perhaps with an automatic 
worklist that selects all the stories with the tag, for an overview).

>  - how to update old Launchpad bugs with the new link in storyboard
> (probably by hacking the migration script during the import).

    There had been a task on the migration story to do this, but it was 
dropped in favour of project-wide communications rather than per-bug 
communications.  An add-on script to modify bugs after migration is complete is 
probably a richer solution than direct modification of the current migration 
script (which allows multiple runs to keep up to date during the transition), if 
TripleO wishes per-bug communication.

    Some context at https://storyboard.openstack.org/#!/story/2000876
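
    A minimal sketch of such an add-on script (hedged: the launchpadlib call
names are my best reading of its docs, real credentials are required, and the
story id equals the bug id only because the migration keeps numbers):

```python
def migration_note(story_id):
    """Build the comment body pointing a migrated LP bug at its story."""
    return ("This bug has been migrated to StoryBoard; further updates at "
            f"https://storyboard.openstack.org/#!/story/{story_id}")

def annotate_bug(bug_id):
    """Post the pointer comment on one Launchpad bug.

    Requires launchpadlib and Launchpad credentials; the API calls below
    are assumptions based on launchpadlib documentation.
    """
    from launchpadlib.launchpad import Launchpad  # third-party dependency
    lp = Launchpad.login_with("sb-migration-note", "production")
    bug = lp.bugs[bug_id]
    bug.newMessage(content=migration_note(bug_id))
```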

—
Emmet HIKORY

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG

2018-01-16 Thread Zhipeng Huang
Application filed at https://review.openstack.org/534342, and wiki created at
https://wiki.openstack.org/wiki/Res_Mgmt_SIG .

On Wed, Jan 10, 2018 at 12:51 AM, Zhipeng Huang 
wrote:

> I think I could do it, but I gotta rely on you guys to attend the Resource
> Management WG meeting since its time is really bad for us in APAC timezone
> :P
>
> On Tue, Jan 9, 2018 at 6:30 PM, Chris Dent  wrote:
>
>> On Mon, 8 Jan 2018, Jay Pipes wrote:
>>
>> I think having a bi-weekly cross-project (or even cross-ecosystem if
>>> we're talking about OpenStack+k8s) status email reporting any big events in
>>> the resource tracking world would be useful. As far as regular meetings for
>>> a resource management SIG, I'm +0 on that. I prefer to have targeted
>>> topical meetings over regular meetings.
>>>
>>
>> I agree, would much prefer to see more email and less meetings. It
>> would be fantastic if we can get some cross pollination disucssion
>> happening.
>>
>> A status email, especially one that was cross-ecosystem, would be
>> great. Unfortunately I can't commit to doing that myself (the
>> existing 2 a week I do is plenty) but hope someone will take it up.
>>
>> --
>> Chris Dent  (⊙_⊙') https://anticdent.org/
>> freenode: cdent tw: @anticdent
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] storyboard evaluation

2018-01-16 Thread Emilien Macchi
Hey folks,

Alex and I spent a bit of time to look at Storyboard, as it seems that
some OpenStack projects are migrating from Launchpad to Storyboard.
I created a dev instance of storyboard and imported all bugs from
TripleO so we could have a look at how it would be if we were using
the tool:

http://storyboard.macchi.pro:9000/

So far, I liked:

- the import went just... fine. Really good work! Title, status,
descriptions, tags, and comments were successfully imported.
- the simplicity of bug statuses, which are clearer than Launchpad's:
Active, Merged, Invalid.
- the UI is really good and it works fine on mobile.
- if we manage to make the migration good, each TripleO squad would
have their own backlogs and own boards / worklists / ways to manage
todos.

What we need to investigate:
- how do we deal with milestones in stories, and how can we have a
dashboard with an overview per milestone (useful for PTL + TripleO
release managers).
- how to update old Launchpad bugs with the new link in storyboard
(probably by hacking the migration script during the import).
- what do we want the import to look like:
  * all bugs into a single project?
  * closing launchpad/tripleo/bugs access? if so we lose web search
on popular bugs
- (more a todo) update the elastic-recheck bugs
- investigate our TripleO Alerts (probably will have to use the Storyboard
API instead of Launchpad).

Anyway, as a short conclusion: I think the project is great, but until
we figure out what we need to investigate we can't migrate easily.
If you would like to be involved on that topic, please let us know,
any help is welcome.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug deputy report

2018-01-16 Thread Akihiro Motoki
Gary,

> I need to drop off bug duty on Thursday night so if someone can please swap 
> me for Friday.

I am the bug deputy for next week. I can start my coverage this Friday.

Thanks,
Akihiro

2018-01-16 21:51 GMT+09:00 Gary Kotton :
> Hi,
>
> Things have been relatively quiet. There are two bugs:
>
> 1. https://bugs.launchpad.net/neutron/+bug/1743480 - I think that we can
> leverage tags here so that should address the issue. Would be interesting to
> know what others think.
>
> 2. https://bugs.launchpad.net/neutron/+bug/1743552 - patch in review
> https://review.openstack.org/#/c/534263/
>
> I need to drop off bug duty on Thursday night so if someone can please swap
> me for Friday.
>
> Thanks
>
> Gary
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ops Mid Cycle in Tokyo Mar 7-8 2018

2018-01-16 Thread David Medberry
Hi all,

Broad distribution to make sure folks are aware of the upcoming Ops Meetup
in Tokyo.

You can help "steer" this meetup by participating in the planning meetings
or more practically by editing this page (respectfully):
https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Sign-up for the meetup is here: https://goo.gl/HBJkPy

We'll see you there!

-dave
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] not run for PTL

2018-01-16 Thread Doug Hellmann
Excerpts from ChangBo Guo's message of 2018-01-16 18:55:26 +0800:
> Hi Oslo folks,
> 
> I have taken the role of PTL for the last 2 cycles, and would like to focus
> on coding this cycle.
> It's time to let a new leader make Oslo better. So I won't be running
> for PTL reelection for the Rocky cycle. Thanks for all of your support and
> trust over the last 2 cycles.
> 

Thank you for serving as PTL, gcb! The libraries have been quite stable
lately under your leadership.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Bug deputy report

2018-01-16 Thread Gary Kotton
Hi,
Things have been relatively quiet. There are two bugs:
1. https://bugs.launchpad.net/neutron/+bug/1743480 - I think that we can 
leverage tags here so that should address the issue. Would be interesting to 
know what others think.
2. https://bugs.launchpad.net/neutron/+bug/1743552 - patch in review 
https://review.openstack.org/#/c/534263/
I need to drop off bug duty on Thursday night so if someone can please swap me 
for Friday.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] not run for PTL

2018-01-16 Thread Davanum Srinivas
Thanks for all your help @gcb!

On Tue, Jan 16, 2018 at 5:55 AM, ChangBo Guo  wrote:
> Hi Oslo folks,
>
> I have taken the role of PTL for the last 2 cycles, and would like to focus
> on coding this cycle.
> It's time to let a new leader make Oslo better. So I won't be running
> for PTL reelection for the Rocky cycle. Thanks for all of your support and
> trust over the last 2 cycles.
>
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] Dublin PTG agenda

2018-01-16 Thread Alexander Chadin
Watcher team,

We are preparing the PTG agenda here: 
https://etherpad.openstack.org/p/rocky-watcher-ptg
Feel free to add your topics. I’m going to discuss this content at the next
weekly meeting (which will be held tomorrow at 13:00 UTC).

Best Regards,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][oslo][requirements] oslo.serialization fails with glance

2018-01-16 Thread ChangBo Guo
What's the issue for Glance? Any bug link?

2018-01-16 0:12 GMT+08:00 Matthew Thode :

> On 18-01-13 00:41:28, Matthew Thode wrote:
> > https://review.openstack.org/531788 is the review we are seeing it in,
> > but 2.22.0 failed as well.
> >
> > I'm guessing it was introduced in either
> >
> > https://github.com/openstack/oslo.serialization/commit/
> c1a7079c26d27a2e46cca26963d3d9aa040bdbe8
> > or
> > https://github.com/openstack/oslo.serialization/commit/
> cdb2f60d26e3b65b6370f87b2e9864045651c117
>
> bump
>
> --
> Matthew Thode (prometheanfire)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] not run for PTL

2018-01-16 Thread ChangBo Guo
Hi Oslo folks,

I have taken the role of PTL for the last 2 cycles, and would like to focus on
coding this cycle.
It's time to let a new leader make Oslo better. So I won't be running
for PTL reelection for the Rocky cycle. Thanks for all of your support and
trust over the last 2 cycles.


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Adding Adriano Petrich to the core team

2018-01-16 Thread Adriano Petrich
Thank you!

On Tue, Jan 16, 2018 at 4:03 AM, Renat Akhmerov 
wrote:

> Adriano, you now have +2 vote and can approve patches :) Welcome!
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 16 Jan 2018, 05:03 +0700, Lingxian Kong , wrote:
>
> welcome to the team, Adriano!
>
>
> Cheers,
> Lingxian Kong (Larry)
>
> On Mon, Jan 15, 2018 at 10:11 PM, Renat Akhmerov wrote:
>
>> Hi,
>>
>> I’d like to promote Adriano Petrich to the Mistral core team. Adriano has
>> shown the good review rate and quality at least over the last two cycles
>> and implemented several important features (including new useful YAQL/JINJA
>> functions).
>>
>> Please vote whether you agree to add Adriano to the core team.
>>
>> Adriano’s statistics: http://stackalytics.com/?module=mistral-group&release=queens&user_id=apetrich
>>
>> Thanks
>>
>> Renat Akhmerov
>> @Nokia
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev