Re: [openstack-dev] [horizon] Adding Ivan Kolodyazhny to the Horizon Core team

2017-12-15 Thread Jay S Bryant

Ivan,

Congratulations!

Jay


On 12/15/2017 10:52 AM, Ying Zuo (yinzuo) wrote:


Hi everyone,

After some discussion with the Horizon Core team, I am pleased to 
announce that we are adding Ivan Kolodyazhny to the team. Ivan has 
been actively contributing to Horizon since the beginning of the Pike 
release. He has worked on many bug fixes, blueprints, and performed 
thorough code reviews. You may have known him by his IRC nickname e0ne 
since he's one of the most active members on #openstack-horizon. 
Please join me in welcoming Ivan to the Horizon Core team :)


Regards,

Ying





Re: [openstack-dev] Race in FixedIP.associate_pool

2017-12-15 Thread Arun SAG
Hi Jay,

On Fri, Dec 15, 2017 at 1:56 PM, Jay Pipes  wrote:

> Can you point us to the code that is generating the above? It seems that
> get_instance_nw_info() in the Yahoo! manager.py contrib module line 965 is
> trying to build network information for an empty list of vNICs... where is
> that list of vNICs coming from?


The vNICs are empty because objects.FixedIPList.get_by_instance_uuid
is empty 
https://github.com/openstack/nova/blob/master/nova/network/manager.py#L527
The Yahoo! manager.py's get_by_instance_uuid is essentially the same as
the upstream code, except that we change the VIF_TYPE in
get_instance_nw_info:

@messaging.expected_exceptions(exception.InstanceNotFound)
def get_instance_nw_info(self, context, instance_id, rxtx_factor,
                         host, instance_uuid=None, **kwargs):
    """Creates network info list for instance.

    called by allocate_for_instance and network_api
    context needs to be elevated
    :returns: network info list [(network,info),(network,info)...]
    where network = dict containing pertinent data from a network db object
    and info = dict containing pertinent networking data
    """
    if not uuidutils.is_uuid_like(instance_id):
        instance_id = instance_uuid
    instance_uuid = instance_id
    LOG.debug('Get instance network info', instance_uuid=instance_uuid)

    try:
        fixed_ips = objects.FixedIPList.get_by_instance_uuid(
            context, instance_uuid)
    except exception.FixedIpNotFoundForInstance:
        fixed_ips = []

    LOG.debug('Found %d fixed IPs associated to the instance in the '
              'database.',
              len(fixed_ips), instance_uuid=instance_uuid)

    nw_info = network_model.NetworkInfo()
    # (saga): The default VIF_TYPE is bridge. We need to use OVS
    # This is the only reason we copied this method from the base class
    if (not CONF.network_driver or
            CONF.network_driver == 'nova.network.linux_net'):
        if (CONF.linuxnet_interface_driver ==
                'nova.network.linux_net.LinuxOVSInterfaceDriver'):
            vif_type = network_model.VIF_TYPE_OVS


Here is the sequence of actions that happens in nova-network:

1. allocate_for_instance calls -> allocate_fixed_ips
2. FixedIPs are successfully associated (we can see this in the log)
3. allocate_for_instance calls get_instance_nw_info, which in turn
fetches the fixed IPs associated in step 2 using
objects.FixedIPList.get_by_instance_uuid. This raises a
FixedIpNotFoundForInstance exception.

We removed the slave and ran with just a single master, and the errors
went away. We also tried switching to semi-synchronous replication
between the master and the slave, and the errors went away too. All of
this points to a race between the write and the read to the DB.

Does OpenStack expect synchronous replication to read-only slaves?
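
For illustration only (this is not code from our deployment), here is a
minimal sketch of the kind of client-side retry that would paper over the
replication lag, reusing the object and exception names from the snippet
above:

import time

from nova import exception
from nova import objects


def get_fixed_ips_with_retry(context, instance_uuid, retries=3, delay=0.5):
    # The write in allocate_fixed_ips goes to the master; if the
    # subsequent read is served by a lagging read-only slave, the
    # first attempts may legitimately come back empty.
    for attempt in range(retries + 1):
        try:
            fixed_ips = objects.FixedIPList.get_by_instance_uuid(
                context, instance_uuid)
            if fixed_ips:
                return fixed_ips
        except exception.FixedIpNotFoundForInstance:
            pass
        if attempt < retries:
            time.sleep(delay)
    return []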


-- 
Arun S A G
http://zer0c00l.in/



Re: [openstack-dev] [ceilometer] about add transformer into libvirt

2017-12-15 Thread Jaze Lee
2017-12-15 23:17 GMT+08:00 gordon chung :
>
>
> On 2017-12-14 10:38 PM, Jaze Lee wrote:
>> That sounds great. When will gnocchi be ready to do the transformers' work?
>> If it takes a long time, we will have to move to compute temporarily.
>
> it already exists in gnocchi 4.1+. we just need to change the workflow
> in ceilometer so it integrates with gnocchi.
Oh, I do not quite understand; can you give more details?
I do not know which part of the ceilometer workflow needs to change.


>
>
> --
> gord



-- 
谦谦君子



Re: [openstack-dev] [neutron] Stepping down from core

2017-12-15 Thread Rochelle Grober
Armando,

You’ve been great for Neutron.  It’s sad to see you have to cut back, but it’s 
great to hear you aren’t totally leaving.

Thank you for all of your hard work.  You’ve brought Neutron along quite 
nicely.  I’d also like to thank you for all of your help with the stadium 
projects.  Your mentorship has been invaluable.

And thanks for all the fish,
--Rocky

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, December 15, 2017 11:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron] Stepping down from core

Hi neutrinos,

To some of you this email may not come as a surprise.

During the past few months my upstream community engagements have been more and 
more sporadic. While I tried hard to stay committed and fulfill my core 
responsibilities I feel like I failed to retain the level of quality and 
consistency that I would have liked ever since I stepped down from being the 
Neutron PTL back at the end of Ocata.

I stated many times when talking to other core developers that being core is a 
duty rather than a privilege, and I personally feel like it's way overdue for 
me to recognize on the mailing list that it's the time that I state officially 
my intention to step down due to other commitments.

This does not mean that I will disappear tomorrow. I'll continue to be on 
neutron IRC channels, support the neutron team, be the release liaison for 
Queens, participate in meetings, and be open to providing feedback to anyone 
who thinks my opinion is still valuable, especially when dealing with the 
neutron quirks for which I might be (git) blamed :)

Cheers,
Armando



Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Thomas Goirand
On 12/15/2017 05:52 PM, Matt Riedemann wrote:
> On 12/15/2017 9:15 AM, Thomas Goirand wrote:
>> Not only that. Everyone is lagging a few releases behind, and currently,
>> upstream OpenStack doesn't care about backporting to older releases.
> 
> Can you clarify this please? The nova team is definitely backporting
> fixes to pike, ocata and newton. Newton isn't EOL yet *because* nova has
> held it up backporting fixes that we think are important enough to get
> in there before we EOL the branch.

I very much appreciate what has been done with the CVE fixes. Thanks a
lot for this, especially since it looked quite tricky and way above the
level of patch I could safely backport by myself.

> If you're talking about LTS, that's a different story, but please don't
> say upstream OpenStack doesn't care about backporting fixes. That might
> be a per-project statement, but in general it's untrue.

After re-reading my message, I noticed that it could be read in a variety
of ways. Sorry for that; it's typical of me, maybe because I'm not a
native English speaker. :(

Let me attempt to correct myself.

First, it wasn't "upstream doesn't care about anyone, upstream is bad". It
was more: upstream currently doesn't have support in place for a long
enough time for its security bugfixes to be relevant to distros.

In more detail:

Distributions are all advertising 5 years of support. For my
own case, and considering the last Debian release, Newton was out a year
ago, a bit before the Debian Stretch freeze. Stretch was then released on
the 17th of June, while Newton was officially EOL on the 11th of
October. This means that, officially, Debian received 4 months of
official support during the lifetime of its release, which is supposed
to be at least 3 years, and preferably 5 (if we account for the LTS effort).
So even without talking about OpenStack LTS, I hope everyone understands
that for me & Debian, the *official* security support is as good as
nonexistent when dealing with Debian Stable.

Luckily, as always within this awesome OpenStack community, nearly
everyone from individual projects has been super helpful when
I asked. However, even with very nice people, this helpfulness has
limits, and official longer support would definitely help.

Anyway, all this was to say: I'm convinced that releasing less often
will help. I don't think backporting from master to Pike, Ocata and
Newton has so much value, but it's a lot of effort upstream. And in
Debian's case, Ocata backport wasn't needed. Even if we're not talking
about LTS, I'm sure having half the number of backports may help extend
the life of stable releases.

I hope it's clearer this time,
Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [First Contact] [SIG] Presence at the PTG

2017-12-15 Thread Kendall Nelson
I'm thinking more like half a day to a full day, but if you can't make it
the whole time, I will take notes throughout our discussions and likely
type up a summary to send to the dev list at the end.

-Kendall (diablo_rojo)

On Thu, Dec 14, 2017 at 5:10 PM Ghanshyam Mann 
wrote:

> +1, nice idea. I will make it.
>
> btw, what will be the meeting duration you are planning like an hour or 2 ?
>
>
> -gmann
>
>
> On Fri, Dec 15, 2017 at 5:55 AM, Kendall Nelson 
> wrote:
> > Hello,
> >
> > It came up in a discussion today that it might be good to get together
> and
> > discuss all the activities around onboarding and various other initial
> > interactions to get us all on the same page and a little more
> > organized/established.
> >
> > Given that SIGs have space at the beginning of the week (Mon/Tuesday), I am
> > proposing that we meet for one of these days. If you are interested,
> please
> > let me know so I can get a headcount.
> >
> > -Kendall (diablo_rojo)
> >
> >


Re: [openstack-dev] Race in FixedIP.associate_pool

2017-12-15 Thread Jay Pipes

On 12/12/2017 03:22 PM, Arun SAG wrote:

This is kind of how the logs look:
2017-12-08 22:33:37,124 DEBUG
[yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
/opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:894
Fixed IP NOT found for instance
2017-12-08 22:33:37,125 DEBUG
[yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
/opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:965
Built network info: |[]|


Can you point us to the code that is generating the above? It seems that 
get_instance_nw_info() in the Yahoo! manager.py contrib module line 965 
is trying to build network information for an empty list of vNICs... 
where is that list of vNICs coming from?


Best,
-jay



Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread James Penick
On Thu, Dec 14, 2017 at 7:07 AM, Dan Smith  wrote:

>
>
> Agreed. The first reaction I had to this proposal was pretty much what
> you state here: that now the 20% person has a 365-day window in which
> they have to keep their head in the game, instead of a 180-day one.
>
> Assuming doubling the length of the cycle has no impact on the
> _importance_ of the thing the 20% person is working on, relative to
> project priorities, then the longer cycle just means they have to
> continuously rebase for a longer period of time.
>

+1, I see yearly releases as something that will inevitably hinder project
velocity, not help it.

-James


[openstack-dev] [keystone] Keystone weekly update - Week of 11 December 2017

2017-12-15 Thread Lance Bragstad
Hey all,

Things were pretty light this week since people are preparing for the
holidays. Colleen is out this week, so I'm going to take a stab at the
weekly report.


# Keystone Team Update - Week of 11 December 2017

## News

### Specification Freeze

December 8th was specification freeze for keystone. To recap, we've
accepted application credentials [0], system scope [1], and project tags
[2] for Queens. A specification freeze exception was issued [3] for
unified limits [4], and the specification is pending final reviews.
Please weigh in on the specification if you have any outstanding concerns.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
[1] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/project-tags.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125331.html
[4] https://review.openstack.org/#/c/455709/

### Feature work

We do have a lot of things in flight feature-wise that need reviews.
Some patches aren't passing tests just yet, but early feedback is important:
  System scope: https://goo.gl/HPFHs8
  Application credentials: https://goo.gl/usx52i
  Unified limits: https://goo.gl/vc8QLu

The project tag implementation is only waiting on a couple client
patches, which are getting really close (https://goo.gl/oTskBE ).
Otherwise there are a bunch of patches that are passing and need
reviews. More on that below.

### PTG Registration

Just a reminder that registration is open for the PTG in
Dublin: https://www.openstack.org/ptg/

The keystone team will start diving into planning sessions for the PTG
right after New Years. You can expect to see more information about
keystone's schedule then.

## Open Specs

Search query: https://goo.gl/pc8cCf

As mentioned above, unified limits is the only specification pending
review for Queens. There has been some good discussion on a revived
specification for user-specified project IDs [5]. Given where we are in
the release, this will likely be something we pursue for Rocky, but
getting early feedback is important and we should continue to have
discussion on the topic. Thanks to the folks from Orange for providing
use cases and details, which will be useful moving forward!

[5] https://review.openstack.org/#/c/323499/

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 7 changes this week.

Among those were a couple of patches to lazily load memcached libraries for
keystonemiddleware and some documentation fixes.

## Changes that need Attention

Search query: https://goo.gl/YiLt6o

There are 67 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots, so their authors are
waiting for feedback from reviewers. Please have a look at them.

## Milestone Outlook

https://releases.openstack.org/queens/schedule.html

We haven't accepted anything spec-wise that doesn't have an
implementation up for review, but feature proposal freeze is next week.
Just a reminder that we are into Queens-3, so reviews on remaining
feature work are going to be critical. Don't hesitate to ask if you have
any questions about what needs reviews or if you have an interest in
moving along any of those efforts.

## Shout-outs

Thanks to Gage, Tin, Nicolas, Jaewoo, Sam, and Rohan for all the work
they've done with project tags, which started coming to a head wrapping
up client changes this week.

## Help with this newsletter

Help contribute to this newsletter by editing the
etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter




Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread John Trowbridge
On Fri, Dec 15, 2017 at 1:15 PM, Ben Nemec  wrote:

>
>
> On 12/15/2017 10:26 AM, Emilien Macchi wrote:
>
>> On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
>> [...]
>>
>>> The goal of this sprint was to enable in quickstart a way to reproduce
>>> upstream jobs in your personal RDO Cloud tenant, making it easy for
>>> developers
>>> to debug and reproduce their code.
>>>
>>
>> This phrase confused some non-Red-Hat OpenStack contributors on
>> #openstack-tc:
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23op
>> enstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59
>>
>> 2 questions came up:
>>
>> 1) Do we need RDO Cloud access to reproduce TripleO CI jobs?
>>
>> I think the answer is no. What you need though is an OpenStack cloud,
>> with the work that is being done here:
>> https://review.openstack.org/#/c/525743
>>
>> I'll let the TripleO CI team confirm that; no, you don't need RDO
>> Cloud access.
>>
>
> /me makes yet another note to try OVB against a public cloud
>
> At the moment, at least for the OVB jobs, you pretty much do need access
> to either RDO cloud or rh1/2.  It _may_ work against some public clouds,
> but I don't know of anyone trying it yet so I can't really recommend it.
>

Ah right, didn't think about the OVB part. That has nothing to do with the
reproducer script though... It is just not possible to reproduce OVB jobs
against a non-OVB cloud. The multinode jobs will work against any cloud
though.


>
>
>>
>> 2) Can anyone have access to RDO Cloud resources?
>>
>> One of the reasons for creating RDO Cloud was so that developers
>> can get resources to build OpenStack.
>> RDO community organizes something called "test days", where anyone is
>> welcome to join and test OpenStack on centos7 with RDO packages.
>> See: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-qu
>> eens-deployment/
>> The event is announced on RDO users mailing list:
>> https://lists.rdoproject.org/pipermail/users/2017-December/79.html
>> Other than that, I'm not sure about the process if someone needs
>> full-time access. FWIW, I never saw any rejection in the past. We
>> welcome contributors and we want to help how we can.
>>
>
> I am aware of a few people who have been rejected for RDO cloud access,
> and given the capacity constraints it is currently under I suspect there
> would need to be strong justification for new users.  I'm _not_ an RDO
> cloud admin though, so that's not an official statement of any kind.
>
> Also note that the test day is not happening on RDO cloud, but on a
> separate single node cloud (per https://etherpad.openstack.org
> /p/rdo-queens-m2-cloud).  It would not be particularly well suited to
> reproducing CI and presumably won't be around for long.
>
> So the story's not great right now unless you already have access to cloud
> resources.  The developer hardware requirements problem is not quite solved
> yet. :-/
>
>


[openstack-dev] [neutron] Stepping down from core

2017-12-15 Thread Armando M.
Hi neutrinos,

To some of you this email may not come as a surprise.

During the past few months my upstream community engagements have been more
and more sporadic. While I tried hard to stay committed and fulfill my core
responsibilities I feel like I failed to retain the level of quality and
consistency that I would have liked ever since I stepped down from being
the Neutron PTL back at the end of Ocata.

I stated many times when talking to other core developers that being core
is a duty rather than a privilege, and I personally feel like it's way
overdue for me to recognize on the mailing list that it's the time that I
state officially my intention to step down due to other commitments.

This does not mean that I will disappear tomorrow. I'll continue to be on
neutron IRC channels, support the neutron team, be the release liaison
for Queens, participate in meetings, and be open to providing feedback to
anyone who thinks my opinion is still valuable, especially when dealing
with the neutron quirks for which I might be (git) blamed :)

Cheers,
Armando


Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-15 Thread Alex Schultz
On Thu, Dec 14, 2017 at 5:01 PM, Tony Breeds  wrote:
> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
>> I assume since some of this work was sort of done earlier outside of
>> tripleo and does not affect the default installation path that most
>> folks will consume, it shouldn't impact general testing or
>> increase regressions. My general requirement for anyone who needed an
>> FFE for functionality that isn't essential is that it's off by
>> default, has minimal impact to the existing functionality and we have
>> a rough estimate on feature landing.  Do you have an idea of when you expect
>> to land this functionality? Additionally the patches seem to be
>> primarily around the ironic integration so have those been sorted out?
>
> Sadly this is going to be more impactful on x86 than anyone will like,
> and I apologise for not raising these issues before now.
>
> There are 3 main aspects:
> 1. Ironic integration/provisioning setup.
>1.1 Teaching ironic inspector how to deal with ppc64le memory
>detection.  There are a couple of approaches there but they don't
>directly impact tripleo
>1.2 I think there will be some work with puppet-ironic to setup the
>introspection dnsmasq in a way that's compatible with multi-arch.
>Right now this is the introduction of a new tag (based on options
>in the DHCP request) and then sending different responses in the
>presence/absence of that.  Very much akin to the ipxe stuff there
>today.
>1.3 Helping tripleo understand that there is now more than one
>deploy/overcloud image and correctly using that.  These are mostly
>covered with the review Mark published but there are the backwards
>compat/corner cases to deal with.
>1.4 Right now ppc64le has very specific requirements with respect to
>the boot partition layout. Last time I checked these weren't
>handled by default in ironic.  The simple workaround here is to
>make the overcloud image on ppc64le a whole disk rather than a
>single partition and I think given the scope of everything else
>that's the most likely outcome for queens.
>
> 2. Containers.
>Here we run into several issues, not least of which is my general
>lack of understanding of containers but the challenges as I
>understand them are:
>2.1 Having a venue to build/publish/test ppc64le container builds.
>This in many ways is tied to the CI issue below, but all of the
>potential solutions require some container image for ppc64le to
>be available to validate that adding them doesn't impact x86_64.
>2.2 As I understand it the right way to do multi-arch containers is
>with an image manifest or manifest list images[1]  There are so
>many open questions here.
>2.2.1 If the container registry supports manifest lists when we
>  pull them onto the undercloud can we get *all*
>  layers/objects - or will we just get the one that matches
>  the host CPU?
>2.2.2 If the container registry doesn't support manifest list
>  images, can we use something like manifest-tool[2] to pull
>  "nova" from multiple registries or orgs on the same
>  registry and combine them into a single manifest image on
>  the undercloud?
>2.2.3 Do we give up entirely on manifest images and just have
>  multiple images / tags on the undercloud for example:
> nova:latest
> nova:x86_64_latest
> nova:ppc64le_64_latest
>  and have the deployed node pull the $(arch)_latest tag
>  first and if $(arch) == x86_64 pull the :latest tag if the
>  first pull failed?
>2.3 All the things I can't describe/know about 'cause I haven't
>gotten there yet.
> 3. CI
>There isn't any ppc64le CI for tripleo and frankly there won't be in
>the foreseeable future.  Given the CI that's in place on x86 we can
>confidently assert that we won't break x86 but the code paths we add
>for power will largely be untested (beyond unit tests) and any/all
>issues will have to be caught by downstream teams.
>
> So as you can see the aim is to have minimal impact on x86_64 and
> default to the existing behaviour in the absence of anything
> specifically requesting multi-arch support.  but minimal *may* be > none
> :(
>
> As to code ETAs realistically all of the ironic related code will be
> public by m3 but probably not merged, and the containers stuff is
> somewhat dependent on that work / direction from the community on how to
> handle the points I enumerated.
>

Perhaps we can start reviewing the items and those with little to no
impact we can merge for the remainder of the cycle. I know
realistically everything has an impact so it'll be >0, but let's try
and keep it as close to 0 
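
Not from the original thread, but purely as an illustration of the fallback
described in 2.2.3 above, here is a minimal sketch (registry and image names
are made-up placeholders) of pulling an arch-specific tag first and only
falling back to :latest on x86_64:

import platform
import subprocess


def pull_with_arch_fallback(registry, image):
    # Mirrors option 2.2.3: no manifest lists, just per-arch tags plus
    # a plain :latest that is assumed to be x86_64 only.
    arch = platform.machine()  # e.g. 'x86_64' or 'ppc64le'
    candidates = ['{}/{}:{}_latest'.format(registry, image, arch)]
    if arch == 'x86_64':
        candidates.append('{}/{}:latest'.format(registry, image))

    for ref in candidates:
        if subprocess.call(['docker', 'pull', ref]) == 0:
            return ref
    raise RuntimeError('no usable tag found for %s/%s' % (registry, image))


# Hypothetical usage (undercloud registry address is a placeholder):
# pull_with_arch_fallback('192.168.24.1:8787/tripleoqueens', 'centos-binary-nova-api')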

Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Ben Nemec



On 12/15/2017 10:26 AM, Emilien Macchi wrote:

On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
[...]

The goal of this sprint was to enable in quickstart a way to reproduce
upstream jobs in your personal RDO Cloud tenant, making it easy for developers
to debug and reproduce their code.


This phrase confused some non-Red-Hat OpenStack contributors on #openstack-tc:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59

2 questions came up:

1) Do we need RDO Cloud access to reproduce TripleO CI jobs?

I think the answer is no. What you need though is an OpenStack cloud,
with the work that is being done here:
https://review.openstack.org/#/c/525743

I'll let the TripleO CI team confirm that; no, you don't need RDO
Cloud access.


/me makes yet another note to try OVB against a public cloud

At the moment, at least for the OVB jobs, you pretty much do need access 
to either RDO cloud or rh1/2.  It _may_ work against some public clouds, 
but I don't know of anyone trying it yet so I can't really recommend it.





2) Can anyone have access to RDO Cloud resources?

One of the reasons for creating RDO Cloud was so that developers
can get resources to build OpenStack.
RDO community organizes something called "test days", where anyone is
welcome to join and test OpenStack on centos7 with RDO packages.
See: 
https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
The event is announced on RDO users mailing list:
https://lists.rdoproject.org/pipermail/users/2017-December/79.html
Other than that, I'm not sure about the process if someone needs
full-time access. FWIW, I never saw any rejection in the past. We
welcome contributors and we want to help how we can.


I am aware of a few people who have been rejected for RDO cloud access, 
and given the capacity constraints it is currently under I suspect there 
would need to be strong justification for new users.  I'm _not_ an RDO 
cloud admin though, so that's not an official statement of any kind.


Also note that the test day is not happening on RDO cloud, but on a 
separate single node cloud (per 
https://etherpad.openstack.org/p/rdo-queens-m2-cloud).  It would not be 
particularly well suited to reproducing CI and presumably won't be 
around for long.


So the story's not great right now unless you already have access to 
cloud resources.  The developer hardware requirements problem is not 
quite solved yet. :-/




Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread John Trowbridge
On Fri, Dec 15, 2017 at 11:26 AM, Emilien Macchi  wrote:

> On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
> [...]
> > The goal of this sprint was to enable in quickstart a way to reproduce
> > upstream jobs in your personal RDO Cloud tenant, making it easy for
> developers
> > to debug and reproduce their code.
>
> This phrase confused some non-Red-Hat OpenStack contributors on
> #openstack-tc:
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%
> 23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59
>
> 2 questions came up:
>
> 1) Do we need RDO Cloud access to reproduce TripleO CI jobs?
>
> I think the answer is no. What you need though is an OpenStack cloud,
> with the work that is being done here:
> https://review.openstack.org/#/c/525743
>
> I'll let the TripleO CI team confirm that; no, you don't need RDO
> Cloud access.
>

Correct, the reproducer script work does not require being run specifically
on RDO Cloud. Downloading images will be
a bit slower, since the images are hosted on the same infra as RDO Cloud.
However, the script simply creates the
resources nodepool would create on any OpenStack cloud, then runs the exact
script from CI.
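
Not the actual reproducer code, but conceptually it boils down to something
like this sketch with openstacksdk (the cloud, image, flavor and key names
are placeholders):

import openstack

# Connect using a clouds.yaml entry for your personal tenant.
conn = openstack.connect(cloud='my-personal-tenant')

# Create the kind of node nodepool would normally provide, then run
# the CI script on it.
server = conn.create_server(
    name='tripleo-ci-reproducer',
    image='centos-7-upstream',
    flavor='m1.large',
    key_name='my-key',
    wait=True,
    auto_ip=True,
)
print('node ready at %s; run the CI script there' % server.public_v4)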


Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2017-12-15 Thread Emilien Macchi
On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi  wrote:
[...]
> Suggestions are welcome:
> - on the mailing-list, in a new thread per goal [all] [tc] Proposing
> goal XYZ for Rocky
> - on Gerrit in openstack/governance like Kendall did.

Just a fresh reminder about Rocky goals.
A few questions that we can ask ourselves:

1) What common challenges do we have?

e.g. Some projects don't have mutable configuration or some projects
aren't tested against IPv6 clouds, etc.

2) Who is willing to drive a community goal (a.k.a. Champion)?

note: a Champion is someone who volunteers to drive the goal, but
doesn't necessarily commit to writing the code. The Champion will
communicate with project PTLs about the goal, and act as the liaison if
needed.

The list of ideas for Community Goals is documented here:
https://etherpad.openstack.org/p/community-goals

Please be involved and propose some ideas, I'm sure our community has
some common goals, right ? :-)
Thanks, and happy holidays. I'll follow-up in January of next year.
-- 
Emilien Macchi



[openstack-dev] [horizon] Adding Ivan Kolodyazhny to the Horizon Core team

2017-12-15 Thread Ying Zuo (yinzuo)
Hi everyone,

After some discussion with the Horizon Core team, I am pleased to announce that 
we are adding Ivan Kolodyazhny to the team. Ivan has been actively contributing 
to Horizon since the beginning of the Pike release. He has worked on many bug 
fixes, blueprints, and performed thorough code reviews. You may have known him 
by his IRC nickname e0ne since he's one of the most active members on 
#openstack-horizon. Please join me in welcoming Ivan to the Horizon Core team :)


Regards,
Ying



Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Matt Riedemann

On 12/15/2017 9:15 AM, Thomas Goirand wrote:

Not only that. Everyone is lagging a few releases behind, and currently,
upstream OpenStack doesn't care about backporting to older releases.


Can you clarify this please? The nova team is definitely backporting 
fixes to pike, ocata and newton. Newton isn't EOL yet *because* nova has 
held it up backporting fixes that we think are important enough to get 
in there before we EOL the branch.


If you're talking about LTS, that's a different story, but please don't 
say upstream OpenStack doesn't care about backporting fixes. That might 
be a per-project statement, but in general it's untrue.


--

Thanks,

Matt



[openstack-dev] No networking-sfc meetings until Jan 11

2017-12-15 Thread Henry Fourie
There will be no networking-sfc meetings until Jan 11. Season's greetings.

https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting
- Louis


Re: [openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Emilien Macchi
On Fri, Dec 15, 2017 at 5:04 AM, Arx Cruz  wrote:
[...]
> The goal of this sprint was to enable in quickstart a way to reproduce
> upstream jobs in your personal RDO Cloud tenant, making it easy for developers
> to debug and reproduce their code.

This phrase confused some non-Red-Hat OpenStack contributors on #openstack-tc:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-12-15.log.html#t2017-12-15T15:37:59

2 questions came up:

1) Do we need RDO Cloud access to reproduce TripleO CI jobs?

I think the answer is no. What you need though is an OpenStack cloud,
with the work that is being done here:
https://review.openstack.org/#/c/525743

I'll let the TripleO CI team confirm that; no, you don't need RDO
Cloud access.


2) Can anyone have access to RDO Cloud resources?

One of the reasons for creating RDO Cloud was so that developers
can get resources to build OpenStack.
RDO community organizes something called "test days", where anyone is
welcome to join and test OpenStack on centos7 with RDO packages.
See: 
https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
The event is announced on RDO users mailing list:
https://lists.rdoproject.org/pipermail/users/2017-December/79.html
Other than that, I'm not sure about the process if someone needs
full-time access. FWIW, I never saw any rejection in the past. We
welcome contributors and we want to help how we can.

Any feedback is welcome to improve our transparency and inclusiveness.

[...]

Thanks,
-- 
Emilien Macchi



[openstack-dev] [nova] [placement] resource providers update 45

2017-12-15 Thread Chris Dent


Here's resource provider and placement update 45. I think we'll call
this one the last one of 2017 and I'll start things up again in 2018
on the 5th. Probably with some kind of new numbering scheme so the
subjects don't overlap with this year. Thanks to everyone who has
helped to move placement along this year. We've moved an absolute ton
of code and set the foundation for some long-desired features.

(Apologies if this version of the update is even more riddled with
typos and weird wrong words and bad contractions than normal, I have a
cold. I realize this is a bit like saying infinite (resource classes)
+ 1 is different from infinite (resources classes) - 1.)

# Most Important

The three main themes (nested providers, alternate hosts, migration
allocation doubling) remain the main priorities, but there's a lot of
work alongside these things to make them integrate properly and get
necessary bits (like traits) happening.

# What's Changed

Microversion 1.15 has merged. This adds last-modified headers to those
responses where it makes sense. This was done to increase the degree
to which we are good users of HTTP but doesn't have any immediate
change on functionality, unless you happen to want to use that
last-modified information client side. It does mean, however, that any
other pending microversions are now racing for 1.16.
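
As a side note for anyone poking at this by hand, a request pinned to the new
microversion looks roughly like the following sketch (the endpoint URL and
token handling are simplified placeholders; nova itself goes through
keystoneauth):

import requests

PLACEMENT = 'http://placement.example.com/placement'  # placeholder endpoint
TOKEN = '...'  # normally obtained via keystoneauth

resp = requests.get(
    PLACEMENT + '/resource_providers',
    headers={
        'X-Auth-Token': TOKEN,
        # Pin the placement microversion; 1.15 is where the
        # last-modified headers described above start appearing.
        'OpenStack-API-Version': 'placement 1.15',
        'Accept': 'application/json',
    },
)
print(resp.headers.get('last-modified'))
print(resp.json()['resource_providers'])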

Also, we've made a commitment that the nova-side of the conversation
requires placement microversion 1.14. This means that failovers to
older microversions in requests in the report client are not required.
Matt has started a nova-status check and docs for this:

https://review.openstack.org/#/c/526505/

There's plenty of code in the report client that is now redundant, but
doesn't really hurt anything by being there. If someone is aching for
something to do, it would be a nice cleanup (green bar on the yellow
diff!)

# Help Wanted

(unchanged from last week, no new data, yet)

A takeaway from summit is that we need, where possible, benchmarking
info from people who are making the transition from old methods of
scheduling to the newer allocation_candidate driven modes.  While
detailed numbers will be most useful, even anecdotal summaries of
"woot it's way better" or "hmmm, no it seems worse" are useful.

# Docs

A few more docs improvements merged. Others need a bit more review to
push them over the edge:

* https://review.openstack.org/#/c/512215/
Add create inventories doc for placement

* https://review.openstack.org/#/c/523007/
Add x-openstack-request-id in API ref

* https://review.openstack.org/#/c/511342/
add API reference for create inventory

* https://review.openstack.org/#/c/501252/
 doc: note that custom resources are not fully supported

# Main Themes

## Nested Providers

The nested-resource-providers stack has grown a long tail of changes
for managing nested providers rooted on a compute node:

 https://review.openstack.org/#/q/topic:bp/nested-resource-providers

I'm on the hook to decode some description of this work that Eric gave
me in IRC into a summary of what the end game is with that long tail
of stuff. That's been delayed because of the aforementioned cold, but
I intend to get to it next week. Alongside that will also be some
clarifications about how in_tree is supposed to work when querying
resource providers, as asked by Matt in response to last week's
update. Some of that will go in the placement docs.

While we currently have support for nested providers in
/resource_providers we do not in /allocation_candidates. My
understanding is that Jay has the ball on this.

## Alternate Hosts

Having the scheduler request and use alternate hosts is real close:

https://review.openstack.org/#/q/topic:bp/return-alternate-hosts

Matt has made a change to ironic's CI to depend on this stuff, to see
if reschedules will not work as desired:

https://review.openstack.org/#/c/527289/

## Migration Allocations

The main code to do doubling of migration allocations using an
allocation uuid has merged. I've started the process of changing that
to use the new POST /allocations functionality. Just a WIP at this
point but is mostly working:

https://review.openstack.org/#/c/528089/

## Misc Traits, Shared, Etc Cleanups

There's a stack of code that fixes up a lot of things related to
traits, sharing providers, test additions and fixes to those tests. At
the moment the changes are in a bug topic:

  https://review.openstack.org/#/q/topic:bug/1702420

But that is not the only bug they are addressing. Some of the above
may appear in the list below too.

# Other

Since everyone seems rather busy anyway, once again this week nothing
new is added to the "other" list. I've simply copied over the previous
week's list with anything that's been merged or abandoned removed.

* https://review.openstack.org/#/c/522002/
skip authentication on root URI

* https://review.openstack.org/#/c/519462/
Log options at debug when starting API services 

[openstack-dev] [Ironic] Removal of tempest plugin code from openstack/ironic & openstack/ironic-inspector

2017-12-15 Thread John Villalovos
I wanted to send out a note to any 3rd Party CI or other users of the
tempest plugin code inside either openstack/ironic or
openstack/ironic-inspector. That code has been migrated to the
openstack/ironic-tempest-plugin repository. We have been busily (
https://review.openstack.org/#/q/topic:ironic-tempest-plugin ) migrating
all of the projects to use this new repository.

If you have a 3rd Party CI or something else that is depending on the
tempest plugin code please migrate it to use
openstack/ironic-tempest-plugin.

We plan to remove the tempest plugin code on Tuesday 19-Dec-2017 from
openstack/ironic and openstack/ironic-inspector, and then after that
we will backport those changes to the stable branches.

openstack/ironic Removal patch
https://review.openstack.org/527733

openstack/ironic-inspector Removal patch
https://review.openstack.org/527743


Re: [openstack-dev] [ceilometer] about add transformer into libvirt

2017-12-15 Thread gordon chung


On 2017-12-14 10:38 PM, Jaze Lee wrote:
> That sounds great. When will gnocchi be ready to do the transformers' work?
> If it takes a long time, we will have to move to compute temporarily.

it already exists in gnocchi 4.1+. we just need to change the workflow 
in ceilometer so it integrates with gnocchi.


-- 
gord


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Thomas Goirand
On 12/14/2017 12:44 AM, Clint Byrum wrote:
> We can take stock of the intermediate releases over the last year, and make
> sure they all work together once a year. Chris Jones mentioned that we should
> give users time between milestones and release. I suggest we release an
> intermediary and _support it_. Let distros pick those up when they need to 
> ship
> new features.

I don't think this will happen.

> Let users jump ahead for a few projects when they need the bug fixes.

And with that, I don't agree. New releases aren't for fixing bugs; bugs
should be fixed in stable too, otherwise you face new issues while trying
to get a bugfix. And that's why we have stable.

> I understand the belief that nobody will run the intermediaries.

Not only that. Everyone is lagging a few releases behind, and currently,
upstream OpenStack doesn't care about backporting to older releases.

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-15 Thread Adriano Petrich
I have
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-executions-yaql-function
almost all implemented and I would like to submit an FFE for it.

Cheers,
   Adriano

On Fri, Dec 15, 2017 at 12:01 AM, Tony Breeds 
wrote:

> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
> > I assume since some of this work was sort of done earlier outside of
> > tripleo and does not affect the default installation path that most
> > folks will consume, it shouldn't impact general testing or
> > increase regressions. My general requirement for anyone who needed an
> > FFE for functionality that isn't essential is that it's off by
> > default, has minimal impact to the existing functionality and we have
> > a rough estimate on feature landing.  Do you have an idea of when you expect
> > to land this functionality? Additionally the patches seem to be
> > primarily around the ironic integration so have those been sorted out?
>
> Sadly this is going to be more impactful on x86 than anyone will like,
> and I apologise for not raising these issues before now.
>
> There are 3 main aspects:
> 1. Ironic integration/provisioning setup.
>1.1 Teaching ironic inspector how to deal with ppc64le memory
>detection.  There are a couple of approaches there but they don't
>directly impact tripleo
>1.2 I think there will be some work with puppet-ironic to setup the
>introspection dnsmasq in a way that's compatible with multi-arch.
>Right now this is the introduction of a new tag (based on options
>in the DHCP request) and then sending different responses in the
>presence/absence of that.  Very much akin to the ipxe stuff there
>today.
>1.3 Helping tripleo understand that there is now more than one
>deploy/overcloud image and correctly using that.  These are mostly
>covered with the review Mark published but there are the backwards
>compat/corner cases to deal with.
>1.4 Right now ppc64le has very specific requirements with respect to
>the boot partition layout. Last time I checked these weren't
>handled by default in ironic.  The simple workaround here is to
>make the overcloud image on ppc64le a whole disk rather than a
>single partition and I think given the scope of everything else
>that's the most likely outcome for queens.
>
> 2. Containers.
>Here we run into several issues, not least of which is my general
>lack of understanding of containers but the challenges as I
>understand them are:
>2.1 Having a venue to build/publish/test ppc64le container builds.
>This in many ways is tied to the CI issue below, but all of the
>potential solutions require some container image for ppc64le to
>be available to validate that adding them doesn't impact x86_64.
>2.2 As I understand it the right way to do multi-arch containers is
>with an image manifest or manifest list images[1]  There are so
>many open questions here.
>2.2.1 If the container registry supports manifest lists when we
>  pull them onto the undercloud can we get *all*
>  layers/objects - or will we just get the one that matches
>  the host CPU?
>2.2.2 If the container registry doesn't support manifest list
>  images, can we use something like manifest-tool[2] to pull
>  "nova" from multiple registries or orgs on the same
>  registry and combine them into a single manifest image on
>  the undercloud?
>2.2.3 Do we give up entirely on manifest images and just have
>  multiple images / tags on the undercloud for example:
> nova:latest
> nova:x86_64_latest
> nova:ppc64le_64_latest
>  and have the deployed node pull the $(arch)_latest tag
>  first and if $(arch) == x86_64 pull the :latest tag if the
>  first pull failed?
>2.3 All the things I can't describe/know about 'cause I haven't
>gotten there yet.
> 3. CI
>There isn't any ppc64le CI for tripleo and frankly there won't be in
>the foreseeable future.  Given the CI that's in place on x86 we can
>confidently assert that we won't break x86 but the code paths we add
>for power will largely be untested (beyond unit tests) and any/all
>issues will have to be caught by downstream teams.
>
> So as you can see the aim is to have minimal impact on x86_64 and
> default to the existing behaviour in the absence of anything
> specifically requesting multi-arch support.  but minimal *may* be > none
> :(
>
> As to code ETAs realistically all of the ironic related code will be
> public by m3 but probably not merged, and the containers stuff is
> somewhat dependent on that work / direction from the community on how to
> handle the points I enumerated.
>
>
> Yours Tony.
>
> [1] 

Re: [openstack-dev] [tripleo] Removing old baremetal commands from python-tripleoclient

2017-12-15 Thread Dmitry Tantsur

On 12/15/2017 01:04 PM, Dmitry Tantsur wrote:

On 12/15/2017 04:49 AM, Tony Breeds wrote:

Hi All,
 In review I01837a9daf6f119292b5a2ffc361506925423f11 I updated
ValidateInstackEnv to handle the case when the instackenv.json file
needs to represent a node that doesn't require a pm_user for IPMI to
work.

It turns out that I found that code path with grep rather than as the
result of a deploy step failing.  That's because it's only used for a
command that isn't used anymore, and the validation logic has been moved
to a mistral action.

That led me to look at which of the commands in that file aren't needed
anymore.  If my analysis is correct we have the following commands:

openstack baremetal instackenv validate:
 tripleoclient.v1.baremetal:ValidateInstackEnv
 NOT Deprecated


See below, it can be fixed. But I'd really prefer us to roll it into something 
like "openstack overcloud node import --validate-only".



openstack baremetal import:
 tripleoclient.v1.baremetal:ImportBaremetal
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node import
openstack baremetal introspection bulk start:
 tripleoclient.v1.baremetal:StartBaremetalIntrospectionBulk
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node introspect
openstack baremetal introspection bulk status:
 tripleoclient.v1.baremetal:StatusBaremetalIntrospectionBulk
 NOT Deprecated


This should really be deprecated with "bulk start"..


openstack baremetal configure ready state:
 tripleoclient.v1.baremetal:ConfigureReadyState
 NOT Deprecated


I wonder if this even works. It was introduced long ago, and has never had a lot 
of testing (if at all).



openstack baremetal configure boot:
 tripleoclient.v1.baremetal:ConfigureBaremetalBoot
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node configure


YES PLEASE to all of this. The "baremetal" part often makes users confuse these 
commands with ironicclient commands.




So my questions are basically:
1) Can we remove the deprecated code?
2) Does leaving the non-deprecated commands make sense?
3) Should we deprecate the remaining commands?
4) Do I need to update ValidateInstackEnv or is it okay for it to be
    busted for my use case?


I'm sorry for never getting to it, but the fix should be quite simple. You 
need to drop all its code from tripleoclient and make it use this workflow 
instead: 
https://github.com/openstack/tripleo-common/blob/master/workbooks/baremetal.yaml#L103. 
It is much newer, and is actually used in enrollment as well. If it is also 
broken for you - please fix it. But the code in tripleoclient is long rotten :)
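
For reference, kicking that workflow off directly from Python looks roughly
like this sketch. The workflow name and input key are assumptions on my side;
check workbooks/baremetal.yaml for the exact identifiers:

from mistralclient.api import client as mistral_client

# Undercloud Mistral endpoint and token are placeholders here.
mistral = mistral_client.client(
    mistral_url='http://192.168.24.1:8989/v2',
    auth_token='...',
)

# Workflow name is an assumption; verify it against
# tripleo-common/workbooks/baremetal.yaml before relying on it.
execution = mistral.executions.create(
    'tripleo.baremetal.v1.validate_nodes',
    workflow_input={'nodes_json': [{'pm_type': 'ipmi', 'pm_addr': '10.0.0.8'}]},
)
print(execution.state)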




Yours Tony.



[openstack-dev] [ironic] reminder about deadlines

2017-12-15 Thread Dmitry Tantsur

Hi all!

Half of the Queens cycle is behind us. I'd like to remind everyone about a few deadlines 
and how they affect us:


1. Jan 18th is the non-client library release deadline. It affects ironic-lib 
and sushy. The latter has a few outstanding changes - please try to find some 
time to get them a bit of attention. We have one month left, minus holidays.


2. Jan 25th is the client release deadline. If you work on a new API, it has to 
land before that point, along with its client change, to end up in the Queens ironicclient.


3. Jan 25th is also the feature freeze. We haven't had a formal feature freeze 
for a few cycles. But after troubles with releasing Pike we decided to get back 
to it.


Feature freeze exceptions *may* be given to priority work, low-impact vendor 
changes and low-impact work substantially improving operator's and/or user's 
experience. Feature freeze exceptions are unlikely to be granted to major API 
additions or breaking changes.


If you think your work may be affected by the feature freeze, please start 
planning accordingly.


Thank you and happy winter holidays,
Dmitry



[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-15 Thread Arx Cruz
Hello,

On December 13 we came to the end of the sprint using our new team structure,
and here are the highlights.

Sprint Review:

The goal of this sprint was to enable in quickstart a way to reproduce
upstream jobs in your personal RDO Cloud tenant, making it easy for developers
to debug and reproduce their code.

Unfortunately, due to the RDO Cloud upgrade and upstream infra issues, we
weren't able to finish this sprint successfully, having completed 4 cards,
with 5 waiting to be merged and 3 blocked.

Nevertheless we have reviews [1] and documentation that can be used to
reproduce the upstream jobs manually [2], and your feedback is more than
welcome to improve our future automated script.

One can see the results of the sprint via https://trello.com/c/EWotWxGe/

Tripleo CI community meeting


   - TLS Everywhere Job:
   - Harry is working to have a periodic job running TLS
  - OVB (most recent hash) has the ability to deploy an extra node
  alongside the undercloud
 - https://github.com/cybertron/openstack-virtual-baremetal/commit/c288a1ae973f3b32e9e4481f6604204386cbae9c
 - https://github.com/cybertron/openstack-virtual-baremetal/commit/2e4dd517d1736f004725ee29ac3e4764af246bab
  - Patch is live to deploy extra nodes in te-broker
 - https://review.openstack.org/#/c/512899/
  - OVB migration
  - Blocked due to RDO cloud upgrade
   - RDO Cloud
  - Upgrade process still in progress
   - Kubernetes
  - Will we see kubernetes in CI? Yes
  - There are several scenarios for kubernetes and openshift already
  - Kubernetes jobs are in check queue
  - Openshift jobs are in experimental
  - For more questions, please ping Flavio (flaper87)
   - Upgrades related patches:
  - https://review.openstack.org/#/c/504822/ Support release change
  during playbook run
  - https://review.openstack.org/#/c/504939/ Set repo setup release in
  playbook




Ruck and Rover

What is Ruck and Rover

One person on our team is designated Ruck and another Rover. The Ruck is
responsible for monitoring the CI, checking for failures, opening bugs,
and participating in meetings, and is your focal point for any CI issues.
The Rover is responsible for working on these bugs and fixing problems,
while the rest of the team stays focused on the sprint. For more
information about our structure, check [1]

List of bugs that Ruck and Rover were working on:


   - Undercloud fails with "[ERROR] Could not lock
   /var/run/os-refresh-config.lock"
  - intermittently seen from upstream --> rdophase2; posted a patch to
  add an elastic-recheck query to help identify future instances
  - https://bugs.launchpad.net/tripleo/+bug/1669110
  - https://review.openstack.org/#/c/527559
   - Newton: RDO CI gate and phase1 promotion jobs fail with error
   "publicURL endpoint for workflowv2 service not found" because mistral is
   disabled
  - https://bugs.launchpad.net/tripleo/+bug/1737502
   - http_proxy makes quickstart fail
  - https://bugs.launchpad.net/tripleo/+bug/1736499
   - Need better logging for HA jobs
  - https://bugs.launchpad.net/tripleo/+bug/1695237
  - https://review.openstack.org/#/c/527554/
   - Remove mistral tempest tests
   jenkins-periodic-master-rdo_trunk-virtbasic-1ctlr_1comp_64gb
  - https://bugs.launchpad.net/tripleo/+bug/1736252
   - 1737940 CI: neutron.tests.tempest.scenario.test_qos.QoSTest.test_qos
   fails in pike promotion jobs
   - Fix Released - #1737716 TripleO CI jobs fail: can't connect to
   nodepool hosts
   - #1737688 RDO CI quickstart job logs collection of dlrn logs is broken
   in tripleo-quickstart-extras-gate-master-tripleo-ci-delorean-full-minimal_pacemaker
   - #1737617 master promoter script looping and failing since 12/9 (should
   put a 2 hr timeout on the promoter script - to tech debt for the sprint
   team)
   - #1737568 CI: fs035 OVB job fails to run mysql_init_bundle container
   (still running on RH1 - could fix it there - contact HA people)
   - Fix Released - 1737502 Newton: RDO CI gate and phase1 promotion jobs
   fail with error "publicURL endpoint for workflowv2 service not found"
   because mistral is disabled
   - #1737485 CI: featureset020 (all tempest tests) promotion job fails
   with timeout on mtu test
   - Cron jobs of the promotion scripts are commented out; please uncomment
   them when RDO Cloud is online and stable


We also have our new Ruck and Rover for this week:


   - Ruck
  - John Trowbridge - trown|ruck
   - Rover
  - Ronelle Landy - rlandy|rover


If you have any questions and/or suggestions, please contact us.

[1] https://review.openstack.org/#/c/509280/

[2] https://review.openstack.org/#/c/525743/

[3] 

Re: [openstack-dev] [tripleo] Removing old baremetal commands from python-tripleoclient

2017-12-15 Thread Dmitry Tantsur

On 12/15/2017 04:49 AM, Tony Breeds wrote:

Hi All,
 In review I01837a9daf6f119292b5a2ffc361506925423f11 I updated
ValidateInstackEnv to handle the case when the instackenv.json file
needs to represent a node that doesn't require a pm_user for IPMI to
work.

It turns out that I found that code path with grep rather than as the
result of a deploy step failing.  That's because it's only used for a
command that isn't used anymore, and the validation logic has been moved
to a mistral action.

That led me to look at which of the commands in that file aren't needed
anymore.  If my analysis is correct we have the following commands:

openstack baremetal instackenv validate:
 tripleoclient.v1.baremetal:ValidateInstackEnv
 NOT Deprecated


See below, it can be fixed. But I'd really prefer us to roll it into something 
like "openstack overcloud node import --validate-only".



openstack baremetal import:
 tripleoclient.v1.baremetal:ImportBaremetal
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node import
openstack baremetal introspection bulk start:
 tripleoclient.v1.baremetal:StartBaremetalIntrospectionBulk
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node introspect
openstack baremetal introspection bulk status:
 tripleoclient.v1.baremetal:StatusBaremetalIntrospectionBulk
 NOT Deprecated
openstack baremetal configure ready state:
 tripleoclient.v1.baremetal:ConfigureReadyState
 NOT Deprecated
openstack baremetal configure boot:
 tripleoclient.v1.baremetal:ConfigureBaremetalBoot
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node configure


YES PLEASE to all of this. The "baremetal" part often makes users confuse these 
commands with ironicclient commands.




So my questions are basically:
1) Can we remove the deprecated code?
2) Does leaving the non-deprecated commands make sense?
3) Should we deprecate the remaining commands?
4) Do I need to update ValidateInstackEnv or is it okay for it to be
busted for my use case?


I'm sorry for never getting to it, but the fix should be quite simple. You 
need to drop all of its code from tripleoclient and make it use this workflow 
instead: 
https://github.com/openstack/tripleo-common/blob/master/workbooks/baremetal.yaml#L103. 
It is much newer, and is actually used in enrollment as well. If it is also 
broken for you, please fix it. But the code in tripleoclient is long rotten :)
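
For what it's worth, a rough sketch of triggering that workflow from the client
side could look like the snippet below. The workflow name, input key and sample
data are assumptions for illustration, not the actual tripleoclient code:

    # Hypothetical sketch only: workflow name and input key are assumptions
    # based on the workbook linked above, not verified definitions.
    from mistralclient.api import client as mistral_client

    # Unauthenticated client pointed at a local Mistral API, for illustration.
    mistral = mistral_client.client(mistral_url='http://127.0.0.1:8989/v2')

    nodes_json = [{'pm_type': 'pxe_ipmitool', 'pm_addr': '10.0.0.1',
                   'pm_user': 'admin', 'pm_password': 'secret'}]

    execution = mistral.executions.create(
        'tripleo.baremetal.v1.validate_nodes',      # assumed workflow name
        workflow_input={'nodes_json': nodes_json})  # assumed input key

    print(execution.id, execution.state)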




Yours Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Dmitry Tantsur

On 12/15/2017 11:00 AM, Luigi Toscano wrote:



- Original Message -

On 12/14/2017 9:38 AM, Luigi Toscano wrote:

And the QE in me says that there are enough moving parts around for the
integration testing (because CD yes, but the resources are limited) that a
longer cycle with a longer time between freeze and release is better to
refine the testing part. The cycle as it is, especially for people working
both upstream and downstream at the same time, is complicated enough.


Nothing in this proposal is talking about a longer time between feature
freeze and release, as far as I know (maybe I missed that somewhere).


It does not talk about this. But I think that it does not make sense without 
that.
IMHO.



In that regard, the one year cycle does not make the release any more
stable, it just means a bigger pile of stuff dumped on you when you
finally get it and upgrade.


Not if you commit to work on stabilization only (which in turn would allow us 
to be more confident that PASS jobs mean that things are really working, not 
just in DevStack).



If people could commit to work on stabilization only, we could do the same even 
with a 3-month cycle: 2 months for features and 1 month for stabilization could 
actually work very well.


But so far, 100% of the people I've seen talking about stabilization only talked 
about it when it did not cut their features.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][vitrage] oslo.service 1.28.1 breaks Vitrage gate

2017-12-15 Thread ChangBo Guo
Thanks for raising this. The Oslo team will revert the change in
https://review.openstack.org/#/c/528202/

2017-12-14 23:58 GMT+08:00 Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com>:

> Hi,
>
>
>
> The latest release of oslo.service 1.28.1 breaks the Vitrage gate. We are
> creating several threads and timers [1], but only the first thread is
> executed. We noticed that the Trove project already reported this problem [2].
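
For illustration, a minimal sketch of the threads-and-timers pattern being
described (an assumption of its shape, not the actual Vitrage code from [1]):

    # Minimal sketch, assuming oslo.service's ThreadGroup API; the reported
    # symptom is that only the first callable added to the group runs.
    from oslo_service import threadgroup


    def listen_for_events():
        pass  # placeholder for a long-running worker loop


    def poll_datasources():
        pass  # placeholder for a second worker loop


    def periodic_check():
        pass  # placeholder for a periodic task


    tg = threadgroup.ThreadGroup()
    tg.add_thread(listen_for_events)  # first worker thread
    tg.add_thread(poll_datasources)   # second worker thread, also expected to run
    tg.add_timer(30, periodic_check)  # periodic timer, every 30 seconds
    tg.wait()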
>
>
>
> Please help us fix it.
>
>
>
> Thanks,
>
> Ifat.
>
>
>
> [1] https://github.com/openstack/vitrage/blob/master/vitrage/
> datasources/services.py
>
> [2] https://review.openstack.org/#/c/527755/
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] What nova needs to get to newton end of life

2017-12-15 Thread Lee Yarwood
On 14-12-17 09:15:18, Matt Riedemann wrote:
> I'm not sure how many other projects still have an active stable/newton
> branch, but I know nova is one of them.
> 
> At this point, these are I think the things that need to get done to end of
> life the newton branch for nova:
> 
> 1. We have a set of existing stable/newton backports that need to get
> merged:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton
> 
> 3 of those are related to a CVE errata, and the other is an API regression
> introduced in Newton (trivial low-risk fix).
> 
> Those can't merge until the corresponding Ocata backports are merged first.
> I'll start pinging people for reviews on the Ocata backports.

The Ocata changes have merged and the remaining Newton changes are
approved. I'll keep an eye on these during the day to ensure they land.
 
> 2. Fix and backport https://bugs.launchpad.net/nova/+bug/1738094
> 
> This came up just yesterday but it's an upgrade impact introduced in Newton
> so while we have the branch available I think we should get a fix there
> before EOL. There are going to be at least two fixes for this bug:
> 
> a) Don't store all of the instance group (members and policies) in the
> request_specs table. I think this is a correct fix but I also think because
> of how instance groups and request spec code tends to surprise you with
> funny bugs in funny ways, it's high risk to backport this to newton. Dan has
> a patch started though: https://review.openstack.org/#/c/527799/3

This merged into master so I went ahead and posted the stable backports:

https://review.openstack.org/#/q/topic:bug/1738094+(status:open+OR+status:merged)

 
> b) Alter the request_specs.spec column from TEXT to MEDIUMTEXT, just like
> the build_requests.instance column was increased for similar reasons
> (instance.user_data alone is a MEDIUMTEXT column). This is a straightforward
> schema migration and I think is low risk to backport all the way to
> Newton.

FWIW this is the master change - https://review.openstack.org/#/c/528012/
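
For anyone curious, such a widening migration might look roughly like the
sketch below (assuming the sqlalchemy-migrate style used for the nova API
database at the time; this is illustrative only, not the actual patch above):

    # Hypothetical sketch, not the real change: widen request_specs.spec
    # from TEXT (64KB on MySQL) to MEDIUMTEXT (16MB) so large serialized
    # specs, e.g. ones embedding user_data, still fit.
    import migrate.changeset  # noqa: adds Column.alter() to SQLAlchemy
    from sqlalchemy import MetaData, Table
    from sqlalchemy.dialects.mysql import MEDIUMTEXT


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        request_specs = Table('request_specs', meta, autoload=True)
        if migrate_engine.name == 'mysql':
            # Other backends (e.g. PostgreSQL) already use an unbounded
            # TEXT type, so only MySQL needs the ALTER.
            request_specs.c.spec.alter(type=MEDIUMTEXT())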

Cheers,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, December 15th

2017-12-15 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it!


== Recently-approved changes ==

* Add "goal champions" as a top-5 help wanted area [1][2]
* Rename Shade team to OpenStackSDK, include os-client-config [3][4][5]
* Rename 'top-5' list to 'help most needed' list [6][7]
* Removing unused docs:follows-policy tag [8]
* New repos: mistral-tempest-plugin, congress-tempest-plugin,
self-healing-sig
* Goal updates: mistral, tacker, cloudkitty, octavia, congress, murano

[1] https://review.openstack.org/#/c/527138/
[2] https://review.openstack.org/#/c/510656/
[3] https://review.openstack.org/#/c/523520/
[4] https://review.openstack.org/#/c/523519/
[5] https://review.openstack.org/#/c/524249/
[6] https://review.openstack.org/#/c/520619/
[7] https://review.openstack.org/#/c/527138/
[8] https://review.openstack.org/#/c/524217/

The most significant change this week is the addition of "goal champions" as
one area in our top-5 list, now renamed the "Help most needed" list. Now
that the list is fully formed, we'll ramp up the communication around it
in an effort to point organizations and contributors to the areas most in
need of urgent help. You can find the list at:

https://governance.openstack.org/tc/reference/help-most-needed.html


== Voting in progress ==

Debian packaging activity moved from OpenStack repositories to Debian
repositories, and therefore the repositories (and corresponding project
team) are being retired. This is still missing a couple of votes:

https://review.openstack.org/524732


== Under discussion ==

As you probably noticed, I started a discussion around the possibility
of increasing the length of our "coordinated releases" development
cycle, as a way to reduce overall pressure in development. I'll try to
summarize the thread soon, but in the meantime, if you have a strong
opinion on it, please join the party at:

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125473.html

The discussion started by Graham Hayes to clarify how the testing of
interoperability programs should be organized in the age of add-on
trademark programs is still going on, with most people still trying to
wrap their heads around the various options. We'd welcome more opinions
on that thread, so please chime in on the review:

https://review.openstack.org/521602

Matt Treinish proposed an update to the Python PTI for tests to be
specific and explicit. Wider community input is needed on that topic.
Please review at:

https://review.openstack.org/519751

We still only have one goal proposed for Rocky. We need other proposals
before we can make a call. See the thread:

http://lists.openstack.org/pipermail/openstack-dev/2017-November/124976.html


== TC member actions for the coming week(s) ==

Please brainstorm potential goals for the Rocky cycle, so that we can
move to proposal phase soon.


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect a continuation of the discussion around
one-year cycles and alternate solutions to make openstack development
more accessible to part-time contributors, as well as a bit of live
Rocky goal brainstorming.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Thierry Carrez
Adrian Turjak wrote:
> I worry that moving to a yearly release is actually going to make things
> worse for deployers and there will be less encouragement for them to be
> on more up to date and bug fixed code. Not to mention, no one will trust
> or use the intermediary releases unless they are coordinated and tested
> much like the current release process. That means that anyone is who
> upgrading faster will be forced to wait for yearly releases because they
> are the only ones they know to be 'stable'.
> 
> I'm actually one of the 20% developers upstream (although I'm trying to
> change that), and my experience is actually the opposite. I like the
> shorter release times, I'd find that longer releases will make it much
> harder and longer waits to get anything in. With the 6 month cadence I
> know that if I miss a deadline for one release, the next one is around
> the corner. I've never had issue following up in the next release, and
> often if a feature or bug fix misses a release, in my experience the
> core team does a good job of making it a bit more of a priority. With
> yearly releases I'd be waiting for a year to get my code into a stable
> coordinated release, and then longer for that code to be deployed as
> part of an upgrade to a stable release. And I do often miss release
> deadlines, and that yearly wait would drive me mad.
> [...]

That is excellent feedback. Thanks Adrian!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread Luigi Toscano


- Original Message -
> On 12/14/2017 9:38 AM, Luigi Toscano wrote:
> > And the QE in me says that there are enough moving parts around for the
> > integration testing (because CD yes, but the resources are limited) that a
> > longer cycle with a longer time between freeze and release is better to
> > refine the testing part. The cycle as it is, especially for people working
> > both upstream and downstream at the same time, is complicated enough.
> 
> Nothing in this proposal is talking about a longer time between feature
> freeze and release, as far as I know (maybe I missed that somewhere).

It does not talk about this. But I think that it does not make sense without 
that.
IMHO.

> 
> In that regard, the one year cycle does not make the release any more
> stable, it just means a bigger pile of stuff dumped on you when you
> finally get it and upgrade.

Not if you commit to work on stabilization only (which in turn would allow us 
to be more confident that PASS jobs mean that things are really working, not 
just in DevStack).

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev