Re: [openstack-dev] [Zun] Add Deepak Mourya to the core team

2018-05-14 Thread Kumari, Madhuri
Welcome to the team, Deepak!

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Monday, May 14, 2018 10:00 AM
To: OpenStack Development Mailing List (not for usage questions) 
; deepak.mou...@nectechnologies.in
Subject: [openstack-dev] [Zun] Add Deepak Mourya to the core team

Hi all,

This is an announcement of the following change on the Zun core reviewers team:

+ Deepak Mourya (mourya007)

Deepak has been actively involved in Zun for several months. He has submitted
several code patches to Zun, all of which are useful features or bug fixes. In
particular, I would like to highlight that he has implemented the availability
zone API, which is a significant contribution to the Zun feature set. Based on
his contributions, I would like to propose him as a core reviewer of Zun.

This proposal has been voted on within the existing core team and was unanimously
approved. Welcome to the core team, Deepak.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telemetry] [ceilometer] Ceilometer-file-publisher-compression-csv-format

2018-05-14 Thread gordon chung


On 2018-05-14 3:26 PM, Kwan, Louie wrote:
> Hi All,
> 
> Is there a weekly meeting for Telemetry?  I would like to discuss what we can do
> as the next step for the following review:
> 
> https://review.openstack.org/#/c/562768/
> 
> I pinged on IRC a few times; please advise on the next step.
> 

there are no meetings given sparse participation. regardless, i've added 
a review.

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Cannot live migrate, because error: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_loc

2018-05-14 Thread 何健乐
Hi, Chris: the following is the information from my machines.
virsh capabilities from source host:

[The XML output was stripped in the archive; the recoverable fields show
arch x86_64, CPU model Broadwell, vendor Intel, plus a list of CPU <feature>
entries that did not survive.]


virsh capabilities from destination host:

[The XML output was stripped in the archive; the recoverable fields again show
arch x86_64, CPU model Broadwell, vendor Intel, plus a list of CPU <feature>
entries that did not survive.]



libvirt section in nova.conf from source host:


  [libvirt]
inject_partition=-2
inject_password=False
disk_cachemodes=network=writeback
cpu_mode=host-model
virt_type=kvm
inject_key=False
images_rbd_pool=vms
rbd_secret_uuid=43518166-15c4-420f-aa11-e0a681e0e459
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
hw_disk_discard=unmap
rbd_user=admin
live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity




libvirt section in nova.conf from destination host:


[libvirt]
inject_partition=-2
inject_password=False
disk_cachemodes=network=writeback
cpu_mode=host-model
virt_type=kvm
inject_key=False
images_rbd_pool=vms
rbd_secret_uuid=43518166-15c4-420f-aa11-e0a681e0e459
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
hw_disk_discard=unmap
rbd_user=admin
live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
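
One way to check this kind of CPU-feature mismatch between two hosts is libvirt's
compareCPU() call, which asks a host whether it can provide a given CPU definition.
The sketch below is illustrative only; the connection URIs and the crude extraction
of the host <cpu> element from the capabilities XML are assumptions, not details
taken from the report above.

import libvirt


def host_cpu_xml(conn):
    # "virsh capabilities" includes a <cpu> element describing the host CPU;
    # a real tool would parse the XML properly instead of slicing text.
    caps = conn.getCapabilities()
    start = caps.index('<cpu>')
    end = caps.index('</cpu>') + len('</cpu>')
    return caps[start:end]


# Hypothetical URIs for the two hosts in question.
source = libvirt.openReadOnly('qemu+ssh://source-host/system')
dest = libvirt.openReadOnly('qemu+ssh://dest-host/system')

# Ask the destination whether it can provide the source host's CPU.
result = dest.compareCPU(host_cpu_xml(source), 0)
if result == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
    print('destination cannot provide the source host CPU (features missing)')
else:
    print('destination CPU is identical to or a superset of the source CPU')

With cpu_mode=host-model (as in both nova.conf snippets above), the guest CPU is
modelled on the source host, so a destination that lacks features such as cmt and
mbm_total will refuse the incoming migration with the error quoted in the subject.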


Re: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team?

2018-05-14 Thread Tony Breeds
On Fri, May 11, 2018 at 08:37:43AM -0400, Julia Kreger wrote:
> On Fri, May 11, 2018 at 8:20 AM, Dmitry Tantsur  wrote:
> > Hi,
> [trim]
> >> If there are no objections, I'll re-add him next week.
> >
> >
> > I don't remember if we actually can add people to these teams or it has to
> > be done by the main stable team.
> >
> I'm fairly sure I'm the person who deleted him from the group in the
> first place :(   As such, I think I has the magical powers... maybe
> ;)

I'm not sure you do have access to do that as the group is owned by
stable-main-core.

That being said, I've re-added Jim.  Technically it's next week now :)

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-14 Thread Lance Bragstad


On 05/14/2018 05:46 PM, Doug Hellmann wrote:
> Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500:
>> On 05/14/2018 02:24 PM, Doug Hellmann wrote:
>>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
 On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
>
> Both of those are good ideas.
>
>
> Agree. I like the socket idea a bit more as I can imagine some
> operators don't want config file changes automatically applied. Do we
> want to choose one to standardize on or allow each project (or
> operators, via config) the choice?
 Just to recap, keystone would be listening for when its configuration
 file changes, and reinitialize the logger if the logging settings
 changed, correct?
>>> Sort of.
>>>
>>> Keystone would need to do something to tell oslo.config to re-load the
>>> config files. In services that rely on oslo.service, this is handled
>>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
>>> for Keystone you would want to do something similar.
>>>
>>> That is, you want to wait for an explicit notification from the operator
>>> that you should reload the config, and not just watch for the file to
>>> change. We could talk about using file modification as a trigger, but
>>> reloading is something that may need to be staged across several
>>> services in order so we chose for the first version to make the trigger
>>> explicit. Relying on watching files will also fail when the modified
>>> data is not in a file (which will be possible when we finish the driver
>>> work described in
>>> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html).
>> Hmm, these are good points. I wonder if just converting to use
>> oslo.service would be a lower bar then?
> I thought keystone had moved away from that direction toward deploying
> only within Apache? I may be out of touch, or have misunderstood
> something, though.

Oh - never mind... For some reason I was thinking there was a way to use
oslo.service and Apache.

Either way, I'll do some more digging before tomorrow. I have this as a
topic on keystone's meeting agenda to go through our options [0]. If we
do come up with something that doesn't involve intercepting signals
(specifically for the reason noted by Kristi and Jim in the mod_wsgi
documentation), should the community goal be updated to include that
option? Just thinking that we can't be the only service in this position.

[0] https://etherpad.openstack.org/p/keystone-weekly-meeting

>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-14 Thread Doug Hellmann
Excerpts from Ivan Kolodyazhny's message of 2018-05-14 22:20:42 +0300:
> Hi all,
> 
> From the Horizon perspective, it would be good to support Django 1.11 as
> long as we can since it's an LTS release [2].
> Django 2.0 support is also extremely important because it's the first
> step toward a python3-only environment and a step forward in supporting the
> next Django 2.2 LTS release, which will be released next April.
> 
> We have to be careful not to break existing plugins and deployments by
> introducing a new Django version requirement.
> We need to work more closely with plugin teams to get everything ready
> for Django 2.0+ before we change our requirements.txt.
> I don't want to introduce any breaking changes for current plugins, so we
> need to be sure that each plugin supports Django 2.0. That means
> plugins have to have voting Django 2.0 jobs on their gates at least. I'll
> do my best on this effort and will work with plugin teams to do as
> much as we can in the Rocky timeframe.

That sounds like a good plan, thanks Ivan.

Doug

> 
> [2] https://www.djangoproject.com/download/
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki  wrote:
> 
> >
> >
> > On Mon, 14 May 2018 at 21:42, Doug Hellmann wrote:
> >
> >> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900:
> >> > On Sat, 12 May 2018 at 3:04, Doug Hellmann wrote:
> >> >
> >> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> >> > > > Hi zigo and horizon plugin maintainers,
> >> > > >
> >> > > > Horizon itself already supports Django 2.0 and horizon unit test
> >> covers
> >> > > > Django 2.0 with Python 3.5.
> >> > > >
> >> > > > A question to all is whether we change the upper bound of Django
> >> from
> >> > > <2.0
> >> > > > to <2.1.
> >> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> >> > > > (Note that Django 1.11 will continue to be used for python 2.7
> >> > > environment.)
> >> > >
> >> > > Do we need to cap it at all? We've been trying to express our
> >> > > dependencies without caps and rely on the constraints list to
> >> > > test using a common version because this offers the most flexibility
> >> as
> >> > > we move to newer versions over time.
> >> > >
> >> >
> >> > The main reason we cap django version so far is that django minor
> >> version
> >> > releases
> >> > contain some backward incompatible changes and also drop deprecated
> >> > features.
> >> > A new django minor version release like 1.11 usually breaks horizon and
> >> > plugins
> >> > as horizon developers are not always checking django deprecations.
> >>
> >> OK. Having the cap in place makes it more complicated to test
> >> upgrading, and then upgrade. Because we no longer synchronize
> >> requirements, changing openstack/requirements does not trigger the
> >> bot to propose the same change to all of the projects using the
> >> dependency. Someone will have to do that by hand in the future, as we
> >> are doing with eventlet right now
> >> (https://review.openstack.org/#/q/topic:uncap-eventlet).
> >>
> >> Without the cap, we can test the upgrade by proposing a constraint
> >> update and running the horizon (and/or plugin) unit tests. When those
> >> tests pass, we can then step forward all at once by approving the
> >> constraint change.
> >>
> >
> > Thanks for the detail context.
> >
> > Honestly I am not sure which is better to cap or uncap the django version.
> > We can try uncapping now and see what happens in the community.
> >
> > cross-horizon-(py27|py35) jobs of openstack/requirements checks
> > if horizon works with a new version. it works for horizon, but perhaps it
> > potentially
> > break horizon plugins as it takes time to catch up with such changes.
> > On the other hand, a version bump in upper-constraints.txt would be
> > a good trigger for horizon plugin maintainers to sync all requirements.
> >
> > In addition, requirements are not synchronized automatically,
> > so it seems not feasible to propose requirements changes per django
> > version change.
> >
> >
> >>
> >> >
> >> > I have a question on uncapping the django version.
> >> > How can users/operators know which versions are supported?
> >> > Do they need to check upper-constraints.txt?
> >>
> >> We do tell downstream consumers that the upper-constraints.txt file is
> >> the set of things we test with, and that any other combination of
> >> packages would need to be tested on their systems separately.
> >>
> >> >
> >> > > > There are several points we should consider:
> >> > > > - If we change it in global-requirements.txt, it means Django 2.0
> >> will be
> >> > > > used for python3.5 environment.
> >> > > > - Not a small number of horizon plugins still do not support Django
> >> 2.0,
> >> > > so
> >> > > > bumping the upper bound to <2.1 will break their py35 tests.
> >> > > > - From my experience of Django 2.0 support in some 

Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-14 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500:
> 
> On 05/14/2018 02:24 PM, Doug Hellmann wrote:
> > Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
> >> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> >>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
> >>>
> >>> Both of those are good ideas.
> >>>
> >>>
> >>> Agree. I like the socket idea a bit more as I can imagine some
> >>> operators don't want config file changes automatically applied. Do we
> >>> want to choose one to standardize on or allow each project (or
> >>> operators, via config) the choice?
> >> Just to recap, keystone would be listening for when its configuration
> >> file changes, and reinitialize the logger if the logging settings
> >> changed, correct?
> > Sort of.
> >
> > Keystone would need to do something to tell oslo.config to re-load the
> > config files. In services that rely on oslo.service, this is handled
> > with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
> > for Keystone you would want to do something similar.
> >
> > That is, you want to wait for an explicit notification from the operator
> > that you should reload the config, and not just watch for the file to
> > change. We could talk about using file modification as a trigger, but
> > reloading is something that may need to be staged across several
> > services in order so we chose for the first version to make the trigger
> > explicit. Relying on watching files will also fail when the modified
> > data is not in a file (which will be possible when we finish the driver
> > work described in
> > http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html).
> 
> Hmm, these are good points. I wonder if just converting to use
> oslo.service would be a lower bar then?

I thought keystone had moved away from that direction toward deploying
only within Apache? I may be out of touch, or have misunderstood
something, though.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zuul repo insertion in update/upgrade CI

2018-05-14 Thread Wesley Hayutin
On Mon, May 14, 2018 at 11:36 AM Jiří Stránský  wrote:

> Hi,
>
> this is mainly for CI folks and whom-it-may-concern.
>
> Recently we came across the topic of how to enable/disable zuul repos at
> various places in the CI jobs. For normal deploy jobs there's no need to
> customize, but for update/upgrade jobs there is. It's not entirely
> straightforward and there's quite a variety of enable/disable spots and
> combinations which can be useful.
>
> Even though improvements in this area are not very likely to get
> implemented right away, i had some thoughts on the topic so i wanted to
> capture them. I put the ideas into an etherpad:
>
> https://etherpad.openstack.org/p/tripleo-ci-zuul-repo-insertion
>
> Feel free to put some more thoughts there or ping me on IRC with
> anything related.
>
>
> Thanks
>
> Jirka
>
>
Thanks Jirka!!


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Wesley Hayutin
On Sun, May 13, 2018 at 11:50 PM Tristan Cacqueray 
wrote:

> On May 14, 2018 2:44 am, Wesley Hayutin wrote:
> [snip]
> > I do think it would be helpful to say have a one week change window where
> > folks are given the opportunity to preflight check a new image and the
> > potential impact on the job workflow the updated image may have.
> [snip]
>
> How about adding a periodic job that setup centos-release-cr in a pre
> task? This should highlight issues with up-coming updates:
> https://wiki.centos.org/AdditionalResources/Repositories/CR
>
> -Tristan
>

Thanks for the suggestion Tristan, going to propose using this repo at the
next TripleO mtg.

Thanks


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] forum etherpads now available ...

2018-05-14 Thread Jay S Bryant

All,

I have etherpads created for our Cinder related Forum discussions:

 * Tuesday, 5/22 11:00 to 11:40 - Room 221-222 - Cinder High
   Availability (HA) Discussion
   - https://etherpad.openstack.org/p/YVR18-cinder-ha-forum
 * Tuesday, 5/22 11:50 to 12:30 - Room 221-222 - Multi-attach
   Introduction and Future Direction
   - https://etherpad.openstack.org/p/YVR18-cinder-mutiattach-forum
 * Wednesday, 5/23 9:40 to 10:30 - Room 221-222 - Cinder's
   Documentation Discussion
   - https://etherpad.openstack.org/p/YVR18-cinder-documentation-forum

We also have the session on using the placement service:

 * Monday 5/21 16:20 to 17:00 - Planning to use Placement in
   Cinder - https://etherpad.openstack.org/p/YVR-cinder-placement

Please take some time to look at the etherpads before the forum and add 
your thoughts/questions for discussion.


Thank you!

Jay Bryant

(jungleboyj)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster

2018-05-14 Thread John Dickinson



On 14 May 2018, at 13:43, Pete Zaitcev wrote:


On Thu, 10 May 2018 20:07:03 +0800
Yuxin Wang  wrote:

I'm working on a swift project. Our customer cares about S3 
compatibility very much. I tested our swift cluster with 
ceph/s3-tests and analyzed the failed cases. It turns out that lots 
of the failed cases are related to unique container/bucket. But as we 
know, containers are just unique in a tenant/project.

[...]
Do you have any ideas on how to do or maybe why not to do? I'd highly 
appreciate any suggestions.


I don't have a recipe, but here's a thought: try making all the accounts
that need the interoperability with S3 belong to the same Keystone tenant.

As long as you do not give those accounts the owner role (one of those
listed in operator_roles=), they will not be able to access each other's
buckets (Swift containers). Unfortunately, I think they will not be able
to create any buckets either, but perhaps it's something that can be
tweaked - for sure if you're willing to go far enough to make new
middleware.


-- Pete




Pete's idea is interesting. The upstream Swift community has talked 
about what it will take to support this sort of S3 compatibility, and 
we've got some pretty good ideas. We'd love your help to implement 
something. You can find us in #openstack-swift in freenode IRC.


As a general overview, swift3 (which has now been integrated into 
Swift's repo as the "s3api" middleware) maps S3 buckets to a unique 
(account, container) pair in Swift. This mapping is critical because the 
Swift account plays a part in Swift's data placement algorithm. This 
allows both you and me to have an "images" container in the same 
Swift cluster in our respective accounts. However, AWS doesn't have an 
exposed "thing" that's analogous to the account. In order to fill in 
this missing info, we have to map the S3 bucket name to the appropriate 
(account, container) pair in Swift. Currently, the s3api middleware does 
this by encoding the account name into the auth token. This way, when 
you and I are each accessing our own "images" container as a bucket via 
the S3 API, our requests go to the right place and do the right thing.


This mapping technique has a couple of significant limits. First, we 
can't do the mapping without the token, so unauthenticated (ie public) 
S3 API calls can never work. Second, bucket names are not unique. This 
second issue may or may not be a bug. In your case, it's an issue, but 
it may be of benefit to others. Either way, it's a difference from the 
way S3 works.


In order to fix this, we need a new way to do the bucket->(account, 
container) mapping. One idea is to have a key-value registry. There may 
be other ways to solve this too, but it's not a trivial change. We'd 
welcome your help in figuring out the right solution!
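
As a purely illustrative sketch of that key-value registry idea (plain Python
with invented names, not actual Swift or s3api code):

# Toy registry mapping a globally unique S3 bucket name to the Swift
# (account, container) pair that backs it. Invented for illustration only.
class BucketRegistry(object):

    def __init__(self):
        # A real registry would live in shared, durable storage so that
        # every proxy (and unauthenticated requests) could resolve it.
        self._buckets = {}

    def create(self, bucket, account, container):
        # Enforce S3-style global uniqueness of bucket names.
        if bucket in self._buckets:
            raise ValueError('bucket name already taken: %s' % bucket)
        self._buckets[bucket] = (account, container)

    def resolve(self, bucket):
        # Called per S3 API request to find the backing Swift location.
        return self._buckets[bucket]


registry = BucketRegistry()
registry.create('images', 'AUTH_alice', 'images')
print(registry.resolve('images'))  # ('AUTH_alice', 'images')

The hard part is not the data structure but making such a registry durable and
consistent across every proxy in the cluster, which is part of why it's not a
trivial change.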


--John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster

2018-05-14 Thread Pete Zaitcev
On Thu, 10 May 2018 20:07:03 +0800
Yuxin Wang  wrote:

> I'm working on a swift project. Our customer cares about S3 compatibility 
> very much. I tested our swift cluster with ceph/s3-tests and analyzed the 
> failed cases. It turns out that lots of the failed cases are related to 
> unique container/bucket. But as we know, containers are just unique in a 
> tenant/project.
>[...]
> Do you have any ideas on how to do or maybe why not to do? I'd highly 
> appreciate any suggestions.

I don't have a recipe, but here's a thought: try making all the accounts
that need the interoperability with S3 belong to the same Keystone tenant.
As long as you do not give those accounts the owner role (one of those
listed in operator_roles=), they will not be able to access each other's
buckets (Swift containers). Unfortunately, I think they will not be able
to create any buckets either, but perhaps it's something that can be
tweaked - for sure if you're willing to go far enough to make new middleware.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-14 Thread Lance Bragstad


On 05/14/2018 02:24 PM, Doug Hellmann wrote:
> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
>> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
>>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
>>>
>>> Both of those are good ideas.
>>>
>>>
>>> Agree. I like the socket idea a bit more as I can imagine some
>>> operators don't want config file changes automatically applied. Do we
>>> want to choose one to standardize on or allow each project (or
>>> operators, via config) the choice?
>> Just to recap, keystone would be listening for when its configuration
>> file changes, and reinitialize the logger if the logging settings
>> changed, correct?
> Sort of.
>
> Keystone would need to do something to tell oslo.config to re-load the
> config files. In services that rely on oslo.service, this is handled
> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
> for Keystone you would want to do something similar.
>
> That is, you want to wait for an explicit notification from the operator
> that you should reload the config, and not just watch for the file to
> change. We could talk about using file modification as a trigger, but
> reloading is something that may need to be staged across several
> services in order so we chose for the first version to make the trigger
> explicit. Relying on watching files will also fail when the modified
> data is not in a file (which will be possible when we finish the driver
> work described in
> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html).

Hmm, these are good points. I wonder if just converting to use
oslo.service would be a lower bar then?

>
>> Would that suffice for the goal? We'd be explicit in checking for
>> logging option changes, so modifications to other configuration options
>> shouldn't affect anything, should they?
> Yes, oslo.config deals with all of that.
>
> Each configuration option has a flag saying whether or not it is
> mutable (defaults to False). When oslo.config is told to "mutate",
> it reloads the data sources and reports as warnings any config
> options that changed that are not mutable.
>
> For any options that are marked mutable and have been changed, it
> calls the "mutate hooks" that have been registered by calling
> ConfigOpts.register_mutate_hook(), passing some information about
> which options changed and what changes were made.
>
> There's a little more information in
> https://docs.openstack.org/oslo.config/latest/reference/mutable.html but
> I notice that does not cover the hooks. The one for oslo.log is in
> http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/log.py#n229
>
> For the goal, however, all you need to do is set up some way to trigger
> the call to mutate_config_files() and then document that.
>
>>> I believe adding those things to oslo.service would make them
>>> available to all applications. 
>>>
>>>
>>> Not necessarily - this discussion started when the Keystone team was
>>> discussing how to implement this, given that keystone doesn't use
>>> oslo.service. That said, it should be easy to implement in services
>>> that don't want this dependency, so +1.
>>>
>>> // jim
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-14 Thread Alex Schultz
On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya  wrote:
> An update for your review please folks
>
>> Bogdan Dobrelya  writes:
>>
>>> Hello.
>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>> "post"  may be altered for more advanced pipelines. Is it doable to
>>> introduce, for particular openstack projects, multiple check
>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>> the consequent steps reusing environments from the previous steps
>>> finished with?
>>>
>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve
>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>> is reducing (ideally, de-duplicating) some of the common steps for
>>> existing CI jobs.
>>
>>
>> What you're describing sounds more like a job graph within a pipeline.
>> See:
>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>> for how to configure a job to run only after another job has completed.
>> There is also a facility to pass data between such jobs.
>>
>> ... (skipped) ...
>>
>> Creating a job graph to have one job use the results of the previous job
>> can make sense in a lot of cases.  It doesn't always save *time*
>> however.
>>
>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>> choice not to have long-running integration jobs depend on shorter pep8
>> or tox jobs, and that's because we value developer time more than CPU
>> time.  We would rather run all of the tests and return all of the
>> results so a developer can fix all of the errors as quickly as possible,
>> rather than forcing an iterative workflow where they have to fix all the
>> whitespace issues before the CI system will tell them which actual tests
>> broke.
>>
>> -Jim
>
>
> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
> undercloud deployments vs upgrades testing (and some more). Given that those
> undercloud jobs do not have very high failure rates though, I think Emilien is
> right in his comments and those would buy us nothing.
>
> On the other hand, what do you folks think of making
> tripleo-ci-centos-7-3nodes-multinode depend on
> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
> failure-prone and long running, and is non-voting. It deploys (see featureset
> configs [3]*) 3 nodes in HA fashion. And it seems to almost never pass when
> containers-multinode fails - see the CI stats page [4]. I've found only 2
> cases there of the opposite situation, where containers-multinode fails
> but 3nodes-multinode passes. So cutting off those future failures via the
> added dependency *would* buy us something and allow other jobs to wait less
> to commence, at the reasonable price of a somewhat extended main zuul
> pipeline time. I think it makes sense, and that the extended CI time will not
> add so much overhead to the RDO CI execution times as to become a problem. WDYT?
>

I'm not sure it makes sense to add a dependency on other deployment
tests. It's going to add additional time to the CI run because the
upgrade won't start until well over an hour after the rest of the
jobs.  The only thing I could think of where this makes more sense is
to delay the deployment tests until the pep8/unit tests pass.  e.g.
let's not burn resources when the code is bad. There might be
arguments about lack of information from a deployment when developing
things but I would argue that the patch should be vetted properly
first in a local environment before taking CI resources.

Thanks,
-Alex

> [0] https://review.openstack.org/#/c/568275/
> [1] https://review.openstack.org/#/c/568278/
> [2] https://review.openstack.org/#/c/568326/
> [3]
> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
> [4] http://tripleo.org/cistatus.html
>
> * ignore the column 1, it's obsolete, all CI jobs now using configs download
> AFAICT...
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telemetry] [ceilometer] Ceilometer-file-publisher-compression-csv-format

2018-05-14 Thread Kwan, Louie
Hi All,

Is there a weekly meeting for Telemetry?  I would like to discuss what we can do
as the next step for the following review:

https://review.openstack.org/#/c/562768/

I pinged on IRC a few times; please advise on the next step.

Thanks.
Louie


From: Kwan, Louie
Sent: Friday, May 04, 2018 10:03 AM
To: openstack-dev@lists.openstack.org; julien.dan...@enovance.com
Subject: [openstack-dev] [Telemetry] [ceilometer] 
Ceilometer-file-publisher-compression-csv-format

Reaching out to Rocky PTL and others.

What could be the next step?

Thanks.
Louie

-Original Message-
From: Kwan, Louie [mailto:louie.k...@windriver.com]
Sent: Monday, April 23, 2018 4:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ceilometer] 
Ceilometer-file-publisher-compression-csv-format

Submitted the following review on April 19,

https://review.openstack.org/#/c/562768/

I would like to know who else could be on the reviewer list, and anything else
needed for the next step.

Thanks.
Louie
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-14 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
> 
> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> >
> > On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
> >
> > Both of those are good ideas.
> >
> >
> > Agree. I like the socket idea a bit more as I can imagine some
> > operators don't want config file changes automatically applied. Do we
> > want to choose one to standardize on or allow each project (or
> > operators, via config) the choice?
> 
> Just to recap, keystone would be listening for when its configuration
> file changes, and reinitialize the logger if the logging settings
> changed, correct?

Sort of.

Keystone would need to do something to tell oslo.config to re-load the
config files. In services that rely on oslo.service, this is handled
with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
for Keystone you would want to do something similar.

That is, you want to wait for an explicit notification from the operator
that you should reload the config, and not just watch for the file to
change. We could talk about using file modification as a trigger, but
reloading is something that may need to be staged across several
services in order so we chose for the first version to make the trigger
explicit. Relying on watching files will also fail when the modified
data is not in a file (which will be possible when we finish the driver
work described in
http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html).

> 
> Would that suffice for the goal? We'd be explicit in checking for
> logging option changes, so modifications to other configuration options
> shouldn't affect anything, should they?

Yes, oslo.config deals with all of that.

Each configuration option has a flag saying whether or not it is
mutable (defaults to False). When oslo.config is told to "mutate",
it reloads the data sources and reports as warnings any config
options that changed that are not mutable.

For any options that are marked mutable and have been changed, it
calls the "mutate hooks" that have been registered by calling
ConfigOpts.register_mutate_hook(), passing some information about
which options changed and what changes were made.

There's a little more information in
https://docs.openstack.org/oslo.config/latest/reference/mutable.html but
I notice that does not cover the hooks. The one for oslo.log is in
http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/log.py#n229

For the goal, however, all you need to do is set up some way to trigger
the call to mutate_config_files() and then document that.
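
For illustration, a minimal sketch of that wiring for a service that does not use
oslo.service (an outline only, not keystone's actual implementation; the hook here
just reports what changed):

import signal
import time

from oslo_config import cfg

CONF = cfg.CONF


def report_mutation(conf, fresh):
    # 'fresh' describes the mutable options that changed; the oslo.log hook
    # linked above uses the same mechanism to re-apply the debug setting.
    print('mutable options changed: %r' % (fresh,))


def handle_sighup(signum, frame):
    # Re-read the configuration sources. Changes to immutable options are
    # only reported as warnings; changes to mutable options run the hooks.
    CONF.mutate_config_files()


if __name__ == '__main__':
    # In a real service this would load the service's config file(s).
    CONF(args=[], project='example')
    CONF.register_mutate_hook(report_mutation)
    signal.signal(signal.SIGHUP, handle_sighup)
    while True:
        time.sleep(60)  # wait for the operator to send SIGHUP

oslo.service does essentially the same thing in its SIGHUP handler, so documenting
whatever trigger you choose is the remaining part of the goal.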

> 
> >
> > I believe adding those things to oslo.service would make them
> > available to all applications. 
> >
> >
> > Not necessarily - this discussion started when the Keystone team was
> > discussing how to implement this, given that keystone doesn't use
> > oslo.service. That said, it should be easy to implement in services
> > that don't want this dependency, so +1.
> >
> > // jim
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-14 Thread Ivan Kolodyazhny
Hi all,

From the Horizon perspective, it would be good to support Django 1.11 as
long as we can since it's an LTS release [2].
Django 2.0 support is also extremely important because it's the first
step toward a python3-only environment and a step forward in supporting the
next Django 2.2 LTS release, which will be released next April.

We have to be careful not to break existing plugins and deployments by
introducing a new Django version requirement.
We need to work more closely with plugin teams to get everything ready
for Django 2.0+ before we change our requirements.txt.
I don't want to introduce any breaking changes for current plugins, so we
need to be sure that each plugin supports Django 2.0. That means
plugins have to have voting Django 2.0 jobs on their gates at least. I'll
do my best on this effort and will work with plugin teams to do as
much as we can in the Rocky timeframe.

[2] https://www.djangoproject.com/download/

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki  wrote:

>
>
> On Mon, 14 May 2018 at 21:42, Doug Hellmann wrote:
>
>> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900:
>> > On Sat, 12 May 2018 at 3:04, Doug Hellmann wrote:
>> >
>> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
>> > > > Hi zigo and horizon plugin maintainers,
>> > > >
>> > > > Horizon itself already supports Django 2.0 and horizon unit test
>> covers
>> > > > Django 2.0 with Python 3.5.
>> > > >
>> > > > A question to all is whether we change the upper bound of Django
>> from
>> > > <2.0
>> > > > to <2.1.
>> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
>> > > > (Note that Django 1.11 will continue to be used for python 2.7
>> > > environment.)
>> > >
>> > > Do we need to cap it at all? We've been trying to express our
>> > > dependencies without caps and rely on the constraints list to
>> > > test using a common version because this offers the most flexibility
>> as
>> > > we move to newer versions over time.
>> > >
>> >
>> > The main reason we cap django version so far is that django minor
>> version
>> > releases
>> > contain some backward incompatible changes and also drop deprecated
>> > features.
>> > A new django minor version release like 1.11 usually breaks horizon and
>> > plugins
>> > as horizon developers are not always checking django deprecations.
>>
>> OK. Having the cap in place makes it more complicated to test
>> upgrading, and then upgrade. Because we no longer synchronize
>> requirements, changing openstack/requirements does not trigger the
>> bot to propose the same change to all of the projects using the
>> dependency. Someone will have to do that by hand in the future, as we
>> are doing with eventlet right now
>> (https://review.openstack.org/#/q/topic:uncap-eventlet).
>>
>> Without the cap, we can test the upgrade by proposing a constraint
>> update and running the horizon (and/or plugin) unit tests. When those
>> tests pass, we can then step forward all at once by approving the
>> constraint change.
>>
>
> Thanks for the detail context.
>
> Honestly I am not sure which is better to cap or uncap the django version.
> We can try uncapping now and see what happens in the community.
>
> cross-horizon-(py27|py35) jobs of openstack/requirements checks
> if horizon works with a new version. it works for horizon, but perhaps it
> potentially
> break horizon plugins as it takes time to catch up with such changes.
> On the other hand, a version bump in upper-constraints.txt would be
> a good trigger for horizon plugin maintainers to sync all requirements.
>
> In addition, requirements are not synchronized automatically,
> so it seems not feasible to propose requirements changes per django
> version change.
>
>
>>
>> >
>> > I have a question on uncapping the django version.
>> > How can users/operators know which versions are supported?
>> > Do they need to check upper-constraints.txt?
>>
>> We do tell downstream consumers that the upper-constraints.txt file is
>> the set of things we test with, and that any other combination of
>> packages would need to be tested on their systems separately.
>>
>> >
>> > > > There are several points we should consider:
>> > > > - If we change it in global-requirements.txt, it means Django 2.0
>> will be
>> > > > used for python3.5 environment.
>> > > > - Not a small number of horizon plugins still do not support Django
>> 2.0,
>> > > so
>> > > > bumping the upper bound to <2.1 will break their py35 tests.
>> > > > - From my experience of Django 2.0 support in some plugins, the
>> required
>> > > > changes are relatively simple like [1].
>> > > >
>> > > > I created an etherpad page to track Django 2.0 support in horizon
>> > > plugins.
>> > > > https://etherpad.openstack.org/p/django20-support
>> > > >
>> > > > I proposed Django 2.0 support patches to several projects which I
>> think
>> > > are
>> > > > 

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-14 Thread Sagi Shnaidman
Hi, Bogdan

I like the idea of the undercloud job. Actually, if the undercloud fails, I'd stop
all other jobs, because it doesn't make sense to run them. Seeing the same
failure in 10 jobs doesn't add much. So maybe adding the undercloud job as a
dependency for all multinode jobs would be a great idea. I think it's also worth
checking how long it will delay jobs. Will all jobs wait until the
undercloud job has run? Or will they be aborted when the undercloud job
fails?

However, I'm very sceptical about the multinode containers and scenarios jobs;
they could fail for very different reasons, like race conditions in
the product or infra issues. Skipping some of them will lead to more
rechecks from devs trying to discover all problems in a row, which will
delay the development process significantly.

Thanks


On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya 
wrote:

> An update for your review please folks
>
> Bogdan Dobrelya  writes:
>>
>> Hello.
>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>> "post"  may be altered for more advanced pipelines. Is it doable to
>>> introduce, for particular openstack projects, multiple check
>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>> the consequent steps reusing environments from the previous steps
>>> finished with?
>>>
>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve
>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>> is reducing (ideally, de-duplicating) some of the common steps for
>>> existing CI jobs.
>>>
>>
>> What you're describing sounds more like a job graph within a pipeline.
>> See: https://docs.openstack.org/infra/zuul/user/config.html#attr-
>> job.dependencies
>> for how to configure a job to run only after another job has completed.
>> There is also a facility to pass data between such jobs.
>>
>> ... (skipped) ...
>>
>> Creating a job graph to have one job use the results of the previous job
>> can make sense in a lot of cases.  It doesn't always save *time*
>> however.
>>
>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>> choice not to have long-running integration jobs depend on shorter pep8
>> or tox jobs, and that's because we value developer time more than CPU
>> time.  We would rather run all of the tests and return all of the
>> results so a developer can fix all of the errors as quickly as possible,
>> rather than forcing an iterative workflow where they have to fix all the
>> whitespace issues before the CI system will tell them which actual tests
>> broke.
>>
>> -Jim
>>
>
> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
> undercloud deployments vs upgrades testing (and some more). Given that
> those undercloud jobs do not have very high failure rates though, I think
> Emilien is right in his comments and those would buy us nothing.
>
> On the other hand, what do you folks think of making
> tripleo-ci-centos-7-3nodes-multinode depend on
> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
> failure-prone and long running, and is non-voting. It deploys (see featureset
> configs [3]*) 3 nodes in HA fashion. And it seems to almost never pass
> when containers-multinode fails - see the CI stats page [4]. I've found
> only 2 cases there of the opposite situation, where containers-multinode
> fails but 3nodes-multinode passes. So cutting off those future failures
> via the added dependency *would* buy us something and allow other jobs to
> wait less to commence, at the reasonable price of a somewhat extended main
> zuul pipeline time. I think it makes sense, and that the extended CI time
> will not add so much overhead to the RDO CI execution times as to become a problem.
> WDYT?
>
> [0] https://review.openstack.org/#/c/568275/
> [1] https://review.openstack.org/#/c/568278/
> [2] https://review.openstack.org/#/c/568326/
> [3] https://docs.openstack.org/tripleo-quickstart/latest/feature
> -configuration.html
> [4] http://tripleo.org/cistatus.html
>
> * ignore the column 1, it's obsolete, all CI jobs now using configs
> download AFAICT...
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Jeremy Stanley
On 2018-05-14 18:56:51 + (+), Jeremy Stanley wrote:
[...]
> Gödel's completeness theorem at work
[...]

More accurately, Gödel's first incompleteness theorem, I suppose. ;)
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Jeremy Stanley
On 2018-05-14 12:00:05 -0600 (-0600), Wesley Hayutin wrote:
[...]
> Project non-voting check jobs on the node-pool image creation job
> perhaps could be the canary in the coal mine we are seeking. Maybe
> we could see if that would be something that could be useful to
> both infra and to various OpenStack projects?
[...]

This presumes that Nodepool image builds are Zuul jobs, which they
aren't (at least not today). Long, long ago in a CI system not so
far away, our DevStack-specific image builds were in fact CI jobs
and for a while back then we did run DevStack's "smoke" tests as an
acceptance test before putting a new image into service. At the time
we discovered that even deploying DevStack was too complex and racy
to make for a viable acceptance test. The lesson we learned is that
most of the image regressions we were concerned with preventing
required testing complex enough to be a significant regression
magnet itself (Gödel's completeness theorem at work, I expect?).

That said, the idea of turning more of Nodepool's tasks into Zuul
jobs is an interesting one worthy of lengthy discussion sometime.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-14 Thread Lance Bragstad


On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
>
> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
>
> Both of those are good ideas.
>
>
> Agree. I like the socket idea a bit more as I can imagine some
> operators don't want config file changes automatically applied. Do we
> want to choose one to standardize on or allow each project (or
> operators, via config) the choice?

Just to recap, keystone would be listening for when its configuration
file changes, and reinitialize the logger if the logging settings
changed, correct?

Would that suffice for the goal? We'd be explicit in checking for
logging option changes, so modifications to other configuration options
shouldn't affect anything, should they?

>
> I believe adding those things to oslo.service would make them
> available to all applications. 
>
>
> Not necessarily - this discussion started when the Keystone team was
> discussing how to implement this, given that keystone doesn't use
> oslo.service. That said, it should be easy to implement in services
> that don't want this dependency, so +1.
>
> // jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Clark Boylan
On Mon, May 14, 2018, at 10:11 AM, Wesley Hayutin wrote:
> On Mon, May 14, 2018 at 12:08 PM Clark Boylan  wrote:
> 
> > On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote:
> > > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley 
> > wrote:
> > >
> > > > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:

snip

> > > > Our automation doesn't know that there's a difference between
> > > > packages which were part of CentOS 7.4 and 7.5 any more than it
> > > > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > > > Even if we somehow managed to pause our CentOS image updates
> > > > immediately prior to 7.5, jobs would still try to upgrade those
> > > > 7.4-based images to the 7.5 packages in our mirror, right?
> > > >
> > >
> > > Understood, I suspect this will become a more widespread issue as
> > > more projects start to use containers ( not sure ).  It's my
> > understanding
> > > that
> > > there are some mechanisms in place to pin packages in the centos nodepool
> > > image so
> > > there has been some thoughts generally in the area of this issue.
> >
> > Again, I think we need to understand why containers would make this worse
> > not better. Seems like the big feature everyone talks about when it comes
> > to containers is isolating packaging whether that be python packages so
> > that nova and glance can use a different version of oslo or cohabitating
> > software that would otherwise conflict. Why do the packages on the host
> > platform so strongly impact your container package lists?
> >
> 
> I'll let others comment on that, however my thought is you don't move from
> A -> Z in one step and containers do not make everything easier
> immediately.  Like most things, it takes a little time.
> 

If the main issue is being caught in a transition period at the same time a
minor update happens, can we treat this as a temporary state? Rather than
attempting to solve this particular case happening again in the future, we
might be better served testing that upcoming CentOS releases won't break
tripleo due to changes in the packaging, using the centos-release-cr repo as
Tristan suggests. That should tell you if something like pacemaker were to stop
working. Note this wouldn't require any infra-side updates; you would just have
these jobs configure the additional repo and go from there.

Then on top of that get through the transition period so that the containers 
isolate you from these changes in the way they should. Then when 7.6 happens 
you'll have hopefully identified all the broken packaging ahead of time and 
worked with upstream to address those problems (which should be important for a 
stable long term support distro) and your containers can update at whatever 
pace they choose?

I don't think it would be appropriate for Infra to stage centos minor versions 
for a couple reasons. The first is we don't support specific minor versions of 
CentOS/RHEL, we support the major version and if it updates and OpenStack stops 
working that is CI doing its job and providing that info. The other major 
concern is CentOS specifically says "We are trying to make sure people 
understand they can NOT use older minor versions and still be secure." 
Similarly to how we won't support Ubuntu 12.04 because it is no longer 
supported we shouldn't support CentOS 7.4 at this point. These are no longer 
secure platforms.

However, I think testing using the pre release repo as proposed above should 
allow you to catch issues before updates happen just as well as a staged minor 
version update would. The added benefit of using this process is you should 
know as soon as possible and not after the release has been made (helping other 
users of CentOS by not releasing broken packages in the first place).

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Wesley Hayutin
On Mon, May 14, 2018 at 12:37 PM Jeremy Stanley  wrote:

> On 2018-05-14 09:57:17 -0600 (-0600), Wesley Hayutin wrote:
> > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley 
> wrote:
> [...]
> > > Couldn't a significant burst of new packages cause the same
> > > symptoms even without it being tied to a minor version increase?
> >
> > Yes, certainly this could happen outside of a minor update of the
> > baseos.
>
> Thanks for confirming. So this is not specifically a CentOS minor
> version increase issue, it's just more likely to occur at minor
> version boundaries.
>

Correct, you got it


>
> > So the only thing out of our control is the package set on the
> > base nodepool image. If that suddenly gets updated with too many
> > packages, then we have to scramble to ensure the images and
> > containers are also udpated.
>
> It's still unclear to me why the packages on the test instance image
> (i.e. the "container host") are related to the packages in the
> container guest images at all. That would seem to be the whole point
> of having containers?
>

You are right; just note some services are not 100% containerized yet.
This doesn't happen overnight, it's a process, and we're getting there.


>
> > If there is a breaking change in the nodepool image for example
> > [a], we have to react to and fix that as well.
>
> I would argue that one is a terrible workaround which happened to
> show its warts. We should fix DIB's pip-and-virtualenv element
> rather than continue rely on side effects of pinning RPM versions.
> I've commented to that effect on https://launchpad.net/bugs/1770298
> just now.
>
>
k.. thanks


> > > It sounds like a problem with how the jobs are designed
> > > and expectations around distros slowly trickling package updates
> > > into the series without occasional larger bursts of package deltas.
> > > I'd like to understand more about why you upgrade packages inside
> > > your externally-produced container images at job runtime at all,
> > > rather than relying on the package versions baked into them.
> >
> > We do that to ensure the gerrit review itself and its
> > dependencies are built via rpm and injected into the build. If we
> > did not do this the job would not be testing the change at all.
> > This is a result of being a package based deployment for better or
> > worse.
> [...]
>
> Now I'll risk jumping to proposing solutions, but have you
> considered building those particular packages in containers too?
> That way they're built against the same package versions as will be
> present in the other container images you're using rather than to
> the package versions on the host, right? Seems like it would
> completely sidestep the problem.
>

So a little background: the containers and images used in TripleO are
rebuilt multiple times each day via periodic jobs; when they pass our
criteria they are pushed out and used upstream.
Each zuul change and its dependencies can potentially impact a few or all
of the containers in play.   We cannot rebuild all the containers due to
time constraints in each job.  We have been able to mount and yum update
the containers involved with the zuul change.

The latest patch to fine-tune that process is here
https://review.openstack.org/#/c/567550/


>
> > An enhancement could be to stage the new images for say one week
> > or so. Do we need the CentOS updates immediately? Is there a
> > possible path that does not create a lot of work for infra, but
> > also provides some space for projects to prep for the consumption
> > of the updates?
> [...]
>
> Nodepool builds new images constantly, but at least daily. Part of
> this is to prevent the delta of available packages/indices and other
> files baked into those images from being more than a day or so stale
> at any given point in time. The older the image, the more packages
> (on average) jobs will need to download if they want to test with
> latest package versions and the more strain it will put on our
> mirrors and on our bandwidth quotas/donors' networks.
>

Sure that makes perfect sense.  We do the same with our containers and
images.


>
> There's also a question of retention, if we're building images at
> least daily but keeping them around for 7 days (storage on the
> builders, tenant quotas for Glance in our providers) as well as the
> explosion of additional nodes we'd need since we pre-boot nodes with
> each of our images (and the idea as I understand it is that you
> would want jobs to be able to select between any of them). One
> option, I suppose, would be to switch to building images weekly
> instead of daily, but that only solves the storage and node count
> problem not the additional bandwidth and mirror load. And of course,
> nodepool would need to learn to be able to boot nodes from older
> versions of an image on record which is not a feature it has right
> now.
>

OK.. thanks for walking me through that.  It totally makes sense to be
concerned with 

Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Wesley Hayutin
On Mon, May 14, 2018 at 12:08 PM Clark Boylan  wrote:

> On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote:
> > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley 
> wrote:
> >
> > > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
> > > [...]
>
> snip
>
> > >
> > > This _doesn't_ sound to me like a problem with how we've designed
> > > our infrastructure, unless there are additional details you're
> > > omitting.
> >
> >
> > So the only thing out of our control is the package set on the base
> > nodepool image.
> > If that suddenly gets updated with too many packages, then we have to
> > scramble to ensure the images and containers are also updated.
> > If there is a breaking change in the nodepool image for example [a], we
> > have to react to and fix that as well.
>
> Aren't the container images independent of the hosting platform (eg what
> infra hosts)? I'm not sure I understand why the host platform updating
> implies all the container images must also be updated.
>

You make a fine point here; I think, as with anything, there are some bits
that are still being worked on. At this moment it's my understanding that
pacemaker and possibly a few other components are not 100% containerized
atm.  I'm not an expert in the subject and my understanding may not be
correct.  Until you are 100% containerized there may still be some
dependencies on the base image and an impact from changes.


>
> >
> >
> > > It sounds like a problem with how the jobs are designed
> > > and expectations around distros slowly trickling package updates
> > > into the series without occasional larger bursts of package deltas.
> > > I'd like to understand more about why you upgrade packages inside
> > > your externally-produced container images at job runtime at all,
> > > rather than relying on the package versions baked into them.
> >
> >
> > We do that to ensure the gerrit review itself and its dependencies are
> > built via rpm and injected into the build.
> > If we did not do this the job would not be testing the change at all.
> >  This is a result of being a package based deployment for better or
> worse.
>
> You'd only need to do that for the change in review, not the entire system
> right?
>

Correct, there is no intention of updating the entire distribution at run
time; the intent is to have as much as possible updated in the jobs that
build the containers and images.
Only the RPM built from the zuul change should be included in the update;
however, some zuul changes require a CentOS base package that was not
previously installed in the container, e.g. a new Python dependency
introduced in a zuul change.  Previously we had not enabled any CentOS
repos in the container update, but we found that was not viable 100% of the time.

We have a change to further limit the scope of the update, which should help
[1], especially when facing a minor version update.

 [1] https://review.openstack.org/#/c/567550/

>
> >
>
> snip
>
> > > Our automation doesn't know that there's a difference between
> > > packages which were part of CentOS 7.4 and 7.5 any more than it
> > > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > > Even if we somehow managed to pause our CentOS image updates
> > > immediately prior to 7.5, jobs would still try to upgrade those
> > > 7.4-based images to the 7.5 packages in our mirror, right?
> > >
> >
> > Understood, I suspect this will become a more widespread issue as
> > more projects start to use containers ( not sure ).  It's my
> understanding
> > that
> > there are some mechanisms in place to pin packages in the centos nodepool
> > image so
> > there have been some thoughts generally in the area of this issue.
>
> Again, I think we need to understand why containers would make this worse
> not better. Seems like the big feature everyone talks about when it comes
> to containers is isolating packaging whether that be python packages so
> that nova and glance can use a different version of oslo or cohabitating
> software that would otherwise conflict. Why do the packages on the host
> platform so strongly impact your container package lists?
>

I'll let others comment on that; however, my thought is that you don't move
from A to Z in one step, and containers do not make everything easier
immediately.  Like most things, it takes a little time.

>
> >
> > TripleO may be the exception to the rule here and that is fine, I'm more
> > interested in exploring
> > the possibilities of delivering updates in a staged fashion than
> anything.
> > I don't have insight into
> > what the possibilities are, or if other projects have similar issues or
> > requests.  Perhaps the TripleO
> > project could share the details of our job workflow with the community
> and
> > this would make more sense.
> >
> > I appreciate your time, effort and thoughts you have shared in the
> thread.
> >
> >
> > > --
> > > Jeremy Stanley
> > >
> >
> > [a] https://bugs.launchpad.net/tripleo/+bug/1770298
>
> 

Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-14 Thread Victoria Martínez de la Cruz
2018-05-08 16:31 GMT-03:00 Zane Bitter :

> On 08/05/18 15:16, Matthew Treinish wrote:
>
>> Although, I don't think glance uses oslo.service even in the case where
>> it's
>> using the standalone eventlet server. It looks like it launches
>> eventlet.wsgi
>> directly:
>>
>> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
>>
>> and I don't see oslo.service in the requirements file either:
>>
>> https://github.com/openstack/glance/blob/master/requirements.txt
>>
>
> It would probably independently suffer from
> https://bugs.launchpad.net/manila/+bug/1482633 in Python 3 then. IIUC the code started in oslo
> incubator but projects like neutron and manila converted to use the
> oslo.service version. There may be other copies of it still floating
> around...
>
> - ZB
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi,

Jumping in now as I'm helping with py3 support efforts in the manila side.

In manila we support both Apache WSGI and the built-in server
(which depends on eventlet). Would it be a possible workaround to rely on
the Apache WSGI server while we wait for the eventlet issues to be sorted
out? Is there any chance the upper constraints will be updated soon-ish so
this can be fixed in a newer eventlet version?

This is the only thing preventing us from being fully py3 compatible,
hence it's a big deal for us.
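
For context, the "built-in" server path in question boils down to spawning
eventlet's WSGI server directly. A minimal illustrative sketch (not manila's
actual code; 8786 is just manila-api's usual port):

    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        # stand-in for the real paste-deploy WSGI application
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']

    # this eventlet-based path is where the py3/eventlet issue discussed
    # above lives; serving the same app under Apache mod_wsgi bypasses it
    sock = eventlet.listen(('0.0.0.0', 8786))
    eventlet.wsgi.server(sock, app)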

Thanks,

Victoria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Jeremy Stanley
On 2018-05-14 09:57:17 -0600 (-0600), Wesley Hayutin wrote:
> On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley  wrote:
[...]
> > Couldn't a significant burst of new packages cause the same
> > symptoms even without it being tied to a minor version increase?
> 
> Yes, certainly this could happen outside of a minor update of the
> baseos.

Thanks for confirming. So this is not specifically a CentOS minor
version increase issue, it's just more likely to occur at minor
version boundaries.

> So the only thing out of our control is the package set on the
> base nodepool image. If that suddenly gets updated with too many
> packages, then we have to scramble to ensure the images and
> containers are also updated.

It's still unclear to me why the packages on the test instance image
(i.e. the "container host") are related to the packages in the
container guest images at all. That would seem to be the whole point
of having containers?

> If there is a breaking change in the nodepool image for example
> [a], we have to react to and fix that as well.

I would argue that one is a terrible workaround which happened to
show its warts. We should fix DIB's pip-and-virtualenv element
rather than continue to rely on side effects of pinning RPM versions.
I've commented to that effect on https://launchpad.net/bugs/1770298
just now.

> > It sounds like a problem with how the jobs are designed
> > and expectations around distros slowly trickling package updates
> > into the series without occasional larger bursts of package deltas.
> > I'd like to understand more about why you upgrade packages inside
> > your externally-produced container images at job runtime at all,
> > rather than relying on the package versions baked into them.
> 
> We do that to ensure the gerrit review itself and its
> dependencies are built via rpm and injected into the build. If we
> did not do this the job would not be testing the change at all.
> This is a result of being a package based deployment for better or
> worse.
[...]

Now I'll risk jumping to proposing solutions, but have you
considered building those particular packages in containers too?
That way they're built against the same package versions as will be
present in the other container images you're using rather than to
the package versions on the host, right? Seems like it would
completely sidestep the problem.

> An enhancement could be to stage the new images for say one week
> or so. Do we need the CentOS updates immediately? Is there a
> possible path that does not create a lot of work for infra, but
> also provides some space for projects to prep for the consumption
> of the updates?
[...]

Nodepool builds new images constantly, but at least daily. Part of
this is to prevent the delta of available packages/indices and other
files baked into those images from being more than a day or so stale
at any given point in time. The older the image, the more packages
(on average) jobs will need to download if they want to test with
latest package versions and the more strain it will put on our
mirrors and on our bandwidth quotas/donors' networks.

There's also a question of retention, if we're building images at
least daily but keeping them around for 7 days (storage on the
builders, tenant quotas for Glance in our providers) as well as the
explosion of additional nodes we'd need since we pre-boot nodes with
each of our images (and the idea as I understand it is that you
would want jobs to be able to select between any of them). One
option, I suppose, would be to switch to building images weekly
instead of daily, but that only solves the storage and node count
problem not the additional bandwidth and mirror load. And of course,
nodepool would need to learn to be able to boot nodes from older
versions of an image on record which is not a feature it has right
now.

> Understood, I suspect this will become a more widespread issue as
> more projects start to use containers ( not sure ).

I'm still confused as to what makes this a container problem in the
general sense, rather than just a problem (leaky abstraction) with
how you've designed the job framework in which you're using them.

> It's my understanding that there are some mechanisms in place to
> pin packages in the centos nodepool image so there have been some
> thoughts generally in the area of this issue.
[...]

If this is a reference back to bug 1770298, as mentioned already I
think that's a mistake in diskimage-builder's stdlib which should be
corrected, not a pattern we should propagate.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] review runway status

2018-05-14 Thread melanie witt

Howdy everyone,

This is just a brief status about the blueprints currently occupying
review runways [0] and an ask for the nova-core team to give these
reviews priority for their code review focus.

* Add z/VM driver
https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky 
(jichen) [END DATE: 2018-05-15] spec amendment

https://review.openstack.org/562154 and implementation series starting
at https://review.openstack.org/523387

* Local disk serial numbers
https://blueprints.launchpad.net/nova/+spec/local-disk-serial-numbers
(mdbooth) [END DATE: 2018-05-16] series starting at
https://review.openstack.org/526346

* PowerVM Driver (esberglu) [END DATE: 2018-05-28]
  * Snapshot 
https://blueprints.launchpad.net/nova/+spec/powervm-snapshot: 
https://review.openstack.org/#/c/543023/
  * DiskAdapter parent class 
https://blueprints.launchpad.net/nova/+spec/powervm-localdisk: 
https://review.openstack.org/#/c/549053/
  * Localdisk 
https://blueprints.launchpad.net/nova/+spec/powervm-localdisk: 
https://review.openstack.org/#/c/549300/


Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Meeting week of May 21st cancelled

2018-05-14 Thread Julia Kreger
All,

The ironic meeting next week is cancelled as we will have some
attendees in Vancouver for the summit and forum. The next meeting will
be May 28th.

I have updated the wiki[1] page accordingly.

-Julia

[1]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-14 Thread Bogdan Dobrelya

An update for your review please folks


Bogdan Dobrelya  writes:


Hello.
As Zuul documentation [0] explains, the names "check", "gate", and
"post"  may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to make
the consequent steps reusing environments from the previous steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want we to solve
with this "virtual RFE", and using such multi-staged check pipelines,
is reducing (ideally, de-duplicating) some of the common steps for
existing CI jobs.


What you're describing sounds more like a job graph within a pipeline.
See: 
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
for how to configure a job to run only after another job has completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as possible,
rather than forcing an iterative workflow where they have to fix all the
whitespace issues before the CI system will tell them which actual tests
broke.

-Jim


I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for 
undercloud deployments vs. upgrades testing (and some more). Given that 
those undercloud jobs do not have very high failure rates though, I think 
Emilien is right in his comments and those would buy us nothing.


On the other hand, what do you think, folks, of making
tripleo-ci-centos-7-3nodes-multinode depend on 
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite 
failure-prone and long running, and is non-voting. It deploys 3 nodes in HA 
fashion (see featureset configs [3]*). And it almost never passes when 
containers-multinode fails - see the CI stats page [4]. I've found only 2 
cases there of the opposite situation, where containers-multinode fails but 
3nodes-multinode passes. So cutting off those future failures via the added 
dependency *would* buy us something and allow other jobs to wait less to 
commence, at the reasonable price of a somewhat extended run time of the 
main zuul pipeline. I think it makes sense and that the extended CI time 
will not push the RDO CI execution times up enough to become a problem. WDYT?


[0] https://review.openstack.org/#/c/568275/
[1] https://review.openstack.org/#/c/568278/
[2] https://review.openstack.org/#/c/568326/
[3] 
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html

[4] http://tripleo.org/cistatus.html

* ignore column 1, it's obsolete; all CI jobs are now using config 
download AFAICT...


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-dev] [nova] Cannot live migrattion, because error:libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_lo

2018-05-14 Thread Sahid Orentino Ferdjaoui
On Mon, May 14, 2018 at 11:23:51AM +0800, 何健乐 wrote:
> Hi, all 
> When I did live-migration, I got the following error: result = 
> proxy_call(self._autowrap, f, *args, **kwargs)May 14 10:33:11  
> nova-compute[981335]: File 
> "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
> May 14 10:33:11  nova-compute[981335]: rv = execute(f, *args, **kwargs)
> May 14 10:33:11  nova-compute[981335]: File 
> "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
> May 14 10:33:11  nova-compute[981335]: six.reraise(c, e, tb)
> May 14 10:33:11  nova-compute[981335]: File 
> "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
> May 14 10:33:11  nova-compute[981335]: rv = meth(*args, **kwargs)
> May 14 10:33:11  nova-compute[981335]: File 
> "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
> May 14 10:33:11  nova-compute[981335]: if ret == -1: raise libvirtError 
> ('virDomainMigrateToURI3() failed', dom=self)
> May 14 10:33:11  nova-compute[981335]: libvirtError: the CPU is incompatible 
> with host CPU: Host CPU does not provide required features: cmt, mbm_total, 
> mbm_local
> Is there anyone who has a solution for this problem?
> 
> Thanks

This could be because you are running an older libvirt version on the
destination node which does not know anything about the cache or
memory bandwidth monitoring features from Intel. Upgrading your
libvirt version should resolve the issue.

Or you are effectively trying to live-migrate a host-model domain to a
destination node that does not support such features. To resolve it
you should update your nova.conf to use a CPU model for your guests
that will be compatible with both of your hosts.

In nova.conf, under the [libvirt] section:

cpu_mode=custom
cpu_model=Haswell

Then you should restart the nova-compute service and hard reboot the
instance so it will take the new CPU configuration into account.

s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] No meeting next two weeks

2018-05-14 Thread Ben Nemec
As discussed in the meeting this week, we plan to skip the Oslo meeting 
for the next two weeks.  The first is during Summit, and the second is 
the first full day back for many of us so it's unlikely there will be 
much new to talk about.  Meetings will resume as normal after that, and 
if anything comes up in the meantime we can adjust our plans if needed.


Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Clark Boylan
On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote:
> On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley  wrote:
> 
> > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
> > [...]

snip

> >
> > This _doesn't_ sound to me like a problem with how we've designed
> > our infrastructure, unless there are additional details you're
> > omitting.
> 
> 
> So the only thing out of our control is the package set on the base
> nodepool image.
> If that suddenly gets updated with too many packages, then we have to
> scramble to ensure the images and containers are also updated.
> If there is a breaking change in the nodepool image for example [a], we
> have to react to and fix that as well.

Aren't the container images independent of the hosting platform (eg what infra 
hosts)? I'm not sure I understand why the host platform updating implies all 
the container images must also be updated.

> 
> 
> > It sounds like a problem with how the jobs are designed
> > and expectations around distros slowly trickling package updates
> > into the series without occasional larger bursts of package deltas.
> > I'd like to understand more about why you upgrade packages inside
> > your externally-produced container images at job runtime at all,
> > rather than relying on the package versions baked into them.
> 
> 
> We do that to ensure the gerrit review itself and its dependencies are
> built via rpm and injected into the build.
> If we did not do this the job would not be testing the change at all.
>  This is a result of being a package based deployment for better or worse.

You'd only need to do that for the change in review, not the entire system 
right?

> 

snip

> > Our automation doesn't know that there's a difference between
> > packages which were part of CentOS 7.4 and 7.5 any more than it
> > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > Even if we somehow managed to pause our CentOS image updates
> > immediately prior to 7.5, jobs would still try to upgrade those
> > 7.4-based images to the 7.5 packages in our mirror, right?
> >
> 
> Understood, I suspect this will become a more widespread issue as
> more projects start to use containers ( not sure ).  It's my understanding
> that
> there are some mechanisms in place to pin packages in the centos nodepool
> image so
> there have been some thoughts generally in the area of this issue.

Again, I think we need to understand why containers would make this worse not 
better. Seems like the big feature everyone talks about when it comes to 
containers is isolating packaging whether that be python packages so that nova 
and glance can use a different version of oslo or cohabitating software that 
would otherwise conflict. Why do the packages on the host platform so strongly 
impact your container package lists?

> 
> TripleO may be the exception to the rule here and that is fine, I'm more
> interested in exploring
> the possibilities of delivering updates in a staged fashion than anything.
> I don't have insight into
> what the possibilities are, or if other projects have similar issues or
> requests.  Perhaps the TripleO
> project could share the details of our job workflow with the community and
> this would make more sense.
> 
> I appreciate your time, effort and thoughts you have shared in the thread.
> 
> 
> > --
> > Jeremy Stanley
> >
> 
> [a] https://bugs.launchpad.net/tripleo/+bug/1770298

I think understanding the questions above may be the important aspect of 
understanding what the underlying issue is here and how we might address it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Wesley Hayutin
On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley  wrote:

> On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
> [...]
> > I think you may be conflating the notion that ubuntu or rhel/cent
> > can be updated w/o any issues to applications that run atop of the
> > distributions with what it means to introduce a minor update into
> > the upstream openstack ci workflow.
> >
> > If jobs could execute w/o a timeout the tripleo jobs would have
> > not gone red.  Since we do have constraints in the upstream like a
> > timeouts and others we have to prepare containers, images etc to
> > work efficiently in the upstream.  For example, if our jobs had
> > the time to yum update the roughly 120 containers in play in each
> > job the tripleo jobs would have just worked.  I am not advocating
> > for not having timeouts or constraints on jobs, however I am
> > saying this is an infra issue, not a distribution or distribution
> > support issue.
> >
> > I think this is an important point to consider and I view it as
> > mostly unrelated to the support claims by the distribution.  Does
> > that make sense?
> [...]
>
> Thanks, the thread jumped straight to suggesting costly fixes
> (separate images for each CentOS point release, adding an evaluation
> period or acceptance testing for new point releases, et cetera)
> without coming anywhere close to exploring the problem space. Is
> your only concern that when your jobs started using CentOS 7.5
> instead of 7.4 they took longer to run?


Yes, if they had unlimited time to run, our workflow would have everything
updated to CentOS 7.5 in the job itself and I would expect everything to
just work.


> What was the root cause? Are
> you saying your jobs consume externally-produced artifacts which lag
> behind CentOS package updates?


Yes, TripleO has externally produced overcloud images and containers, both
of which can be yum updated, but we try to ensure they are frequently
recreated so the yum transaction is small.


> Couldn't a significant burst of new
> packages cause the same symptoms even without it being tied to a
> minor version increase?
>

Yes, certainly this could happen outside of a minor update of the baseos.


>
> This _doesn't_ sound to me like a problem with how we've designed
> our infrastructure, unless there are additional details you're
> omitting.


So the only thing out of our control is the package set on the base
nodepool image.
If that suddenly gets updated with too many packages, then we have to
scramble to ensure the images and containers are also updated.
If there is a breaking change in the nodepool image for example [a], we
have to react to and fix that as well.


> It sounds like a problem with how the jobs are designed
> and expectations around distros slowly trickling package updates
> into the series without occasional larger bursts of package deltas.
> I'd like to understand more about why you upgrade packages inside
> your externally-produced container images at job runtime at all,
> rather than relying on the package versions baked into them.


We do that to ensure the gerrit review itself and its dependencies are
built via rpm and injected into the build.
If we did not do this the job would not be testing the change at all.
 This is a result of being a package based deployment for better or worse.


> It
> seems like you're arguing that the existence of lots of new package
> versions which aren't already in your container images is the
> problem, in which case I have trouble with the rationalization of it
> being "an infra issue" insofar as it requires changes to the
> services as provided by the OpenStack Infra team.
>
> Just to be clear, we didn't "introduce a minor update into the
> upstream openstack ci workflow." We continuously pull CentOS 7
> packages into our package mirrors, and continuously rebuild our
> centos-7 images from whatever packages the distro says are current.
>

Understood, which I think is fine and probably works for most projects.
An enhancement could be to stage the new images for say one week or so.
Do we need the CentOS updates immediately? Is there a possible path that
does not create a lot of work for infra, but also provides some space for
projects
to prep for the consumption of the updates?


> Our automation doesn't know that there's a difference between
> packages which were part of CentOS 7.4 and 7.5 any more than it
> knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> Even if we somehow managed to pause our CentOS image updates
> immediately prior to 7.5, jobs would still try to upgrade those
> 7.4-based images to the 7.5 packages in our mirror, right?
>

Understood, I suspect this will become a more widespread issue as
more projects start to use containers ( not sure ).  It's my understanding
that
there are some mechanisms in place to pin packages in the centos nodepool
image so
there have been some thoughts generally in the area of this issue.

TripleO may 

Re: [openstack-dev] FW: [docs][openstack-ansible] Stepping down from core

2018-05-14 Thread Petr Kovar
Alex,

Many thanks for your community leadership, your guidance and help that
was essential during the transition period, and really for all your efforts
that you have put into keeping the docs team up and running.

(Updated the perms accordingly.)

Thanks,
pk


On Wed, 9 May 2018 13:22:04 +
Alexandra Settle  wrote:

> Man I’m so smart I sent a Dear John letter to the ML and forgot the subject 
> header.
> 
> SMOOTH MOVE.
> 
> From: Alexandra Settle 
> Date: Wednesday, May 9, 2018 at 2:13 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: Petr Kovar , Jean-Philippe Evrard 
> 
> Subject: [openstack-dev][docs][openstack-ansible]
> 
> Hi all,
> 
> It is with a super heavy heart I have to say that I need to step down as core 
> from the OpenStack-Ansible and Documentation teams – and take a step back 
> from the community.
> 
> The last year has taken me in a completely different direction to what I 
> expected, and try as I might I just don’t have the time to be even a 
> part-time member of this great community :(
> 
> Although I’m moving on, and learning new things, nothing can beat the 
> memories of  SnowpenStack and Denver’s super awesome trains.
> 
> I know this isn’t some acceptance speech at the Oscars – but I just want to 
> thank the Foundation and everyone who donates to the travel program. Without 
> you guys, I wouldn’t have been a part of the community as much as I have been 
> and met all your lovely faces.
> 
> I have had such a great time being a part of something as exciting and new as 
> OpenStack, and I hope to continue to lurk in the background of IRC like a 
> total weirdo. I hope to perform some super shit karaoke with you all in 
> another part of the world :) (who knows, maybe I’ll just tag along to PTG’s 
> as a social outing… how cool am I?!)
> 
> I’d also like to thank Mugsie for this sweet shot which is the perfect 
> summary of my time with the OpenStack community. Read into this what you will:
> 
> [inline image]
> 
> Don’t be a stranger,
> 
> Alex
> 

> IRC: asettle
> Twitter: dewsday
> Email: a.set...@outlook.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Zuul repo insertion in update/upgrade CI

2018-05-14 Thread Jiří Stránský

Hi,

this is mainly for CI folks and whom-it-may-concern.

Recently we came across the topic of how to enable/disable zuul repos at 
various places in the CI jobs. For normal deploy jobs there's no need to 
customize, but for update/upgrade jobs there is. It's not entirely 
straightforward and there's quite a variety of enable/disable spots and 
combinations which can be useful.


Even though improvements in this area are not very likely to get 
implemented right away, I had some thoughts on the topic, so I wanted to 
capture them. I put the ideas into an etherpad:


https://etherpad.openstack.org/p/tripleo-ci-zuul-repo-insertion

Feel free to put some more thoughts there or ping me on IRC with 
anything related.



Thanks

Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-server declaring itself as up too early

2018-05-14 Thread Thomas Goirand
Hi,

It looks to me (I'm not sure yet...) like neutron-server is declaring
itself as up when it's not. As a consequence, puppet-openstack fails for me
by running "neutron net-list" too early.

Could it be possible that the systemd notify is called at the wrong
place? If so, how could this be fixed?
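
For reference, readiness is signalled to systemd (Type=notify) via
oslo.service's sd_notify helper; a minimal sketch of the intended ordering,
assuming neutron goes through oslo.service's launcher - the real call site
in neutron-server may differ:

    from oslo_service import systemd

    def run_api_server(server):
        # 'server' is a stand-in for whatever launcher/service object
        # neutron actually uses
        server.start()  # bind sockets and spawn API/RPC workers
        # systemd only marks the unit active once READY=1 is sent; if this
        # runs before the workers can really serve requests, a subsequent
        # "neutron net-list" can race the server
        systemd.notify_once()
        server.wait()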

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Jeremy Stanley
On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
[...]
> I think you may be conflating the notion that ubuntu or rhel/cent
> can be updated w/o any issues to applications that run atop of the
> distributions with what it means to introduce a minor update into
> the upstream openstack ci workflow.
> 
> If jobs could execute w/o a timeout the tripleo jobs would have
> not gone red.  Since we do have constraints in the upstream like a
> timeouts and others we have to prepare containers, images etc to
> work efficiently in the upstream.  For example, if our jobs had
> the time to yum update the roughly 120 containers in play in each
> job the tripleo jobs would have just worked.  I am not advocating
> for not having timeouts or constraints on jobs, however I am
> saying this is an infra issue, not a distribution or distribution
> support issue.
> 
> I think this is an important point to consider and I view it as
> mostly unrelated to the support claims by the distribution.  Does
> that make sense?
[...]

Thanks, the thread jumped straight to suggesting costly fixes
(separate images for each CentOS point release, adding an evaluation
period or acceptance testing for new point releases, et cetera)
without coming anywhere close to exploring the problem space. Is
your only concern that when your jobs started using CentOS 7.5
instead of 7.4 they took longer to run? What was the root cause? Are
you saying your jobs consume externally-produced artifacts which lag
behind CentOS package updates? Couldn't a significant burst of new
packages cause the same symptoms even without it being tied to a
minor version increase?

This _doesn't_ sound to me like a problem with how we've designed
our infrastructure, unless there are additional details you're
omitting. It sounds like a problem with how the jobs are designed
and expectations around distros slowly trickling package updates
into the series without occasional larger bursts of package deltas.
I'd like to understand more about why you upgrade packages inside
your externally-produced container images at job runtime at all,
rather than relying on the package versions baked into them. It
seems like you're arguing that the existence of lots of new package
versions which aren't already in your container images is the
problem, in which case I have trouble with the rationalization of it
being "an infra issue" insofar as it requires changes to the
services as provided by the OpenStack Infra team.

Just to be clear, we didn't "introduce a minor update into the
upstream openstack ci workflow." We continuously pull CentOS 7
packages into our package mirrors, and continuously rebuild our
centos-7 images from whatever packages the distro says are current.
Our automation doesn't know that there's a difference between
packages which were part of CentOS 7.4 and 7.5 any more than it
knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
Even if we somehow managed to pause our CentOS image updates
immediately prior to 7.5, jobs would still try to upgrade those
7.4-based images to the 7.5 packages in our mirror, right?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 20

2018-05-14 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs


[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
Fix needs some additional work: https://review.openstack.org/#/c/564528/

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that
the different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth
usage db query in notifications when the virt driver does not support
collecting such data

[Medium] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/
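
For whoever picks this up, a rough sketch of the direction (using the public
mask_dict_password helper rather than the private _SANITIZE_KEYS list; this
is illustrative only, not the actual nova patch):

    from oslo_utils import strutils

    def _cleanse_dict(original):
        """Mask anything that looks like a password before it is logged
        or attached to a notification payload."""
        return strutils.mask_dict_password(original, secret='***')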


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification - lost the +2 due to a merge conflict



Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the 
author but no response.



Add the user id and project id of the user initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the 
author but no response.

Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch 
https://review.openstack.org/#/c/564092/



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see 
basic problems with the overall bp. See Matt's mail from the ML 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html



Add notification support for trusted_certs
--
This is part of the bp nova-validate-certificates implementation series 
to extend some of the instance notifications. The implementation looks 
good to me in: https://review.openstack.org/#/c/563269



Introduce Pending VM state
--
The spec https://review.openstack.org/#/c/554212 proposes a 
notification change to signal when a VM goes to PENDING state. However, 
this information is already available from the versioned 
instance.update notification. The discussion in the spec is ongoing.



Weekly meeting
--
I have to cancel this week's meeting and next week most of us will be 
in Vancouver. So the next meeting will be held on 29th of May on 
#openstack-meeting-4

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180529T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Update, 14 May

2018-05-14 Thread Doug Hellmann
This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard
at:https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

https://review.openstack.org/#/c/565877/ : governance change adding
constellations repo to doc team

https://review.openstack.org/#/c/565814/ : add goal-tools repo to tc
list

https://review.openstack.org/#/c/564830/ : add
ansible-role-tripleo-keystone to governance

https://review.openstack.org/#/c/565385/ : retire kolla-kubernetes

https://review.openstack.org/#/c/566541/ : add os_blazar to
openstack-ansible

https://review.openstack.org/#/c/565538/ : remove bandit from the
governance repository

New topics:

Zane has proposed an update to the requirements for affiliation
diversity for new projects [0].

[0] https://review.openstack.org/#/c/567944/

== Ongoing Discussions ==

The patch to update the Python 3.5 goal for Kolla [1] adds a new
deliverable to the old goal, and it isn't clear whether we want to
do that. TC members, please comment in the openstack-dev thread
[2].

[1] https://review.openstack.org/557863
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130236.html

The proposal to allow teams to drop python 2 support has not had
as much discussion as I expected [3][4]. There will be a forum
session covering this topic in Vancouver [5].

[3] https://review.openstack.org/561922
[4] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129866.html
[5] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21741/python-2-deprecation-timeline

The new repository for documenting constellations is set up and
ready to receive proposals.

The Adjutant project application [6] is still under review, and the
only votes registered are opposed. Last week I mentioned the TC
retrospective session, but left out mention of the session cdent
is moderating on project "boundaries" [7] and the one mugsie is
moderating on Adjutant itself [8]. Sorry for the oversight.

[6] https://review.openstack.org/553643
[7] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21739/official-projects-and-the-boundary-of-what-is-openstack
[8] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21752/adjutant-official-project-status

== TC member actions/focus/discussions for the coming week(s) ==

I have added the two items raised by TC members to the draft
agenda for the joint Board/TC/UC meeting to be held in Vancouver
(see the wiki page [9] under "Strategic Discussions" and "Next steps
for fixing bylaws typo"). Please keep in mind that the time allocations
and content of the meeting are still subject to change.

[9] https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting

We will also hold a retrospective for the TC as a team on Monday
at the Forum.  Please be prepared to discuss things you think are
going well, things you think we need to change, items from our
backlog that you would like to work on, etc. [10]

[10] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective

I need to revise the patch to update the expectations for goal
champions based on existing feedback. [11]

[11] https://review.openstack.org/564060

We have several items on our backlog that need owners. TC members,
please review the storyboard list [12] and consider taking on one
of the tasks that we agreed we would do.

[12] https://storyboard.openstack.org/#!/project/923

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time
slots for synchronous communication. We hope that by having several
such times scheduled, we will have more opportunities to engage
with members of the community from different timezones.

Office hour times in #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add
it to our office hour conversation starter etherpad
at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most
of the time, so please do not feel that you need to wait for an
office hour time to pose a question or offer a suggestion. You can
use the string "tc-members" to alert the members to your question.

If you expect your topic to require significant discussion or to
need input from members of the community other than the TC, please
start a mailing list discussion on openstack-dev at lists.openstack.org
and use the subject tag "[tc]" to bring it to the attention of TC
members.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [docs][openstack-ansible]

2018-05-14 Thread Jean-Philippe Evrard
Can't. use. words.

Much sadness! But happiness for you and your future, at the same time :)
It was a pleasure to work alongside you.

https://media.giphy.com/media/IcGkqdUmYLFGE/giphy.gif

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-14 Thread Akihiro Motoki
2018年5月14日(月) 21:42 Doug Hellmann :

> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900:
> > 2018年5月12日(土) 3:04 Doug Hellmann :
> >
> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> > > > Hi zigo and horizon plugin maintainers,
> > > >
> > > > Horizon itself already supports Django 2.0 and horizon unit test
> covers
> > > > Django 2.0 with Python 3.5.
> > > >
> > > > A question to all is whether we change the upper bound of Django from
> > > <2.0
> > > > to <2.1.
> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> > > > (Note that Django 1.11 will continue to be used for python 2.7
> > > environment.)
> > >
> > > Do we need to cap it at all? We've been trying to express our
> > > dependencies without caps and rely on the constraints list to
> > > test using a common version because this offers the most flexibility as
> > > we move to newer versions over time.
> > >
> >
> > The main reason we cap django version so far is that django minor version
> > releases
> > contain some backward incompatible changes and also drop deprecated
> > features.
> > A new django minor version release like 1.11 usually breaks horizon and
> > plugins
> > as horizon developers are not always checking django deprecations.
>
> OK. Having the cap in place makes it more complicated to test
> upgrading, and then upgrade. Because we no longer synchronize
> requirements, changing openstack/requirements does not trigger the
> bot to propose the same change to all of the projects using the
> dependency. Someone will have to do that by hand in the future, as we
> are doing with eventlet right now
> (https://review.openstack.org/#/q/topic:uncap-eventlet).
>
> Without the cap, we can test the upgrade by proposing a constraint
> update and running the horizon (and/or plugin) unit tests. When those
> tests pass, we can then step forward all at once by approving the
> constraint change.
>

Thanks for the detailed context.

Honestly I am not sure whether it is better to cap or uncap the Django version.
We can try uncapping now and see what happens in the community.

The cross-horizon-(py27|py35) jobs of openstack/requirements check whether
horizon works with a new version. That works for horizon, but it could
potentially break horizon plugins, as it takes them time to catch up with
such changes. On the other hand, a version bump in upper-constraints.txt
would be a good trigger for horizon plugin maintainers to sync all
requirements.

In addition, requirements are not synchronized automatically, so it does
not seem feasible to propose requirements changes for each Django version
change.
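
For plugin maintainers wondering what catching up with Django 2.0 actually
involves, the changes are usually mechanical; an illustrative sketch, not
taken from any particular plugin:

    # django.core.urlresolvers was removed in Django 2.0; django.urls has
    # existed since 1.10, so the import can be made compatible with both
    try:
        from django.urls import reverse
    except ImportError:
        from django.core.urlresolvers import reverse  # Django < 1.10

    # is_authenticated became a property in 1.10, and calling it as a
    # method is no longer supported in 2.0
    def can_view(request):
        return request.user.is_authenticated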


>
> >
> > I have a question on uncapping the django version.
> > How can users/operators know which versions are supported?
> > Do they need to check upper-constraints.txt?
>
> We do tell downstream consumers that the upper-constraints.txt file is
> the set of things we test with, and that any other combination of
> packages would need to be tested on their systems separately.
>
> >
> > > > There are several points we should consider:
> > > > - If we change it in global-requirements.txt, it means Django 2.0
> will be
> > > > used for python3.5 environment.
> > > > - Not a small number of horizon plugins still do not support Django
> 2.0,
> > > so
> > > > bumping the upper bound to <2.1 will break their py35 tests.
> > > > - From my experience of Django 2.0 support in some plugins, the
> required
> > > > changes are relatively simple like [1].
> > > >
> > > > I created an etherpad page to track Django 2.0 support in horizon
> > > plugins.
> > > > https://etherpad.openstack.org/p/django20-support
> > > >
> > > > I proposed Django 2.0 support patches to several projects which I
> think
> > > are
> > > > major.
> > > > # Do not blame me if I don't cover your project :)
> > > >
> > > > Thought?
> > >
> > > It seems like a good goal for the horizon-plugin author community
> > > to bring those projects up to date by supporting a current version
> > > of Django (and any other dependencies), especially as we discuss
> > > the impending switch over to python-3-first and then python-3-only.
> > >
> >
> > Yes, python 3 support is an important topic.
> > We also need to switch the default python version in mod_wsgi in DevStack
> > environment sooner or later.
>
> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor
> the variable that tells devstack to use Python 3?
>

Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi
and libapache2-mod-wsgi-py3) and, from a quick look, the only difference is
the module specified in the Apache LoadModule directive.
I haven't tested it yet, but it seems worth exploring.

Akihiro


> >
> > > If this is an area where teams need help, updating that etherpad
> > > with notes and requests for assistance will help us split up the
> > > work.
> > >
> >
> > Each team can help testing in Django 2.0 and/or python 3 support.
> > We 

Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-14 Thread Wesley Hayutin
On Sun, May 13, 2018 at 11:30 PM Jeremy Stanley  wrote:

> On 2018-05-13 20:44:25 -0600 (-0600), Wesley Hayutin wrote:
> [...]
> > I do think it would be helpful to say have a one week change
> > window where folks are given the opportunity to preflight check a
> > new image and the potential impact on the job workflow the updated
> > image may have. If I could update or create a non-voting job w/
> > the new image that would provide two things.
> >
> > 1. The first is the head's up, this new minor version of centos is
> > coming into the system and you have $x days to deal with it.
> >
> > 2. The ability to build a few non-voting jobs w/ the new image to
> > see what kind of impact it has on the workflow and deployments.
> [...]
>
> While I can see where you're coming from, right now even the Infra
> team doesn't know immediately when a new CentOS minor release starts
> to be used. The packages show up in the mirrors automatically and
> images begin to be built with them right away. There isn't a
> conscious "switch" which is thrown by anyone. This is essentially
> the same way we treat Ubuntu LTS point releases as well. If this is
> _not_ the way RHEL/CentOS are intended to be consumed (i.e. just
> upgrade to and run the latest packages available for a given major
> release series) then we should perhaps take a step back and
> reevaluate this model.


I think you may be conflating the notion that ubuntu or rhel/cent can be
updated w/o any issues to applications that run atop of the distributions
with what it means to introduce a minor update into the upstream openstack
ci workflow.

If jobs could execute w/o a timeout, the tripleo jobs would not have gone
red.  Since we do have constraints in the upstream, like timeouts and
others, we have to prepare containers, images etc. to work efficiently in
the upstream.  For example, if our jobs had the time to yum update the
roughly 120 containers in play in each job, the tripleo jobs would have
just worked.  I am not advocating for not having timeouts or constraints on
jobs; however, I am saying this is an infra issue, not a distribution or
distribution support issue.

I think this is an important point to consider and I view it as mostly
unrelated to the support claims by the distribution.  Does that make sense?
Thanks




> For now we have some fairly deep-driven
> assumptions in that regard which are reflected in the Linux
> distributions support policy of our project testing interface as
> documented in OpenStack governance.
> --
> Jeremy Stanley
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-05-14 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400:
> Excerpts from Doug Hellmann's message of 2018-03-22 16:16:06 -0400:
> > Excerpts from Doug Hellmann's message of 2018-03-21 16:02:06 -0400:
> > > Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400:
> > > > 
> > > > TL;DR
> > > > -
> > > > 
> > > > Let's stop copying exact dependency specifications into all our
> > > > projects to allow them to reflect the actual versions of things
> > > > they depend on. The constraints system in pip makes this change
> > > > safe. We still need to maintain some level of compatibility, so the
> > > > existing requirements-check job (run for changes to requirements.txt
> > > > within each repo) will change a bit rather than going away completely.
> > > > We can enable unit test jobs to verify the lower constraint settings
> > > > at the same time that we're doing the other work.
> > > 
> > > The new job definition is in https://review.openstack.org/555034 and I
> > > have updated the oslo.config patch I mentioned before to use the new job
> > > instead of one defined in the oslo.config repo (see
> > > https://review.openstack.org/550603).
> > > 
> > > I'll wait for that job patch to be reviewed and approved before I start
> > > adding the job to a bunch of other repositories.
> > > 
> > > Doug
> > 
> > The job definition for openstack-tox-lower-constraints [1] was approved
> > today (thanks AJaegar and pabelenger).
> > 
> > I have started proposing the patches to add that job to the repos listed
> > in openstack/requirements/projects.txt using the topic
> > "requirements-stop-syncing" [2]. I hope to have the rest of those
> > proposed by the end of the day tomorrow, but since they have to run in
> > batches I don't know if that will be possible.
> > 
> > The patch to remove the update proposal job is ready for review [3].
> > 
> > As is the patch to allow project requirements to diverge by changing the
> > rules in the requirements-check job [4].
> > 
> > We ran into a snag with a few of the jobs for projects that rely on
> > having service projects installed. There have been a couple of threads
> > about that recently, but Monty has promised to start another one to
> > provide all of the necessary context so we can fix the issues and move
> > ahead.
> > 
> > Doug
> > 
> 
> All of the patches to define the lower-constraints test jobs have been
> proposed [1], and many have already been approved and merged (thank you
> for your quick reviews).
> 
> A few of the jobs are failing because the projects depend on installing
> some other service from source. We will work out what to do with those
> when we solve that problem in a more general way.
> 
> A few of the jobs failed because the dependencies were wrong. In a few
> cases I was able to figure out what was wrong, but I can use some help
> from project teams more familiar with the code bases to debug the
> remaining failures.
> 
> In a few cases projects didn't have python 3 unit test jobs, so I
> configured the new job to use python 2. Teams should add a step to their
> python 3 migration plan to update the version of python used in the new
> job, when that is possible.
> 
> I believe we are now ready to proceed with updating the
> requirements-check job to relax the rules about which changes are
> allowed [2].
> 
> Doug
> 
> [1] 
> https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open
> [2] https://review.openstack.org/555402

We still have about 50 open patches related to adding the
lower-constraints test job. I'll keep those open until the third
milestone of the Rocky development cycle, and then abandon the rest to
clear my gerrit view so it is usable again.

If you want to add lower-constraints tests to your project and have
an open patch in the list [1], please take it over and fix the
settings then approve the patch (the fix usually involves making
the values in lower-constraints.txt match the values in the various
requirements.txt files).
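
As a sketch of what that usually means (the package names and versions
below are only examples, not taken from any particular project): if a
project's requirements.txt declares

    oslo.config>=5.2.0
    six>=1.10.0

then lower-constraints.txt needs to pin exactly those minimums

    oslo.config==5.2.0
    six==1.10.0

so that the openstack-tox-lower-constraints job installs with those pins
and proves the declared minimums actually work.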

If you don't want the job, please leave a comment on the patch to
tell me and I will abandon it.

Doug

[1] https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thank you TryStack!!

2018-05-14 Thread Jimmy McArthur
Absolutely. Will take care of today. Thank you all again!!!

Thanks,
Jimmy McArthur 
512.965.4846


> On May 14, 2018, at 7:12 AM, Rain Leander  wrote:
> 
> Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still 
> denying posts / members and we're pretty much just being spammed these days. 
> Let me know either way, please.
> 
> Rain
> 
> 
>> On Mon, May 14, 2018 at 2:06 PM, Rain Leander  wrote:
>> Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still 
>> denying posts / members and we're pretty much just being spammed these days. 
>> Let me know either way, please.
>> 
>> Rain
>> 
>>> On Wed, May 2, 2018 at 4:46 PM, Jimmy McArthur  wrote:
>>> Just wanted to follow up on this.  trystack.openstack.org is now correctly 
>>> redirecting to the same place as trystack.org. 
>>> 
>>> Thanks,
>>> Jimmy
>>> 
 Jimmy McArthur April 30, 2018 at 1:01 PM
 OK - got it :)
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Jeremy Stanley April 30, 2018 at 12:29 PM
 [...]
 
 I was thrown by the fact that DNS currently has
 trystack.openstack.org as a CNAME alias for trystack.org, but
 reviewing logs on static.openstack.org it seems it may have
 previously pointed there (was receiving traffic up until around
 13:15 UTC today) so if you want to just glom that onto the current
 trystack.org redirect that may make the most sense and we can move
 forward tearing down the old infrastructure for it.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Jimmy McArthur April 30, 2018 at 12:10 PM
 Yeah... my only concern is that if traffic is actually getting there, a 
 redirect to the same place trystack.org is going might be helpful. 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Jeremy Stanley April 30, 2018 at 12:02 PM
 On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote:
 [...]
 [...]
 
 Since I don't think the trystack.o.o site ever found its way fully
 into production, it may make more sense for us to simply delete the
 records for it from DNS. Someone else probably knows more about the
 prior state of it than I though.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Jimmy McArthur April 30, 2018 at 11:07 AM
 
 
> Jeremy Stanley April 30, 2018 at 10:12 AM
> [...]
> 
> Yes, before the TryStack effort was closed down, there had been a
> plan for trystack.org to redirect to a trystack.openstack.org site
> hosted in the community infrastructure.
 When we talked to trystack we agreed to redirect trystack.org to 
 https://openstack.org/software/start since that presents alternative 
 options for people to "try openstack".  My suggestion would be to redirect 
 trystack.openstack.org to the same spot, but certainly open to other 
 suggestions :)
> At this point I expect we
> can just rip out the section for it from
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp
> as DNS appears to no longer be pointed there.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Jimmy McArthur April 30, 2018 at 9:34 AM
> I'm working on redirecting trystack.openstack.org to 
> openstack.org/software/start.  We have redirects in place for 
> trystack.org, but didn't realize trystack.openstack.org as a thing as 
> well.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-14 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900:
> 2018年5月12日(土) 3:04 Doug Hellmann :
> 
> > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> > > Hi zigo and horizon plugin maintainers,
> > >
> > > Horizon itself already supports Django 2.0 and horizon unit test covers
> > > Django 2.0 with Python 3.5.
> > >
> > > A question to all is whether we change the upper bound of Django from
> > <2.0
> > > to <2.1.
> > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> > > (Note that Django 1.11 will continue to be used for python 2.7
> > environment.)
> >
> > Do we need to cap it at all? We've been trying to express our
> > dependencies without caps and rely on the constraints list to
> > test using a common version because this offers the most flexibility as
> > we move to newer versions over time.
> >
> 
> The main reason we cap django version so far is that django minor version
> releases
> contain some backward incompatible changes and also drop deprecated
> features.
> A new django minor version release like 1.11 usually breaks horizon and
> plugins
> as horizon developers are not always checking django deprecations.

OK. Having the cap in place makes it more complicated to test
upgrading, and then upgrade. Because we no longer synchronize
requirements, changing openstack/requirements does not trigger the
bot to propose the same change to all of the projects using the
dependency. Someone will have to do that by hand in the future, as we
are doing with eventlet right now
(https://review.openstack.org/#/q/topic:uncap-eventlet).

Without the cap, we can test the upgrade by proposing a constraint
update and running the horizon (and/or plugin) unit tests. When those
tests pass, we can then step forward all at once by approving the
constraint change.

> 
> I have a question on uncapping the django version.
> How can users/operators know which versions are supported?
> Do they need to check upper-constraints.txt?

We do tell downstream consumers that the upper-constraints.txt file is
the set of things we test with, and that any other combination of
packages would need to be tested on their systems separately.
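
To make that concrete (version numbers below are purely illustrative), the
idea is that a project's requirements.txt only expresses the minimum it
needs, e.g.

    Django>=1.11

while the version we actually test with is pinned in upper-constraints.txt
in openstack/requirements, e.g.

    Django===1.11.12

so moving to a new Django becomes a single constraint-update review that we
can run the horizon and plugin unit tests against before everyone switches.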

> 
> > > There are several points we should consider:
> > > - If we change it in global-requirements.txt, it means Django 2.0 will be
> > > used for python3.5 environment.
> > > - Not a small number of horizon plugins still do not support Django 2.0,
> > so
> > > bumping the upper bound to <2.1 will break their py35 tests.
> > > - From my experience of Django 2.0 support in some plugins, the required
> > > changes are relatively simple like [1].
> > >
> > > I created an etherpad page to track Django 2.0 support in horizon
> > plugins.
> > > https://etherpad.openstack.org/p/django20-support
> > >
> > > I proposed Django 2.0 support patches to several projects which I think
> > are
> > > major.
> > > # Do not blame me if I don't cover your project :)
> > >
> > > Thought?
> >
> > It seems like a good goal for the horizon-plugin author community
> > to bring those projects up to date by supporting a current version
> > of Django (and any other dependencies), especially as we discuss
> > the impending switch over to python-3-first and then python-3-only.
> >
> 
> Yes, python 3 support is an important topic.
> We also need to switch the default python version in mod_wsgi in DevStack
> environment sooner or later.

Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor
the variable that tells devstack to use Python 3?

> 
> > If this is an area where teams need help, updating that etherpad
> > with notes and requests for assistance will help us split up the
> > work.
> >
> 
> Each team can help testing in Django 2.0 and/or python 3 support.
> We need to enable corresponding server projects in development environments,
> but it is not easy to setup all projects by horizon team. Individual
> projects must be
> more familiar with their own projects.
> I sent several patches,  but I actually tested them by unit tests.
> 
> Thanks,
> Akihiro
> 
> >
> > Doug
> >
> > >
> > > Thanks,
> > > Akihiro
> > >
> > > [1] https://review.openstack.org/#/c/566476/
> > >
> > > 2018年5月8日(火) 17:45 Thomas Goirand :
> > >
> > > > Hi,
> > > >
> > > > It has been decided that, in Debian, we'll switch to Django 2.0 after
> > > > Buster will be released. Buster is to be frozen next February. This
> > > > means that we have roughly one more year before Django 1.x goes away.
> > > >
> > > > Hopefully, Horizon will be ready for it, right?
> > > >
> > > > Hoping this helps,
> > > > Cheers,
> > > >
> > > > Thomas Goirand (zigo)
> > > >
> > > >
> > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > 

Re: [openstack-dev] openstack-dev] [nova] Cannot live migrattion, because error:libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_lo

2018-05-14 Thread Chris Friesen

On 05/13/2018 09:23 PM, 何健乐 wrote:

Hi, all
When I did live-migration, I met the following error:

    result = proxy_call(self._autowrap, f, *args, **kwargs)

    May 14 10:33:11 nova-compute[981335]: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    May 14 10:33:11 nova-compute[981335]: if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
    May 14 10:33:11 nova-compute[981335]: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local



Is there anyone who has a solution for this problem?
Thanks



Can you run "virsh capabilities" and provide the "cpu" section for both the 
source and dest compute nodes?  Can you also provide the "cpu_mode", 
"cpu_model", and "cpu_model_extra_flags" options from the "libvirt" section of 
/etc/nova/nova.conf on both compute nodes?
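
(For reference, these options live in the [libvirt] section of nova.conf
on each compute node. A purely illustrative example, not a recommendation,
would be:

    [libvirt]
    cpu_mode = custom
    cpu_model = Broadwell
    cpu_model_extra_flags = pcid

With cpu_mode=host-model or host-passthrough the guest CPU definition is
derived from the source host's features, which is why a feature that
libvirt reports on the source but not on the destination can block a live
migration.)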


Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thank you TryStack!!

2018-05-14 Thread Rain Leander
Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still
denying posts / members and we're pretty much just being spammed these
days. Let me know either way, please.

Rain


On Mon, May 14, 2018 at 2:06 PM, Rain Leander  wrote:

> Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still
> denying posts / members and we're pretty much just being spammed these
> days. Let me know either way, please.
>
> Rain
>
> On Wed, May 2, 2018 at 4:46 PM, Jimmy McArthur 
> wrote:
>
>> Just wanted to follow up on this.  trystack.openstack.org is now
>> correctly redirecting to the same place as trystack.org.
>>
>> Thanks,
>> Jimmy
>>
>> Jimmy McArthur 
>> April 30, 2018 at 1:01 PM
>> OK - got it :)
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jeremy Stanley 
>> April 30, 2018 at 12:29 PM
>> [...]
>>
>> I was thrown by the fact that DNS currently has
>> trystack.openstack.org as a CNAME alias for trystack.org, but
>> reviewing logs on static.openstack.org it seems it may have
>> previously pointed there (was receiving traffic up until around
>> 13:15 UTC today) so if you want to just glom that onto the current
>> trystack.org redirect that may make the most sense and we can move
>> forward tearing down the old infrastructure for it.
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jimmy McArthur 
>> April 30, 2018 at 12:10 PM
>> Yeah... my only concern is that if traffic is actually getting there, a
>> redirect to the same place trystack.org is going might be helpful.
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jeremy Stanley 
>> April 30, 2018 at 12:02 PM
>> On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote:
>> [...]
>> [...]
>>
>> Since I don't think the trystack.o.o site ever found its way fully
>> into production, it may make more sense for us to simply delete the
>> records for it from DNS. Someone else probably knows more about the
>> prior state of it than I though.
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jimmy McArthur 
>> April 30, 2018 at 11:07 AM
>>
>>
>> Jeremy Stanley 
>> April 30, 2018 at 10:12 AM
>> [...]
>>
>> Yes, before the TryStack effort was closed down, there had been a
>> plan for trystack.org to redirect to a trystack.openstack.org site
>> hosted in the community infrastructure.
>>
>> When we talked to trystack we agreed to redirect trystack.org to
>> https://openstack.org/software/start since that presents alternative
>> options for people to "try openstack".  My suggestion would be to redirect
>> trystack.openstack.org to the same spot, but certainly open to other
>> suggestions :)
>>
>> At this point I expect we
>> can just rip out the section for it from
>> https://git.openstack.org/cgit/openstack-infra/system-config
>> /tree/modules/openstack_project/manifests/static.pp
>> as DNS appears to no longer be pointed there.
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jimmy McArthur 
>> April 30, 2018 at 9:34 AM
>> I'm working on redirecting trystack.openstack.org to
>> openstack.org/software/start.  We have redirects in place for
>> trystack.org, but didn't realize trystack.openstack.org as a thing as
>> well.
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Paul Belanger 
>> April 30, 2018 at 9:23 AM
>> The code is hosted by 

Re: [openstack-dev] [kolla] Building Kolla containers with 3rd party vendor drivers

2018-05-14 Thread Paul Bourke
> Operators that need one or more of these "additional drivers" will be provided
> with documentation on how the code in the "additional drivers" path can be
> used to build their own containers. This documentation will also detail how
> to combine more than one 3rd party drivers into their own container.

Yes this sounds fine. We already have a 'contrib' directory [0], so I 
think this would align with what you're suggesting.


-Paul

[0] https://github.com/openstack/kolla/tree/master/contrib

On 11/05/18 18:02, Sandhya Dasu (sadasu) wrote:

Hi Paul,
 I am happy to use the changes you proposed to
  https://github.com/openstack/kolla/blob/master/kolla/common/config.py

I was under the impression that this was disallowed for drivers that weren’t
considered “reference drivers”. If that is no longer the case, I am happy to go
this route and abandon the approach I took in my diffs in:
https://review.openstack.org/#/c/552119/.

I agree with the reasoning that Kolla cannot possibly maintain a large
number of neutron-server containers, one per plugin.

To support operators that want to build their own images, I was hoping that
we could come up with a mechanism by which the 3rd party driver owners
provide the code (template-override.j2 or Dockerfile.j2 as the case maybe)
to build their containers. This code can definitely live out-of-tree with the
drivers themselves.
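
As a rough sketch of what a vendor could ship (untested, and the exact
block name has to match what the image's Dockerfile.j2 actually defines),
a template-override.j2 might look like:

    {% extends parent_template %}

    {% block neutron_server_footer %}
    RUN pip install networking-cisco
    {% endblock %}

and an operator would then build their own image with something like:

    kolla-build --template-override template-override.j2 neutron-server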

Optionally, we could have them reside in-tree in Kolla in a separate directory,
say “additional drivers”. Kolla will not be responsible for building a container
per driver or for building a huge (neutron-server) container containing all
interested drivers.

Operators that need one or more of these “additional drivers” will be provided
with documentation on how the code in the “additional drivers” path can be
used to build their own containers. This documentation will also detail how
to combine more than one 3rd party drivers into their own container.

I would like the community’s input on what approach best aligns with Kolla’s
and the larger OpenStack community’s goals.

Thanks,
Sandhya

On 5/11/18, 5:35 AM, "Paul Bourke"  wrote:

 Hi Sandhya,
 
 Thanks for starting this thread. I've moved it to the mailing list so

 the discussion can be available to anyone else who is interested, I hope
 you don't mind.
 
 If your requirement is to have third party plugins (such as Cisco) that

 are not available on tarballs.openstack.org, available in Kolla, then
 this is already possible.
 
 Using the Cisco case as an example, you would simply need to submit the

 following patch to
 https://github.com/openstack/kolla/blob/master/kolla/common/config.py
 
 """

  'neutron-server-plugin-networking-cisco': {
  'type': 'git',
  'location': ('https://github.com/openstack/networking-cisco')},
 """
 
 This will then include that plugin as part of the future neutron-server

 builds.
 
 If the requirement is to have Kolla publish a neutron-server container

 with *only* the Cisco plugin, then this is where it gets a little more
 tricky. Sure, we can go the route that's proposed in your patch, but we
 end up then maintaining a massive number of neutron-server containers,
 one per plugin. It also does not address then the issue of what people
 want to do when they want a combination or mix of plugins together.
 
 So right now I feel Kolla takes a middle ground, where we publish a

 neutron-server container with a variety of common plugins. If operators
 have specific requirements, they should create their own config file and
 build their own images, which we expect any serious production setup to
 be doing anyway.
 
 -Paul
 
 On 10/05/18 18:12, Sandhya Dasu (sadasu) wrote:

 > Yes, I think there is some misunderstanding on what I am trying to 
accomplish here.
 >
 > I am utilizing existing Kolla constructs to prove that they work for 3rd 
party out of tree vendor drivers too.
 > At this point, anything that a 3rd party vendor driver does (the way 
they build their containers, where they publish it and how they generate config) 
is completely out of scope of Kolla.
 >
 > I want to use the spec as a place to articulate and discuss best 
practices and figure out what part of supporting 3rd party vendor drivers can stay 
within the Kolla tree and what should be out.
 > I have witnessed many discussions on this topic but the only take away
I get is "there are ways to do it but it can't be part of Kolla".
 >
 > Using the existing kolla constructs of template-override, plugin-archive 
and config-dir, let us say the 3rd party vendor builds a container.
 > OpenStack TC does not want these containers to be part of 
tarballs.openstack.org. Kolla publishes its containers to DockerHub under the 
Kolla project.
 > If these 3rd party 

Re: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-14 Thread Balázs Gibizer



On Mon, May 14, 2018 at 11:49 AM, Balázs Gibizer 
 wrote:



On Thu, May 10, 2018 at 8:48 PM, Dan Smith  wrote:


Personally, I'd just make the offending tests shut up about the 
warning
and move on, but I'm also okay with the above solution if people 
prefer.


I think that was Takashi's first suggestion as well. As in this 
particular case the value stored in the field is still a UUID, just 
not in the canonical format, I think it is reasonable to silence the 
warning for these 3 tests.




I proposed a patch to suppress those warnings: 
https://review.openstack.org/#/c/568263


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-14 Thread Akihiro Motoki
2018年5月12日(土) 3:04 Doug Hellmann :

> Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> > Hi zigo and horizon plugin maintainers,
> >
> > Horizon itself already supports Django 2.0 and horizon unit test covers
> > Django 2.0 with Python 3.5.
> >
> > A question to all is whether we change the upper bound of Django from
> <2.0
> > to <2.1.
> > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> > (Note that Django 1.11 will continue to be used for python 2.7
> environment.)
>
> Do we need to cap it at all? We've been trying to express our
> dependencies without caps and rely on the constraints list to
> test using a common version because this offers the most flexibility as
> we move to newer versions over time.
>

The main reason we have capped the django version so far is that django
minor version releases contain some backward-incompatible changes and also
drop deprecated features.
A new django minor version release like 1.11 usually breaks horizon and
plugins, as horizon developers are not always checking django deprecations.

I have a question on uncapping the django version.
How can users/operators know which versions are supported?
Do they need to check upper-constraints.txt?



> > There are several points we should consider:
> > - If we change it in global-requirements.txt, it means Django 2.0 will be
> > used for python3.5 environment.
> > - Not a small number of horizon plugins still do not support Django 2.0,
> so
> > bumping the upper bound to <2.1 will break their py35 tests.
> > - From my experience of Django 2.0 support in some plugins, the required
> > changes are relatively simple like [1].
> >
> > I created an etherpad page to track Django 2.0 support in horizon
> plugins.
> > https://etherpad.openstack.org/p/django20-support
> >
> > I proposed Django 2.0 support patches to several projects which I think
> are
> > major.
> > # Do not blame me if I don't cover your project :)
> >
> > Thought?
>
> It seems like a good goal for the horizon-plugin author community
> to bring those projects up to date by supporting a current version
> of Django (and any other dependencies), especially as we discuss
> the impending switch over to python-3-first and then python-3-only.
>

Yes, python 3 support is an important topic.
We also need to switch the default python version in mod_wsgi in DevStack
environment sooner or later.


> If this is an area where teams need help, updating that etherpad
> with notes and requests for assistance will help us split up the
> work.
>

Each team can help with testing Django 2.0 and/or python 3 support.
We need to enable the corresponding server projects in development
environments, but it is not easy for the horizon team to set up all
projects. Individual project teams are more familiar with their own
projects.
I sent several patches, but so far I have only tested them with unit tests.

Thanks,
Akihiro


>
> Doug
>
> >
> > Thanks,
> > Akihiro
> >
> > [1] https://review.openstack.org/#/c/566476/
> >
> > 2018年5月8日(火) 17:45 Thomas Goirand :
> >
> > > Hi,
> > >
> > > It has been decided that, in Debian, we'll switch to Django 2.0 after
> > > Buster will be released. Buster is to be frozen next February. This
> > > means that we have roughly one more year before Django 1.x goes away.
> > >
> > > Hopefully, Horizon will be ready for it, right?
> > >
> > > Hoping this helps,
> > > Cheers,
> > >
> > > Thomas Goirand (zigo)
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-14 Thread Balázs Gibizer



On Thu, May 10, 2018 at 8:48 PM, Dan Smith  wrote:
 The oslo UUIDField emits a warning if the string used as a field value
 does not pass the validation of the uuid.UUID(str(value)) call
 [3]. All the offending places are fixed in nova except the nova-manage
 cell_v2 map_instances call [1][2]. That call uses markers in the DB
 that are not valid UUIDs.


No, that call uses markers in the DB that don't fit the canonical string
representation of a UUID that the oslo library is looking for. There are
many ways to serialize a UUID:

https://en.wikipedia.org/wiki/Universally_unique_identifier#Format

The 8-4-4-4-12 format is one of them (and the most popular). Changing
the dashes to spaces does not make it not a UUID, it makes it not the
same _string_ and it's done (for better or worse) in the aforementioned
code to skirt the database's UUID-ignorant _string_ uniqueness
constraint.


You are right, this is oslo specific. I think this weakens the severity 
of the warning in this particular case.





 If we could fix this last offender then we could merge the patch [4]
 that changes this warning to an exception in the nova tests to
 avoid such future rule violations.

 However I'm not sure it is easy to fix. Replacing
 'INSTANCE_MIGRATION_MARKER' at [1] with
 '----' might work


The project_id field on the object is not a UUIDField, nor is it 36
characters in the database schema. It can't be because project ids are
not guaranteed to be UUIDs.


Correct. My bad. Then this does not cause any UUID warning.




 but I don't know what to do with instance_uuid.replace(' ', '-') [2]
 to make it a valid uuid. Also I think that if there is an unfinished
 mapping in the deployment and the marker is then changed in the code,
 that leads to inconsistencies.


IMHO, it would be bad to do anything that breaks people in the middle of
a mapping procedure. While I understand the desire to have fewer
spurious warnings in the test runs, I feel like doing anything to impact
the UX or performance of runtime code to make the unit test output
cleaner is a bad idea.


Thanks for confirming my original bad feelings about these kinds of
solutions.





 I'm open to any suggestions.


We already store values in this field that are not 8-4-4-4-12, and the
oslo field warning is just a warning. If people feel like we need to do
something, I propose we just do this:

https://review.openstack.org/#/c/567669/

It is one of those "we normally wouldn't do this with object schemas,
but we know this is okay" sort of situations.


Personally, I'd just make the offending tests shut up about the 
warning
and move on, but I'm also okay with the above solution if people 
prefer.


I think that was Takashi's first suggestion as well. As in this 
particular case the value stored in the field is still a UUID, just not 
in the canonical format, I think it is reasonable to silence the warning 
for these 3 tests.
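
Something along these lines in the affected tests should be enough (just a
sketch, not necessarily what the final patch will look like; the category
and message have to match what oslo.versionedobjects actually emits):

    import warnings

    with warnings.catch_warnings():
        warnings.filterwarnings(
            'ignore',
            message='.* is an invalid UUID.*',
            category=FutureWarning)
        # exercise the map_instances code that uses the
        # non-canonical marker here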


Thanks,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver

2018-05-14 Thread Thierry Carrez

Fox, Kevin M wrote:

[...]
Part of the disconnect to me has been that these questions have been left up to 
the projects by and large. But, users don't use the projects. Users use 
OpenStack. Or, moving forward, they at least use a Constellation. But 
Constellation is still just a documentation construct. Not really a first class 
entity.

Currently the isolation between the Projects and the thing that the users use, the 
Constellation, allows user needs to easily slip through the cracks. Because 
"Project X: we agree that is a problem, but it's project Y's problem. Project Y: we 
agree that is a problem, but it's project X's problem." No, seriously, it's OpenStack's 
problem. Most of the major issues I've hit in my many years of using OpenStack were in 
that category. And there wasn't a good forum for addressing them.

A related effect of the isolation is also that the projects don't work on the 
commons nor look around too much at what others are doing. Either within OpenStack 
or outside. They solve problems at the project level and say, look, I've solved 
it, but don't look at what happens when all the projects do that independently 
and push more work to the users. The end result of this lack of Leadership is 
more work for the users compared to competitors.
[...]


+1

Slicing development along component lines ("project teams") was a useful 
construct to absorb all the energy that was sent to OpenStack between 
2011 and 2016. But at our current stage (fewer resources, more users) I 
agree that that structure is no longer optimal.


I think we need to start thinking about ways to de-emphasize project 
teams (organizing work around code boundaries) and organize work around 
goals instead (across code boundaries). A bit like work in Kubernetes is 
tracked at SIG level, beyond code ownership. It's not an easy change, 
with project teams being so integral to our culture, but it is something 
we should start looking into.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Nominating Sundar as new core reviewer (reminder)

2018-05-14 Thread Zhipeng Huang
Hi team,

Since the meetbot did not function properly after our long running review
party, I would like to send out this email for archiving purposes about
Sundar's nomination to our core review team, which was discussed last
Wednesday.

As already stated during the team meeting last week, Sundar has been a
tremendous help on two critical specs in Rocky and has conducted great
inter-project discussions. He has been very active in team meetings despite
the time difference (7am on the west coast). He has also contributed a lot
to the k8s cyborg integration design. It would be great to have Sundar on
the core team to increase our bandwidth.

Please provide any feedback about this nomination before Wednesday this week :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Add Deepak Mourya to the core team

2018-05-14 Thread Shuai Zhao
+1, welcome mourya007 !

On Mon, May 14, 2018 at 12:30 PM, Hongbin Lu  wrote:

> Hi all,
>
> This is an announcement of the following change on the Zun core reviewers
> team:
>
> + Deepak Mourya (mourya007)
>
> Deepak has been actively involving in Zun for several months. He has
> submitted several code patches to Zun, all of which are useful features or
> bug fixes. In particular, I would like to highlight that he has
> implemented the availability zone API which is a significant contribution
> to the Zun feature set. Based on his significant contribution, I would like
> to propose him to become a core reviewer of Zun.
>
> This proposal has been voted within the existing core team and is
> unanimously approved. Welcome to the core team Deepak.
>
> Best regards,
> Hongbin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev