[openstack-dev] [all][release][infra] changing release template publish-to-pypi

2018-11-02 Thread Andreas Jaeger
Doug recently introduced the template publish-to-pypi-python3 and used
it for all official projects. It uses python3 only minimally (calling
setup.py sdist bdist_wheel), so it should work with any repo.

Thus, I pushed a series of changes up - see
https://review.openstack.org/#/q/topic:publish-pypi - to rename
publish-to-pypi-python3 back to publish-to-pypi, so that everybody uses
the same template and testing again.

Note that the publish-to-pypi-python3 template contained a test to check
whether the release job works; this new job is now run as well. It only
runs when one of the packaging files is updated, and it ensures that
uploading to PyPI will work. If this new job now fails, please fix
the fallout so that you can safely release next time.
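
If you want to check locally that your repo would pass, a rough local
equivalent of what the job exercises (my own sketch, not the Zuul job
definition; it assumes python3 and the wheel package are installed) is:

    # Hedged local check, roughly mirroring the minimal python3 usage of the
    # template: build an sdist and a wheel from the repository root.
    import subprocess
    import sys

    for target in ('sdist', 'bdist_wheel'):
        subprocess.check_call([sys.executable, 'setup.py', target])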

The first changes in this series are merging now - if this causes
problems and there is a strong need for an explicit python2 version,
please tell me.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-02 Thread Matt Riedemann

On 11/2/2018 2:22 PM, Eric Fried wrote:

Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing the "healing" aspect of
the resource tracker's periodic and trusting that placement won't
diverge from whatever's in our cache. (If it does, it's because the op
hit the CLI, in which case they should SIGHUP - see below.)

*except:
- When we initially create the compute node record and bootstrap its
resource provider.
- When the virt driver's update_provider_tree makes a change,
update_from_provider_tree reflects them in the cache as well as pushing
them back to placement.
- If update_from_provider_tree fails, the cache is cleared and gets
rebuilt on the next periodic.
- If you send SIGHUP to the compute process, the cache is cleared.

This should dramatically reduce the number of calls to placement from
the compute service. Like, to nearly zero, unless something is actually
changing.

Can I get some initial feedback as to whether this is worth polishing up
into something real? (It will probably need a bp/spec if so.)

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
[2]https://review.openstack.org/#/c/614886/

==
Background
==
In the Queens release, our friends at CERN noticed a serious spike in
the number of requests to placement from compute nodes, even in a
stable-state cloud. Given that we were in the process of adding a ton of
infrastructure to support sharing and nested providers, this was not
unexpected. Roughly, what was previously:

  @periodic_task:
  GET /resource_providers/$compute_uuid
  GET /resource_providers/$compute_uuid/inventories

became more like:

  @periodic_task:
  # In Queens/Rocky, this would still just return the compute RP
  GET /resource_providers?in_tree=$compute_uuid
  # In Queens/Rocky, this would return nothing
  GET /resource_providers?member_of=...&required=MISC_SHARES...
  for each provider returned above:  # i.e. just one in Q/R
  GET /resource_providers/$compute_uuid/inventories
  GET /resource_providers/$compute_uuid/traits
  GET /resource_providers/$compute_uuid/aggregates

In a cloud the size of CERN's, the load wasn't acceptable. But at the
time, CERN worked around the problem by disabling refreshing entirely.
(The fact that this seems to have worked for them is an encouraging sign
for the proposed code change.)

We're not actually making use of most of that information, but it sets
the stage for things that we're working on in Stein and beyond, like
multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
etc., so removing/reducing the amount of information we look at isn't
really an option strategically.


A few random points from the long discussion that should probably be 
re-posed here for wider thought:


* There was probably a lot of discussion about why we needed to do this 
caching and stuff in the compute in the first place. What has changed 
that we no longer need to aggressively refresh the cache on every 
periodic? I thought initially it came up because people really wanted 
the compute to be fully self-healing to any external changes, including 
hot plugging resources like disk on the host to automatically reflect 
those changes in inventory. Similarly, external user/service 
interactions with the placement API would then be automatically 
picked up by the next periodic run - is that no longer a desire, and/or 
how was the decision made previously that simply requiring a SIGHUP in 
that case wasn't sufficient/desirable?


* I believe I made the point yesterday that we should probably not 
refresh by default, and let operators opt in to that behavior if they 
really need it, i.e. if they are frequently making changes to the 
environment, potentially via some external service (I could see 
vCenter doing this to reflect changes from vCenter back into 
nova/placement). But I don't think that should be the assumed behavior 
for most, and our defaults should reflect the "normal" use case.


* I think I've noted a few times now that we don't actually use the 
provider aggregates information (yet) in the compute service. Nova host 
aggregate membership has been mirrored to placement since Rocky [1] but that 
happens in the API, not the compute. The only thing I can think of 
that relied on resource provider aggregate information in the compute is 
the shared storage providers concept, but that's not supported (yet) 
[2]. So do we need to keep retrieving aggregate information when nothing 
in compute uses it yet?


* Similarly, why do we need to get traits on each periodic? The only 
in-tree virt driver I'm aware of that *reports* traits is the libvirt 
driver for CPU features [3]. Otherwise I think the idea behind getting 

Re: [openstack-dev] [Openstack] DHCP not accessible on new compute node.

2018-11-02 Thread Torin Woltjer
I've completely wiped the node and reinstalled it, and the problem still 
persists. I can't ping instances on other compute nodes, or ping the DHCP 
ports. Instances don't get addresses or metadata when started on this node.
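
For reference, a rough openstacksdk sketch of the kind of check that can
confirm whether the DHCP agents are alive and the DHCP ports have addresses
(the cloud name is a placeholder for a clouds.yaml entry, not anything from
this thread):

    # Hedged sketch: list DHCP ports and DHCP agents to see whether the
    # agents are alive and the ports have fixed IPs.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder clouds.yaml entry

    for port in conn.network.ports(device_owner='network:dhcp'):
        print('DHCP port', port.id, port.status, port.fixed_ips)

    for agent in conn.network.agents(agent_type='DHCP agent'):
        print('DHCP agent on', agent.host, 'alive:', agent.is_alive)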

From: Marcio Prado 
Sent: 11/1/18 9:51 AM
To: torin.wolt...@granddial.com
Cc: openst...@lists.openstack.org
Subject: Re: [Openstack] DHCP not accessible on new compute node.
I believe you have not forgotten anything. This is probably a bug
...

As my cloud is not production, but rather for master's research, I
live-migrate the VM to a node that is working, restart it, and after that
I migrate it back to the original node that was not working, and it keeps
running ...

On 30-10-2018 17:50, Torin Woltjer wrote:
> Interestingly, I created a brand new selfservice network and DHCP
> doesn't work on that either. I've followed the instructions in the
> minimal setup (excluding the controllers as they're already set up)
> but the new node has no access to the DHCP agent in neutron it seems.
> Is there a likely component that I've overlooked?
>
> _TORIN WOLTJER_
>
> GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY
>
> 616.776.1066 EXT. 2006
> _WWW.GRANDDIAL.COM [1]_
>
> -
>
> FROM: "Torin Woltjer"
> SENT: 10/30/18 10:48 AM
> TO: , "openst...@lists.openstack.org"
>
> SUBJECT: Re: [Openstack] DHCP not accessible on new compute node.
>
> I deleted both DHCP ports and they recreated as you said. However,
> instances are still unable to get network addresses automatically.
>
> _TORIN WOLTJER_
>
> GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY
>
> 616.776.1066 EXT. 2006
> _ [1] [1]WWW.GRANDDIAL.COM [1]_
>
> -
>
> FROM: Marcio Prado
> SENT: 10/29/18 6:23 PM
> TO: torin.wolt...@granddial.com
> SUBJECT: Re: [Openstack] DHCP not accessible on new compute node.
> The port is recreated automatically. The problem, like I said, is not in
> DHCP, but for some reason deleting the port and waiting for OpenStack to
> re-create it often solves the problem.
>
> Please, if you do find out the actual problem, let me know. I'm very
> interested to know.
>
> You can delete the port without fear. OpenStack will recreate it in a
> short time.
>
> Links:
> --
> [1] http://www.granddial.com

--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-02 Thread Eric Fried
All-

Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing the "healing" aspect of
the resource tracker's periodic and trusting that placement won't
diverge from whatever's in our cache. (If it does, it's because the op
hit the CLI, in which case they should SIGHUP - see below.)

*except:
- When we initially create the compute node record and bootstrap its
resource provider.
- When the virt driver's update_provider_tree makes a change,
update_from_provider_tree reflects them in the cache as well as pushing
them back to placement.
- If update_from_provider_tree fails, the cache is cleared and gets
rebuilt on the next periodic.
- If you send SIGHUP to the compute process, the cache is cleared.

This should dramatically reduce the number of calls to placement from
the compute service. Like, to nearly zero, unless something is actually
changing.
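
To make that concrete, here is a rough standalone sketch - not the actual
nova report client code, just simplified illustrative classes - of how the
refresh would be gated on the conf value:

    # Hedged sketch of gating provider-cache refresh on
    # [compute]resource_provider_association_refresh; 0 means "never refresh".
    import time

    class ProviderCacheEntry(object):
        def __init__(self, data):
            self.data = data
            self.refreshed_at = time.monotonic()

    class ReportClientSketch(object):
        def __init__(self, refresh_interval):
            # Mirrors the conf option discussed above.
            self.refresh_interval = refresh_interval
            self._cache = {}

        def _stale(self, entry):
            # With the proposed change, 0 disables staleness checks entirely,
            # so the periodic stops calling placement for refreshes.
            if self.refresh_interval == 0:
                return False
            return time.monotonic() - entry.refreshed_at > self.refresh_interval

        def get_provider(self, rp_uuid, fetch_from_placement):
            entry = self._cache.get(rp_uuid)
            if entry is None or self._stale(entry):
                entry = ProviderCacheEntry(fetch_from_placement(rp_uuid))
                self._cache[rp_uuid] = entry
            return entry.data

        def clear_cache(self):
            # Analogous to SIGHUP or a failed update_from_provider_tree.
            self._cache.clear()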

Can I get some initial feedback as to whether this is worth polishing up
into something real? (It will probably need a bp/spec if so.)

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
[2] https://review.openstack.org/#/c/614886/

==
Background
==
In the Queens release, our friends at CERN noticed a serious spike in
the number of requests to placement from compute nodes, even in a
stable-state cloud. Given that we were in the process of adding a ton of
infrastructure to support sharing and nested providers, this was not
unexpected. Roughly, what was previously:

 @periodic_task:
 GET /resource_providers/$compute_uuid
 GET /resource_providers/$compute_uuid/inventories

became more like:

 @periodic_task:
 # In Queens/Rocky, this would still just return the compute RP
 GET /resource_providers?in_tree=$compute_uuid
 # In Queens/Rocky, this would return nothing
 GET /resource_providers?member_of=...&required=MISC_SHARES...
 for each provider returned above:  # i.e. just one in Q/R
 GET /resource_providers/$compute_uuid/inventories
 GET /resource_providers/$compute_uuid/traits
 GET /resource_providers/$compute_uuid/aggregates
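
For anyone who wants to poke at the cost of that pattern against a real
placement endpoint, here is a rough keystoneauth-based sketch (not nova's
report client; the credentials are placeholders and the sharing-provider
query is omitted):

    # Hedged sketch: issue the per-periodic GETs shown above and count them.
    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    placement = adapter.Adapter(session=session.Session(auth=auth),
                                service_type='placement', interface='internal',
                                default_microversion='1.14')  # needed for in_tree

    def refresh_associations(compute_uuid):
        """Return how many placement calls one refresh costs."""
        calls = 1
        providers = placement.get(
            '/resource_providers?in_tree=%s' % compute_uuid).json()
        # The member_of=...&required=MISC_SHARES... query is omitted here;
        # it would add one more call per periodic.
        for rp in providers.get('resource_providers', []):
            for sub in ('inventories', 'traits', 'aggregates'):
                placement.get('/resource_providers/%s/%s' % (rp['uuid'], sub))
                calls += 1
        return calls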

In a cloud the size of CERN's, the load wasn't acceptable. But at the
time, CERN worked around the problem by disabling refreshing entirely.
(The fact that this seems to have worked for them is an encouraging sign
for the proposed code change.)

We're not actually making use of most of that information, but it sets
the stage for things that we're working on in Stein and beyond, like
multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
etc., so removing/reducing the amount of information we look at isn't
really an option strategically.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching

2018-11-02 Thread Bogdan Dobrelya

Hello folks.
Here is an update for today. I created a draft [0] and spent some time 
setting up LaTeX with live-updating of the compiled PDF... The 
latter is only informational; if someone wants to contribute, please 
follow the instructions listed at the link (hint: you don't need any 
LaTeX experience, basic markdown knowledge should be enough!)


[0] 
https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors


On 10/31/18 6:54 PM, Ildiko Vancsa wrote:

Hi,

Thank you for sharing your proposal.

I think this is a very interesting topic with a list of possible solutions, some 
of which this group is also discussing. It would also be great to learn more 
about the IEEE activities and to gain experience with the process in this group 
on the way forward.

I personally do not have experience with IEEE conferences, but I’m happy to 
help with the paper if I can.

Thanks,
Ildikó




(added from the parallel thread)

On 2018. Oct 31., at 19:11, Mike Bayer  wrote:

On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya  wrote:


(cross-posting openstack-dev)

Hello.
[tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data
consistency requirements and challenges", a position paper [0] (the paper
submission deadline is Nov 8).

The problem scope is synchronizing control plane and/or
deployment-specific data (not necessarily limited to OpenStack) across
remote Edges and the central Edge and management site(s), including the
same aspects for overclouds and undercloud(s) in terms of TripleO, and
other deployment tools of your choice.

Another goal is to avoid ending up with different solutions for managing
Edge deployments and the control planes of edges - and for tenants as
well, if we think of tenants also doing Edge deployments based on Edge
Data Replication as a Service, say for Kubernetes/OpenShift on top of
OpenStack.

So the paper should name the outstanding problems, define data
consistency requirements and pose possible solutions for synchronization
and conflict resolution, supporting maximum-autonomy cases for isolated
sites with a capability to eventually catch up their distributed
state. That could be a global database [1], or something different perhaps
(see the causal real-time consistency model [2],[3]), or even git. And
probably more than that?.. (looking for ideas)



I can offer detail on whatever aspects of the "shared / global
database" idea.  The way we're doing it with Galera for now is all
about something simple and modestly effective for the moment, but it
doesn't have any of the hallmarks of a long-term, canonical solution,
because Galera is not well suited towards being present on many
(dozens) of endpoints. The concept that the StarlingX folks were
talking about, that of independent databases that are synchronized
using some kind of middleware, is potentially more scalable. However, I
think the best approach would be API-level replication, that is, you
have a bunch of Keystone services and there is a process that is
regularly accessing the APIs of these Keystone services and
cross-publishing state amongst all of them.   Clearly the big
challenge with that is how to resolve conflicts; I think the answer
would lie in the fact that the data being replicated would be of
limited scope and potentially consist of mostly or fully
non-overlapping records.

That is, I think "global database" is a cheap way to get what would be
more effective as asynchronous state synchronization between identity
services.
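
To illustrate the shape of that, here is a rough sketch under my own
assumptions (a skip-if-present merge policy, python-keystoneclient as the
API client, placeholder credentials and URLs) - not an agreed design:

    # Hedged sketch: cross-publish projects among several Keystone endpoints,
    # only creating records that are absent, i.e. assuming mostly
    # non-overlapping data as discussed above.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    def make_client(auth_url):
        auth = v3.Password(auth_url=auth_url, username='admin',
                           password='secret', project_name='admin',
                           user_domain_id='default', project_domain_id='default')
        return ks_client.Client(session=session.Session(auth=auth))

    def sync_projects(clients):
        # Union of all projects seen anywhere, keyed by (domain, name).
        seen = {}
        for c in clients:
            for p in c.projects.list():
                seen.setdefault((p.domain_id, p.name), p)
        # Publish anything missing to each endpoint; conflict resolution for
        # overlapping records would have to go here.
        for c in clients:
            local = {(p.domain_id, p.name) for p in c.projects.list()}
            for key, proj in seen.items():
                if key not in local:
                    c.projects.create(name=proj.name, domain=proj.domain_id,
                                      description=getattr(proj, 'description', None),
                                      enabled=proj.enabled)

    sync_projects([make_client(u) for u in
                   ('http://edge1:5000/v3', 'http://edge2:5000/v3')])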


Recently we’ve been also exploring federation with an IdP (Identity Provider) 
master: 
https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users

One of the pros is that it removes the need for synchronization and potentially 
increases scalability.

Thanks,
Ildikó




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone Team Update - Week of 29 October 2018

2018-11-02 Thread Colleen Murphy
# Keystone Team Update - Week of 29 October 2018

## News

### Berlin Summit

Somewhat quiet week as we've been getting into summit prep mode. We'll have two 
forum sessions, one for operator feedback[1] and one to discuss Keystone as an 
IdP Proxy[2]. We'll have our traditional project update[3] and project 
onboarding[4] along with many keystone-related talks from the 
team[5][6][7][8][9][10].

[1] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22792/keystone-operator-feedback
[2] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22791/keystone-as-an-identity-provider-proxy
[3] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22728/keystone-project-updates
[4] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22727/keystone-project-onboarding
[5] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22557/enforcing-quota-consistently-with-unified-limits
[6] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22320/towards-an-open-cloud-exchange
[7] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22044/pushing-keystone-over-the-edge
[8] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22607/a-seamlessly-federated-multi-cloud
[9] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21977/openstack-policy-101
[10] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21976/dynamic-policy-for-openstack-with-open-policy-agent

## Open Specs

Stein specs: https://bit.ly/2Pi6dGj

Ongoing specs: https://bit.ly/2OyDLTh

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT

We merged 31 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2PUk84S

There are 45 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Bugs

This week we opened 5 new bugs and closed 9.

Bugs opened (5) 
Bug #180 (keystone:Low) opened by Egor Panfilov 
https://bugs.launchpad.net/keystone/+bug/180 
Bug #1800651 (keystone:Undecided) opened by Ramon Heidrich 
https://bugs.launchpad.net/keystone/+bug/1800651 
Bug #1801095 (keystone:Undecided) opened by artem.v.vasilyev 
https://bugs.launchpad.net/keystone/+bug/1801095 
Bug #1801309 (keystone:Undecided) opened by wangxiyuan 
https://bugs.launchpad.net/keystone/+bug/1801309 
Bug #1801101 (keystoneauth:Undecided) opened by Egor Panfilov 
https://bugs.launchpad.net/keystoneauth/+bug/1801101 

Bugs closed (5) 
Bug #1553224 (keystone:Wishlist) 
https://bugs.launchpad.net/keystone/+bug/1553224 
Bug #1642988 (keystone:Wishlist) 
https://bugs.launchpad.net/keystone/+bug/1642988 
Bug #1710329 (keystone:Undecided) 
https://bugs.launchpad.net/keystone/+bug/1710329 
Bug #1713574 (keystoneauth:Undecided) 
https://bugs.launchpad.net/keystoneauth/+bug/1713574 
Bug #1801101 (keystoneauth:Undecided) 
https://bugs.launchpad.net/keystoneauth/+bug/1801101 

Bugs fixed (4) 
Bug #1788415 (keystone:High) fixed by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1788415 
Bug #1797876 (keystone:High) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/keystone/+bug/1797876 
Bug #1798716 (keystone:Low) fixed by wangxiyuan 
https://bugs.launchpad.net/keystone/+bug/1798716 
Bug #1800017 (keystonemiddleware:Medium) fixed by Guang Yee 
https://bugs.launchpad.net/keystonemiddleware/+bug/1800017

## Milestone Outlook

https://releases.openstack.org/stein/schedule.html

Our spec proposal freeze for Stein was two weeks ago, so barring extraordinary 
circumstances we'll be working on refining our remaining three Stein specs for 
the spec freeze after the new year.

## Shout-outs

Thanks Nathan Kinder for all the ldappool fixes!

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and 
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-02 Thread James Slagle
On Fri, Nov 2, 2018 at 9:39 AM Dan Prince  wrote:
>
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word out.
>
> The first is 'deploy_steps_tasks' which gives you a means to run
> Ansible snippets on each node/role in a stepwise fashion during
> deployment. Previously it was only possible to execute puppet or
> docker commands, whereas now that we have deploy_steps_tasks we can
> execute ad-hoc ansible in the same manner.
>
> The second is 'external_deploy_tasks' which allows you to run
> Ansible snippets on the Undercloud during stepwise deployment. This is
> probably most useful for driving an external installer but might also
> help with some complex tasks that need to originate from a single
> Ansible client.

+1


> The only downside I see to these approaches is that both appear to be
> implemented with Ansible's default linear strategy. I saw shardy's
> comment here [2] that the :free strategy does not yet apparently work
> with the any_errors_fatal option. Perhaps we can reach out to someone
> in the Ansible community in this regard to improve running these
> things in parallel like TripleO used to work with Heat agents.

It's effectively parallel across one role at a time at the moment, up
to the number of configured forks (default: 25). The reason it won't
parallelize across roles is that a different task file is used
with import_tasks for each role. Ansible won't run that in parallel
since the task list is different.

I was able to make this parallel across roles for the pre and post
deployments by making the task file the same for each role, and
controlling the difference with group and host vars:
https://review.openstack.org/#/c/574474/
From Ansible's perspective, the task list is now the same for each
host, although different things will be done depending on the value of
vars for each host.

It's possible a similar approach could be done with the other
interfaces you point out here.

In addition to the any_errors_fatal issue when using strategy:free,
you'd also lose the grouping of the task output per role after
each task finishes. This is mostly cosmetic, but using free does
create a lot noisier output IMO.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Diversity and Inclusion Survey

2018-11-02 Thread Amy Marrich
The Diversity and Inclusion WG is still looking for your assistance in
reaching and including data from as many members of our community as
possible.

We revised the Diversity Survey that was originally distributed to the
Community in the Fall of 2015 and reached out in August with our new
survey.  We are looking to update our view of the OpenStack community and
its diversity. We are pleased to be working with members of the CHAOSS
project who have signed confidentiality agreements in order to assist us in
the following ways:

1) Assistance in analyzing the results
2) Feeding the results into the CHAOSS software and metrics development
work so that we can help other Open Source projects

Please take the time to fill out the survey and share it with others in the
community. The survey can be found at:

https://www.surveymonkey.com/r/OpenStackDiversity

Thank you for assisting us in this important task! Please feel free to
reach out to me via email, in Berlin, or to me or any WG member in
#openstack-diversity!

Amy Marrich (spotz)
Diversity and Inclusion Working Group Chair
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-02 Thread Eric Harney

On 11/1/18 4:44 PM, Jay Bryant wrote:

On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:


Hi,all

  Recently, I have been using the NFS driver as the cinder-backup backend. When I
use it to back up a volume snapshot, the result is a
NotImplementedError [1], and nfs.py doesn't have a
create_volume_from_snapshot function. Does the community plan to implement
it for NFS as the cinder-backup backend? Can you tell me about
this? Thank you very much!

Rambo,


The NFS driver doesn't have full snapshot support. I am not sure whether that
missing function was an oversight or not. I would reach out to Eric Harney,
as he implemented that code.

Jay



create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.
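
Roughly, the relationship looks like the sketch below; the class and method
names are simplified stand-ins, not the exact cinder hierarchy:

    # Hedged sketch: snapshot-related methods live once in a shared remotefs
    # base class, and the NFS driver inherits them rather than defining them
    # in nfs.py itself.
    class RemoteFSSnapDriverBase(object):
        def create_volume_from_snapshot(self, volume, snapshot):
            # Shared backing-file logic would live here.
            return self._copy_snapshot_to_volume(volume, snapshot)

        def _copy_snapshot_to_volume(self, volume, snapshot):
            raise NotImplementedError  # backend-specific plumbing

    class NfsDriver(RemoteFSSnapDriverBase):
        # No create_volume_from_snapshot here: it is inherited from the base
        # class, which is why it does not appear in nfs.py.
        pass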


But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?






[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142








Best Regards
Rambo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] StoryBoard Forum Session: Remaining Blockers

2018-11-02 Thread Matt Riedemann

On 11/1/2018 7:22 PM, Kendall Nelson wrote:
We've made a lot of progress in StoryBoard-land over the last couple of 
releases cleaning up bugs, fixing UI annoyances, and adding features 
that people have requested. All along we've also continued to migrate 
projects as they've become unblocked. While there are still a few 
blockers on our to-do list, we want to make sure our list is complete[1].


We have a session at the upcoming forum to collect any remaining 
blockers that you may have encountered while messing around with the dev 
storyboard[2] site or using the real storyboard interacting with 
projects that have already migrated. If you encountered any issues that  
are blocking your project from migrating, please come share them 
with us[3]. Hope to see you there!


-Kendall (diablo_rojo) & the StoryBoard team

[1] https://storyboard.openstack.org/#!/worklist/493 



I'm not sure why/how, but you seem to have an encoded URL for this [1] 
link, which, when I used it, redirected me to my own dashboard in 
storyboard. The real link, 
https://storyboard.openstack.org/#!/worklist/493, does work though. Just 
FYI for anyone else having the same problem.



[2] https://storyboard-dev.openstack.org/
[3] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22839/storyboard-migration-the-remaining-blockers 




--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-02 Thread Juan Antonio Osorio Robles
Thanks! We have been slow to update our docs. I did put up a blog post
about these sections of the templates [1], in case folks find that useful.


[1] http://jaormx.github.io/2018/dissecting-tripleo-service-templates-p2/

On 11/2/18 3:39 PM, Dan Prince wrote:
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word out.
>
> The first is 'deploy_steps_tasks' which gives you a means to run
> Ansible snippets on each node/role in a stepwise fashion during
> deployment. Previously it was only possible to execute puppet or
> docker commands, whereas now that we have deploy_steps_tasks we can
> execute ad-hoc ansible in the same manner.
>
> The second is 'external_deploy_tasks' which allows you to run
> Ansible snippets on the Undercloud during stepwise deployment. This is
> probably most useful for driving an external installer but might also
> help with some complex tasks that need to originate from a single
> Ansible client.
>
> The only downside I see to these approaches is that both appear to be
> implemented with Ansible's default linear strategy. I saw shardy's
> comment here [2] that the :free strategy does not yet apparently work
> with the any_errors_fatal option. Perhaps we can reach out to someone
> in the Ansible community in this regard to improve running these
> things in parallel like TripleO used to work with Heat agents.
>
> This is also how host_prep_tasks is implemented which BTW we should
> now get rid of as a duplicate architectural step since we have
> deploy_steps_tasks anyway.
>
> [1] https://review.openstack.org/#/c/614822/
> [2] 
> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-02 Thread Dan Prince
I pushed a patch[1] to update our containerized deployment
architecture docs yesterday. There are 2 new fairly useful sections we
can leverage with TripleO's stepwise deployment. They appear to be
used somewhat sparingly so I wanted to get the word out.

The first is 'deploy_steps_tasks' which gives you a means to run
Ansible snippets on each node/role in a stepwise fashion during
deployment. Previously it was only possible to execute puppet or
docker commands, whereas now that we have deploy_steps_tasks we can
execute ad-hoc ansible in the same manner.

The second is 'external_deploy_tasks' which allows you to run
Ansible snippets on the Undercloud during stepwise deployment. This is
probably most useful for driving an external installer but might also
help with some complex tasks that need to originate from a single
Ansible client.

The only downside I see to these approaches is that both appear to be
implemented with Ansible's default linear strategy. I saw shardy's
comment here [2] that the :free strategy does not yet apparently work
with the any_errors_fatal option. Perhaps we can reach out to someone
in the Ansible community in this regard to improve running these
things in parallel like TripleO used to work with Heat agents.

This is also how host_prep_tasks is implemented which BTW we should
now get rid of as a duplicate architectural step since we have
deploy_steps_tasks anyway.

[1] https://review.openstack.org/#/c/614822/
[2] 
http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs?

2018-11-02 Thread Ken Giusti
Hi,

There does seem to be something currently wonky with SSL &
oslo.messaging.   I'm looking into it now.

And there's this recently reported issue:

https://bugs.launchpad.net/oslo.messaging/+bug/1800957

In the above bug something seems to have broken SSL between ocata and
pike.  The current suspected change is a patch that fixed a threading issue.

Stay tuned...


On Thu, Nov 1, 2018 at 3:53 AM Thomas Goirand  wrote:

> On 10/31/18 2:40 PM, Mohammed Naser wrote:
> > For what it's worth: I ran into the same issue.  I think the problem
> > lies a bit deeper because it's a problem with kombu, as when debugging I saw
> > that Oslo messaging tried to connect and hung after.
> >
> > Sent from my iPhone
> >
> >> On Oct 31, 2018, at 2:29 PM, Thomas Goirand  wrote:
> >>
> >> Hi,
> >>
> >> It took me a long long time to figure out that my SSL setup was wrong
> >> when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
> >> (or heat itself) never warned me that something was wrong, I just got
> >> nothing working, and no log at all.
> >>
> >> I'm sure I wouldn't be the only one happy about having this type of
> >> problems being yelled out loud in the logs. Right now, it does work if I
> >> turn off SSL, though I'm still not sure what's wrong in my setup, and
> >> I'm given no clue if the issue is on rabbitmq-server or on the client
> >> side (ie: heat, in my current case).
> >>
> >> Just a wishlist... :)
> >> Cheers,
> >>
> >> Thomas Goirand (zigo)
>
> I've opened a bug here:
>
> https://bugs.launchpad.net/oslo.messaging/+bug/1801011
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][nova][openstack-ansible] About Rabbitmq warning problems on nova-compute side

2018-11-02 Thread Ken Giusti
Hi Gokhan,

There's been a flurry of folks reporting issues recently related to pike
and SSL.   See:

https://bugs.launchpad.net/oslo.messaging/+bug/1800957
and
https://bugs.launchpad.net/oslo.messaging/+bug/1801011

I'm currently working on this - no status yet.

As a test would it be possible to try disabling SSL in your configuration
to see if the problem persists?
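
If toggling the service config is disruptive, a quick standalone connection
check with kombu (the layer underneath oslo.messaging) can also help isolate
where it fails - a sketch only, with placeholder URLs and CA path:

    # Hedged sketch: try the broker with and without SSL to see which layer
    # fails. URLs, credentials and the CA bundle path are placeholders.
    import ssl
    from kombu import Connection

    def check(url, ssl_opts=None):
        try:
            with Connection(url, ssl=ssl_opts or False, connect_timeout=10) as conn:
                conn.connect()
                print('OK:', url)
        except Exception as exc:
            print('FAILED:', url, exc)

    check('amqp://user:pass@rabbit-host:5672//')
    check('amqps://user:pass@rabbit-host:5671//',
          ssl_opts={'ca_certs': '/etc/ssl/certs/ca.pem',
                    'cert_reqs': ssl.CERT_REQUIRED})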


On Thu, Nov 1, 2018 at 7:53 AM Gökhan IŞIK (BİLGEM BTE) <
gokhan.i...@tubitak.gov.tr> wrote:

> Hi folks,
>
> I have problems with rabbitmq on the nova-compute side. I see lots of
> warnings in the log file like "client unexpectedly closed TCP
> connection". [1]
>
> I have a HA OpenStack environment on ubuntu 16.04.5 which is installed
> with Openstack Ansible Project. My OpenStack environment version is Pike.
> My environment consists of 3 controller nodes, 23 compute nodes and 1 log
> node. Cinder volume service is installed on compute nodes and I am using
> NetApp Storage.
>
> I tried lots of configs on the nova side for oslo.messaging and rabbitmq,
> but I didn't resolve this problem. My latest configs are below:
>
> rabbitmq.config is : http://paste.openstack.org/show/733767/
>
> nova.conf is: http://paste.openstack.org/show/733768/
>
> Services versions are : http://paste.openstack.org/show/733769/
>
>
> Can you share your experiences on the rabbitmq side, and how can I solve these
> warnings on the nova-compute side? What would you advise?
>
>
> [1] http://paste.openstack.org/show/733766/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] update 18-44

2018-11-02 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-44.html

Good morning, it's placement update time.

# Most Important

Lately attention has been primarily on specs, database migration
tooling, and progress on documentation. These remain the important
areas.

# What's Changed

* [Placement docs](https://docs.openstack.org/placement/latest/)

* Upgrade-to-placement in deployment tooling
  [thread](http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html)

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16.
  +1.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 11.

# Specs

Progress continues on reviewing specs.

* Account for host agg allocation ratio in placement (Still in rocky/)

* Add subtree filter for GET /resource_providers

* Resource provider - request group mapping in allocation candidate

* VMware: place instances on resource pool (still in rocky/)

* Standardize CPU resource tracking

* Allow overcommit of dedicated CPU (Has an alternative which changes
  allocations to a float)

* Modelling passthrough devices for report to placement

* Spec: allocation candidates in tree

* Nova Cyborg interaction specification.

* supporting virtual NVDIMM devices

* Spec: Support filtering by forbidden aggregate

* Proposes NUMA topology with RPs

* Count quota based on resource class

* WIP: High Precision Event Timer (HPET) on x86 guests

* Add support for emulated virtual TPM

* Adds spec for instance live resize

* Provider config YAML file

# Main Themes

## Making Nested Useful

The nested allocations support has merged. That was the stuff that
was on this topic:

* 

There are some reshaper patches in progress.

* 

I suspect we need some real world fiddling with nested workloads to
have any real confidence with this stuff.

## Extraction

There continue to be three main tasks in regard to placement
extraction:

1. upgrade and integration testing
2. database schema migration and management
3. documentation publishing

Most of this work is now being tracked on a [new
etherpad](https://etherpad.openstack.org/p/placement-extract-stein-4).
If you're looking for something to do (either code or review), that
is a good place to look to find something.

The db-related work is getting very close, which will allow grenade
and devstack changes to merge.

# Other

Various placement changes out in the world.

* Improve handling of default allocation ratios

* Neutron minimum bandwidth implementation

* Add OWNERSHIP $SERVICE traits

* Puppet: Initial cookiecutter and import from nova::placement

* zun: Use placement for unified resource management

* Update allocation ratio when config changes

* Deal with root_id None in resource provider

* Use long rpc timeout in select_destinations

* Bandwith Resource Providers!

* Harden placement init under wsgi

* Using gabbi-tempest for integration tests.

* Make tox -ereleasenotes work

* placement: Add a doc describing a quick live environment

* Adding alembic environment

* Blazar using the placement-api

* Placement role for ansible project config

* hyperv bump

[openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova

2018-11-02 Thread Andreas Scheuring
Dear Nova Community,
I want to announce the new focal point for Nova s390x libvirt/kvm.

Please welcome "Cathy Zhang" to the Nova team. She and her team will be 
responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and any 
s390x specific code in nova and os-brick.
I personally took a new opportunity a few months ago already, but kept 
maintaining the CI as well as possible. With new manpower we can hopefully 
contribute more to the community again.

You can reach her via
* email: bjzhj...@linux.vnet.ibm.com
* IRC: Cathyz

Cathy, I wish you and your team all the best for this exciting role! I also 
want to say thank you for the last years. It was a great time, I learned a lot 
from you all, and I will miss it!

Cheers, 

Andreas (irc: scheuran)


[1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev