[openstack-dev] [Searchlight] Weekly report for Stein R-23

2018-11-05 Thread Trinh Nguyen
Hi team,

*TL;DR,* we now focus on developing the use cases for Searchlight to
attract more users as well as contributors.

Here is the report for last week, Stein R-23 [1]. Let me know if you have
any questions.

[1]
https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-23.html

Bests,

-- 
*Trinh Nguyen*
*www.edlab.xyz*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Ghanshyam Mann
  On Tue, 06 Nov 2018 05:50:03 +0900 Dmitry Tantsur  
wrote  
 > 
 > 
 > On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote:
 > *removes all of the hats*
 > *removes years of dust from unrelated event planning hat, and puts it on for 
 > a moment*
 > 
 > In my experience, events of any nature where convention venue space is 
 > involved, are essentially set in stone before being publicly advertised as 
 > contracts are put in place for hotel room booking blocks as well as the 
 > convention venue space. These spaces are also typically in a relatively high 
 > demand limiting the access and available times to schedule. Often venues 
 > also give preference (and sometimes even better group discounts) to repeat 
 > events as they are typically a known entity and will have somewhat known 
 > needs so the venue and hotel(s) can staff appropriately. 
 > 
 > tl;dr, I personally wouldn't expect any changes to be possible at this point.
 > 
 > *removes event planning hat of past life, puts personal scheduling hat on*
 > I imagine that as a community, it is near impossible to schedule something 
 > avoiding holidays for everyone in the community.
 > 
 > I'm not talking about everyone. And I'm mostly fine with my holiday, but the 
 > conflicts with Russia and Japan seem huge. This certainly does not help our 
 > effort to engage people outside of NA/EU.
 > Quick googling suggests that the week of May 13th would have much fewer 
 > conflicts.
 > 
 > I personally have lost count of the number of holidays and special days that 
 > I've spent on business trips over the past four years. While I may be an 
 > outlier in my feelings on this subject, I'm not upset, annoyed, or even 
 > bitter about lost times. This community is part of my family.
 > 
 > Sure :)
 > But outside of our small, nice circle there is a huge world of people who may 
 > not share our feelings and level of commitment to OpenStack. These are the 
 > occasional contributors we talked about when discussing the cycle length. I 
 > don't think asking them to abandon 3-5 days of holidays is a productive way 
 > to engage them.
 > And again, as much as I love meeting you all, I think we're outgrowing the 
 > format of these meetings..
 > Dmitry

Yeah, in the case of Japan it is a full-week holiday starting from April 29th. I 
remember most of the May summits did not conflict with Golden Week, but this one 
does. I am not sure there is any solution to this now, but we should consider 
such things in future.

-gmann

 > 
 > -Julia
 > 
 > On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:
 > Hi all,
 >  
 > Not sure how official the information about the next summit is, but it's on
 > the web site [1], so I guess it's worth asking..
 >
 > Are we planning for the summit to overlap with the May holidays? The 1st of
 > May is a holiday in a big part of the world. We ask people to skip it in
 > addition to 3+ weekend days they'll have to spend working and traveling.
 >
 > To make it worse, 1-3 May are holidays in Russia this time. To make it even
 > worse than worse, the week of the 29th is Golden Week in Japan [2]. Was it
 > considered? Is it possible to move the days to a less conflicting time
 > (mid-May maybe)?
 >  
 >  Dmitry
 >  
 >  [1] https://www.openstack.org/summit/denver-2019/
 >  [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
 >  





Re: [openstack-dev] FIPS Compliance

2018-11-05 Thread Joshua Cornutt
Sean,

I, too, am very interested in this particular discussion and in getting
OpenStack working out of the box on FIPS systems. I've
submitted a few patches
(https://review.openstack.org/#/q/owner:%22Joshua+Cornutt%22) recently
and plan on going down my laundry list of patches I've made while
deploying Red Hat OpenStack 10 (Newton), 13 (Queens), and community
master on "FIPS mode" RHEL 7 servers.

I've seen a lot of debate in other communities on how to approach the
subject, ranging from full MD5-to-SHAx transitions, to FIPS-aware logic
that selects hashes based on the system, to simply deciding that the
hashes aren't used for real security and are therefore "mostly OK" by
FIPS 140-2 standards (resulting in awkward distro-specific versions
of popular crypto libraries with built-in FIPS awareness). Personally,
I've been more in favor of a sweeping MD5-to-SHAx transition, because
popular crypto libraries (OpenSSL, hashlib, NSS) indiscriminately
disable MD5 hash methods on FIPS mode systems. With SHA-1 collisions
already happening, I imagine SHA-1 will meet the FIPS banhammer in the
not-so-distant future, which is why I have generally been recommending
SHA-256 as an MD5 replacement, despite the larger output size (mostly
an issue for fixed-size database columns).

There is definite pressure being put on some entities (commercial as
well as government / DoD) to move core systems to FIPS mode, and
auditors are looking more and more closely at this particular subject,
requiring strong justification for not meeting FIPS compliance at
both the hardware and software levels.
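The FIPS-aware approach mentioned above can be sketched roughly as follows. This is purely illustrative, not OpenStack code: the function names are invented, and the /proc flag is simply how RHEL exposes FIPS state.

```python
import hashlib

def fips_enabled(path="/proc/sys/crypto/fips_enabled"):
    """Report whether the kernel is in FIPS mode (RHEL exposes this flag)."""
    try:
        with open(path) as f:
            return f.read().strip() == "1"
    except OSError:
        # File absent (non-RHEL, containers, etc.): assume not in FIPS mode.
        return False

def checksum(data: bytes) -> str:
    """Pick a digest based on FIPS state (illustrative helper)."""
    if fips_enabled():
        return hashlib.sha256(data).hexdigest()
    try:
        # usedforsecurity=False (Python 3.9+) marks a non-security use of
        # MD5, which FIPS-patched interpreters may still permit.
        return hashlib.md5(data, usedforsecurity=False).hexdigest()
    except (TypeError, ValueError):
        # Older Pythons lack the flag; fall back to SHA-256 to stay safe.
        return hashlib.sha256(data).hexdigest()
```

The try/except fallback matters because the `usedforsecurity` keyword only exists on newer interpreters, and FIPS-patched builds may reject MD5 outright.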



Re: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open

2018-11-05 Thread Tony Breeds

Hi all,

   Time is running out for you to have your say in the T release name
poll.  We have just under 3 days left.  If you haven't voted, please do!

On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote:
> Hi folks,
> 
> It is time again to cast your vote for the naming of the T Release.
> As with last time, we'll use a public polling option instead of per-user private
> URLs for voting.  This means everybody should use the following URL to
> cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df=b9e448b340787f0e
> 
> We've selected a public poll to ensure that the whole community, not just
> gerrit change owners, gets a vote.  Also, the size of our community has grown
> such that we can overwhelm CIVS if using private URLs.  A public poll can mean
> that users behind NAT, proxy servers, or firewalls may receive a message
> saying that their vote has already been lodged; if this happens, please try
> another IP.
> 
> Because this is a public poll, results will currently be viewable only by
> myself until the poll closes.  Once closed, I'll post the URL making the
> results viewable to everybody.  This was done to avoid everybody seeing the
> results while the public poll is running.
> 
> The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results
> will be posted shortly after.
> 
> [1] https://governance.openstack.org/tc/reference/release-naming.html
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the T release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> ---------------------
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> -----------------------
> 
> The Geographic Region from which names for the T release will come is Colorado.
> 
> Proposed Names
> --------------
> 
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas : the Tank Engine
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
> 
> Proposed Names that do not meet the criteria (accepted by the TC)
> -----------------------------------------------------------------
> 
> * Train : Many Attendees of the first Denver PTG have a story to tell about 
> the trains near the PTG hotel.  We could celebrate those stories with this 
> name
> 
> Yours Tony.




Yours Tony.




[openstack-dev] FIPS Compliance

2018-11-05 Thread Sean McGinnis
I'm interested in some feedback from the community, particularly those running
OpenStack deployments, as to whether FIPS compliance [0][1] is something folks
are looking for.

I've been seeing small changes proposed here and there for things like
MD5 usage and its incompatibility with FIPS mode. But looking across a
wider stripe of our repos, it appears it would take a broader effort to
get all OpenStack services compatible with FIPS mode.

This should be a fairly easy thing to test, but before we put much effort
into updating code and figuring out testing, I'd like to see some input on
whether something like this is needed.

Thanks for any input on this.

Sean

[0] https://en.wikipedia.org/wiki/FIPS_140-2
[1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf
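As a rough illustration of how easy a first survey across repos can be, a short script can flag MD5 call sites in a tree of Python sources. This is a hypothetical helper, not an official checker; a simple regex like this will produce false positives and is only meant as a starting point.

```python
import os
import re

# Crude pattern for MD5 call sites; refine as needed for a real audit.
MD5_RE = re.compile(r"\bhashlib\.md5\b|\bmd5\(")

def find_md5_usage(root):
    """Walk a source tree and return (path, lineno, line) for MD5 hits."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if MD5_RE.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # unreadable file: skip rather than abort the scan
    return hits
```

Running something like this over a service's repo gives a quick sense of how large the MD5-to-SHAx surface actually is before committing to the wider effort.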



[openstack-dev] [NOVA] pci alias device_type and numa_policy and device_type meanings

2018-11-05 Thread Manuel Sopena Ballesteros
Dear Openstack community.

I am setting up pci passthrough for GPUs using aliases.

I was wondering about the meaning of the fields device_type and numa_policy, and 
how I should use them, as I could not find many details in the official 
documentation.

https://docs.openstack.org/nova/rocky/admin/pci-passthrough.html#configure-nova-api-controller
https://docs.openstack.org/nova/rocky/configuration/config.html#pci

thank you very much

Manuel
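For what it's worth, going by the configuration reference rather than first-hand testing: device_type appears to distinguish type-PCI (a plain PCI device), type-PF (an SR-IOV physical function), and type-VF (an SR-IOV virtual function), while numa_policy (required, legacy, or preferred) controls how strictly the device must share a NUMA node with the instance's CPUs and memory. A hypothetical nova.conf alias for a GPU might look like this (the vendor_id/product_id values are placeholders that must match your hardware):

```ini
[pci]
# "type-PCI" because the GPU is passed through whole, not as an SR-IOV VF.
# "preferred" asks for NUMA affinity but does not fail the boot if the
# scheduler cannot satisfy it; "required" would.
alias = {"vendor_id": "10de", "product_id": "1db4", "device_type": "type-PCI", "numa_policy": "preferred", "name": "gpu"}
```

The alias name ("gpu" here) is then referenced from a flavor's `pci_passthrough:alias` extra spec.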



[openstack-dev] [tripleo] Ansible getting bumped up from 2.4 -> 2.6.6

2018-11-05 Thread Wesley Hayutin
Greetings,

Please be aware of the following patch [1], which updates Ansible in
queens, rocky, and stein.  This was just pointed out to me, and I didn't
see it coming, so I thought I'd email the group.

That is all, thanks

[1] https://review.rdoproject.org/r/#/c/14960
-- 

Wes Hayutin

Associate Manager

Red Hat



whayu...@redhat.com  T: +19197544114  IRC: weshay


View my calendar and check my availability for meetings HERE



Re: [openstack-dev] [Openstack-operators] [Openstack-sigs] Dropping lazy translation support

2018-11-05 Thread Ben Nemec



On 11/5/18 3:13 PM, Matt Riedemann wrote:

On 11/5/2018 1:36 PM, Doug Hellmann wrote:

I think the lazy stuff was all about the API responses. The log
translations worked a completely different way.


Yeah maybe. And if so, I came across this in one of the blueprints:

https://etherpad.openstack.org/p/disable-lazy-translation

Which says that because of a critical bug, the lazy translation was 
disabled in Havana, to be fixed in Icehouse, but I don't think that ever 
happened before IBM developers dropped it upstream, which is further 
justification for nuking this code from the various projects.




It was disabled last-minute, but I'm pretty sure it was turned back on 
(hence the issues we're hitting today). I still see coercion code in 
oslo.log that was added to fix the problem[1] (I think). I could be 
wrong about that, since this code has undergone significant changes over 
the years, but it looks to me like we're still forcing things to be 
unicode.[2]


1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py
2: 
https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38d72106ea0b8b8f/oslo_log/formatters.py#L414




Re: [openstack-dev] [Openstack-sigs] Dropping lazy translation support

2018-11-05 Thread Doug Hellmann
Matt Riedemann  writes:

> On 11/5/2018 1:36 PM, Doug Hellmann wrote:
>> I think the lazy stuff was all about the API responses. The log
>> translations worked a completely different way.
>
> Yeah maybe. And if so, I came across this in one of the blueprints:
>
> https://etherpad.openstack.org/p/disable-lazy-translation
>
> Which says that because of a critical bug, the lazy translation was 
> disabled in Havana, to be fixed in Icehouse, but I don't think that ever 
> happened before IBM developers dropped it upstream, which is further 
> justification for nuking this code from the various projects.

I agree.

Doug



Re: [openstack-dev] [Openstack-sigs] Dropping lazy translation support

2018-11-05 Thread Matt Riedemann

On 11/5/2018 1:36 PM, Doug Hellmann wrote:

I think the lazy stuff was all about the API responses. The log
translations worked a completely different way.


Yeah maybe. And if so, I came across this in one of the blueprints:

https://etherpad.openstack.org/p/disable-lazy-translation

Which says that because of a critical bug, the lazy translation was 
disabled in Havana, to be fixed in Icehouse, but I don't think that ever 
happened before IBM developers dropped it upstream, which is further 
justification for nuking this code from the various projects.


--

Thanks,

Matt



[openstack-dev] [tc] Technical Committee status update for 5 November

2018-11-05 Thread Doug Hellmann
This is the weekly summary of work being done by the Technical Committee
members. The full list of active items is managed in the wiki:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

It has been three weeks since the last update email, in part due to my
absence. We have lots of updates this time around.

Project updates:

* Add os_manila to openstack-ansible:
  https://review.openstack.org/#/c/608403/
* Add cells charms and interfaces:
  https://review.openstack.org/#/c/608866/
* Add octavia charm: https://review.openstack.org/#/c/608283/
* Add puppet-crane: https://review.openstack.org/#/c/610015/
* Add openstack-helm images repository:
  https://review.openstack.org/#/c/611895/
* Add blazar-specs repository: https://review.openstack.org/#/c/612431/
* Add openstack-helm docs repository:
  https://review.openstack.org/#/c/611896/
* Retire anchor: https://review.openstack.org/#/c/611187/
* Remove Dragonflow from governance:
  https://review.openstack.org/#/c/613856/

Other updates:

* Reword "open source" definition in 4 Opens document to remove language
  that does not come through clearly when translated:
  https://review.openstack.org/#/c/613894/
* Support "Train" as a candidate name for the T series:
  https://review.openstack.org/#/c/611511/
* Update the charter section on TC meetings:
  https://review.openstack.org/#/c/608751/

== TC Meetings ==

In order to fulfill our obligations under the OpenStack Foundation
bylaws, the TC needs to hold meetings at least once each quarter. We
agreed to meet monthly, and to emphasize agenda items that help us move
initiatives forward while leaving most of the discussion of those topics
to the mailing list. Our first meeting was held on 1 Nov. The agendas
for all of our meetings will be sent to the openstack-dev mailing list
in advance, and links to the logs and summary will be sent as a follow
up after the meeting.

* http://lists.openstack.org/pipermail/openstack-dev/2018-November/136220.html

The next meeting will be on 6 December at 1400 UTC in #openstack-tc.

== Team Liaisons ==

The TC liaisons to each project team for the Stein series are now
assigned. Please contact your liaison if you have any issues the TC can
help with, and watch for email from them to check in with your team
before the end of this development cycle.

* https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams

== Sessions at the Forum ==

Many of us will be meeting in Berlin next week for the OpenStack Summit
and Forum. There are several sessions related to project governance or
community that may be of interest.

* Getting OpenStack Users Involved in the Project:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22813/getting-openstack-users-involved-in-the-project
* Community outreach when culture, time zones, and language differ:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22820/community-outreach-when-culture-time-zones-and-language-differ
* Wednesday Keynote segment, Community Contributor Recognition & How to
  Get Started:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22959/community-contributor-recognition-and-how-to-get-started
* Expose SIGs and WGs:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22750/expose-sigs-and-wgs
* Cross-technical leadership session (OpenStack, Kata, StarlingX,
  Airship, Zuul):
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22815/cross-technical-leadership-session-openstack-kata-starlingx-airship-zuul
* "Vision for OpenStack clouds" discussion:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22818/vision-for-openstack-clouds-discussion
* Technical Committee Vision Retrospective:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22825/technical-committee-vision-retrospective
* T series community goal discussion:
  
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22814/t-series-community-goal-discussion

== Ongoing Discussions ==

We have several governance changes up for review related to deciding how
we will manage future Python 3 upgrades (including adding 3.7 and
possibly dropping 3.5 during Stein).

* Make python 3 testing requirement less specific:
  https://review.openstack.org/#/c/611010/
* Explicitly declare stein supported runtimes:
  https://review.openstack.org/#/c/611080/
* Resolution on keeping up with Python 3 releases:
  https://review.openstack.org/#/c/613145/

== TC member actions/focus/discussions for the coming week(s) ==

The TC, UC, and leadership of other foundation projects will join the
foundation Board for a joint leadership meeting on 12 November. See the
wiki for details.

* https://wiki.openstack.org/wiki/Governance/Foundation/12Nov2018BoardMeeting

== Contacting the TC ==

The Technical 

Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Matt Riedemann

On 11/5/2018 1:17 PM, Matt Riedemann wrote:
I'm thinking of a case like: resize an instance, but rather than 
confirm/revert it, the user deletes the instance. That would clean up the 
allocations from the target node but potentially not from the source node.


Well this case is at least not an issue:

https://review.openstack.org/#/c/615644/

It took me a bit to sort out how that works, but it does, and I've added 
a test to confirm it.


--

Thanks,

Matt



Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur
On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote:

> *removes all of the hats*
>
> *removes years of dust from unrelated event planning hat, and puts it on
> for a moment*
>
> In my experience, events of any nature where convention venue space is
> involved, are essentially set in stone before being publicly advertised as
> contracts are put in place for hotel room booking blocks as well as the
> convention venue space. These spaces are also typically in a relatively
> high demand limiting the access and available times to schedule. Often
> venues also give preference (and sometimes even better group discounts) to
> repeat events as they are typically a known entity and will have somewhat
> known needs so the venue and hotel(s) can staff appropriately.
>
> tl;dr, I personally wouldn't expect any changes to be possible at this
> point.
>
> *removes event planning hat of past life, puts personal scheduling hat on*
>
> I imagine that as a community, it is near impossible to schedule something
> avoiding holidays for everyone in the community.
>

I'm not talking about everyone. And I'm mostly fine with my holiday, but the
conflicts with Russia and Japan seem huge. This certainly does not help our
effort to engage people outside of NA/EU.

Quick googling suggests that the week of May 13th would have much fewer
conflicts.


> I personally have lost count of the number of holidays and special days
> that I've spent on business trips over the past four years. While I may be
> an outlier in my feelings on this subject, I'm not upset, annoyed, or even
> bitter about lost times. This community is part of my family.
>

Sure :)

But outside of our small, nice circle there is a huge world of people who
may not share our feelings and level of commitment to OpenStack. These are
the occasional contributors we talked about when discussing the cycle
length. I don't think asking them to abandon 3-5 days of holidays is a
productive way to engage them.

And again, as much as I love meeting you all, I think we're outgrowing the
format of these meetings..

Dmitry


> -Julia
>
> On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:
>
>> Hi all,
>>
>> Not sure how official the information about the next summit is, but it's
>> on the
>> web site [1], so I guess worth asking..
>>
>> Are we planning for the summit to overlap with the May holidays? The 1st
>> of May is a holiday in a big part of the world. We ask people to skip it
>> in addition to
>> 3+ weekend days they'll have to spend working and traveling.
>>
>> To make it worse, 1-3 May are holidays in Russia this time. To make it
>> even
>> worse than worse, the week of 29th is the Golden Week in Japan [2]. Was
>> it
>> considered? Is it possible to move the days to less conflicting time
>> (mid-May
>> maybe)?
>>
>> Dmitry
>>
>> [1] https://www.openstack.org/summit/denver-2019/
>> [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
>>


[openstack-dev] [tc] liaison assignments

2018-11-05 Thread Doug Hellmann
TC members,

I have updated the liaison assignments to fill in all of the
gaps. Please take a moment to review the list [1] so you know your
assignments.

Next week will be a good opportunity to touch base with your teams.

Doug

[1] https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams



Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Jeremy Stanley
On 2018-11-05 11:06:14 -0800 (-0800), Julia Kreger wrote:
[...]
> I imagine that as a community, it is near impossible to schedule
> something avoiding holidays for everyone in the community.
[...]

Scheduling events that time of year is particularly challenging
anyway because of the proximity of Ramadan, Passover and
Easter/Lent. (We've already conflicted with Passover at least once
in the past, if memory serves.) So yes, any random week you pick is
already likely to hit a major public or religious holiday for
some part of the World, and then you also have to factor in
availability of venues and other logistics.
-- 
Jeremy Stanley




Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Jay S Bryant



On 11/5/2018 1:06 PM, Julia Kreger wrote:

*removes all of the hats*

*removes years of dust from unrelated event planning hat, and puts it 
on for a moment*


In my experience, events of any nature where convention venue space is 
involved, are essentially set in stone before being publicly 
advertised as contracts are put in place for hotel room booking blocks 
as well as the convention venue space. These spaces are also typically 
in a relatively high demand limiting the access and available times to 
schedule. Often venues also give preference (and sometimes even better 
group discounts) to repeat events as they are typically a known entity 
and will have somewhat known needs so the venue and hotel(s) can staff 
appropriately.


tl;dr, I personally wouldn't expect any changes to be possible at this 
point.


*removes event planning hat of past life, puts personal scheduling hat on*

I imagine that as a community, it is near impossible to schedule 
something avoiding holidays for everyone in the community.


I personally have lost count of the number of holidays and special 
days that I've spent on business trips over the past four years. While 
I may be an out-lier in my feelings on this subject, I'm not upset, 
annoyed, or even bitter about lost times. This community is part of my 
family.



Agreed.

-Julia

On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur wrote:


Hi all,

Not sure how official the information about the next summit is,
but it's on the
web site [1], so I guess worth asking..

Are we planning for the summit to overlap with the May holidays?
The 1st of May is a holiday in a big part of the world. We ask people to
skip it in addition to 3+ weekend days they'll have to spend working and
traveling.

To make it worse, 1-3 May are holidays in Russia this time. To
make it even
worse than worse, the week of 29th is the Golden Week in Japan
[2]. Was it
considered? Is it possible to move the days to less conflicting
time (mid-May
maybe)?

Someone else had raised that this also appears to overlap with PyCon and 
wondered if the date could be changed.  I told them the same thing.  Once 
these things are announced they are, more or less, immovable.


Dmitry

[1] https://www.openstack.org/summit/denver-2019/
[2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)




Re: [openstack-dev] [Openstack-sigs] Dropping lazy translation support

2018-11-05 Thread Doug Hellmann
Matt Riedemann  writes:

> This is a follow up to a dev ML email [1] where I noticed that some 
> implementations of the upgrade-checkers goal were failing because some 
> projects still use the oslo_i18n.enable_lazy() hook for lazy log message 
> translation (and maybe API responses?).
>
> The very old blueprints related to this can be found here [2][3][4].
>
> If memory serves me correctly from my time working at IBM on this, this 
> was needed to:
>
> 1. Generate logs translated in other languages.
>
> 2. Return REST API responses if the "Accept-Language" header was used 
> and a suitable translation existed for that language.
>
> #1 is a dead horse since I think at least the Ocata summit when we 
> agreed to no longer translate logs since no one used them.
>
> #2 is probably something no one knows about. I can't find end-user 
> documentation about it anywhere. It's not tested and therefore I have no 
> idea if it actually works anymore.
>
> I would like to (1) deprecate the oslo_i18n.enable_lazy() function so 
> new projects don't use it and (2) start removing the enable_lazy() usage 
> from existing projects like keystone, glance and cinder.
>
> Are there any users, deployments or vendor distributions that still rely 
> on this feature? If so, please speak up now.
>
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html
> [2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages
> [3] https://blueprints.launchpad.net/nova/+spec/i18n-messages
> [4] https://blueprints.launchpad.net/nova/+spec/user-locale-api
>
> -- 
>
> Thanks,
>
> Matt
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

I think the lazy stuff was all about the API responses. The log
translations worked a completely different way.
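The lazy-translation idea for API responses can be illustrated with a small sketch (plain Python; this is not the actual oslo.i18n implementation, and the catalog contents are made up): keep the message ID and parameters around, and translate only when the response is serialized, at which point the requested locale (from the Accept-Language header) is known.

```python
# Hypothetical catalog; real projects load gettext catalogs per language.
CATALOGS = {
    "fr": {"Instance %(id)s not found": "Instance %(id)s introuvable"},
}

class LazyMessage:
    """Defer translation until serialization time (sketch only)."""

    def __init__(self, msgid, params=None):
        self.msgid = msgid
        self.params = params or {}

    def translate(self, locale=None):
        # Fall back to the untranslated msgid when no catalog matches.
        text = CATALOGS.get(locale, {}).get(self.msgid, self.msgid)
        return text % self.params if self.params else text

# The API layer picks the locale per request, e.g. from Accept-Language:
msg = LazyMessage("Instance %(id)s not found", {"id": "42"})
print(msg.translate())      # Instance 42 not found
print(msg.translate("fr"))  # Instance 42 introuvable
```

Eager translation, by contrast, would bake one locale in at the point the message is created, which is why the lazy machinery existed at all.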

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Matt Riedemann

On 11/5/2018 12:28 PM, Mohammed Naser wrote:

Have you dug into any of the operations around these instances to
determine what might have gone wrong? For example, was a live migration
performed recently on these instances and if so, did it fail? How about
evacuations (rebuild from a down host).

To be honest, I have not. However, I suspect a lot of those happen from the
fact that the service which makes the claim is possibly not the
same one that deletes it.

I'm not sure if this is something that's possible, but say compute2 makes
a claim for migrating to compute1 but something fails there; the revert happens
on compute1, but compute1 is already borked so it doesn't work.

This isn't necessarily the exact case that's happening, but it's a summary
of what I believe happens.



The computes don't create the resource allocations in placement though, 
the scheduler does, unless this deployment still has at least one 
compute that is running an older (pre-Pike) release.

The compute service should only be removing allocations for things like 
server delete, failed move operation (cleanup the allocations created by 
the scheduler), or a successful move operation (cleanup the allocations 
for the source node held by the migration record).


I wonder if you have migration records (from the cell DB migrations 
table) holding allocations in placement for some reason, even though the 
migration is complete. I know you have an audit script to look for 
allocations that are not held by instances, assuming those instances 
have been deleted and the allocations were leaked, but they could have 
also been held by the migration record and maybe leaked that way? 
Although if you delete the instance, the related migration records are 
also removed (but maybe not their allocations?). I'm thinking of a case 
like: resize an instance but rather than confirm/revert it, the user 
deletes the instance. That would clean up the allocations from the target 
node but potentially not from the source node.
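The kind of audit being discussed, i.e. deciding whether each placement allocation consumer is a live instance, an in-progress migration, or nothing at all (a probable leak), can be sketched roughly as follows. This is illustrative only: a real script would pull consumer UUIDs from the placement API and compare them against the instances and migrations tables in the cell databases.

```python
def classify_allocations(allocation_consumers, instance_uuids,
                         migration_uuids):
    """Split allocation consumer UUIDs into three buckets:
    held by an instance, held by a migration record, or leaked."""
    held_by_instance = allocation_consumers & instance_uuids
    held_by_migration = allocation_consumers & migration_uuids
    leaked = allocation_consumers - instance_uuids - migration_uuids
    return held_by_instance, held_by_migration, leaked

# Illustrative data: consumer "c3" matches neither an instance nor a
# migration record, so it is a candidate leaked allocation.
inst, mig, leaked = classify_allocations(
    {"c1", "c2", "c3"}, {"c1"}, {"c2"})
print(sorted(leaked))  # ['c3']
```

The interesting cases are exactly the middle bucket (allocations held by a migration whose operation has long since finished) and the last one.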


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Julia Kreger
*removes all of the hats*

*removes years of dust from unrelated event planning hat, and puts it on
for a moment*

In my experience, events of any nature where convention venue space is
involved, are essentially set in stone before being publicly advertised as
contracts are put in place for hotel room booking blocks as well as the
convention venue space. These spaces are also typically in relatively
high demand, limiting access and the available times to schedule. Often
venues also give preference (and sometimes even better group discounts) to
repeat events, as they are typically a known entity and will have somewhat
known needs, so the venue and hotel(s) can staff appropriately.

tl;dr, I personally wouldn't expect any changes to be possible at this
point.

*removes event planning hat of past life, puts personal scheduling hat on*

I imagine that as a community, it is near impossible to schedule something
avoiding holidays for everyone in the community.

I personally have lost count of the number of holidays and special days
that I've spent on business trips over the past four years. While I may be
an outlier in my feelings on this subject, I'm not upset, annoyed, or even
bitter about the lost time. This community is part of my family.

-Julia

On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:

> Hi all,
>
> Not sure how official the information about the next summit is, but it's
> on the
> web site [1], so I guess it's worth asking.
>
> Are we planning for the summit to overlap with the May holidays? The 1st
> of May
> is a holiday in a big part of the world. We ask people to skip it in
> addition to
> 3+ weekend days they'll have to spend working and traveling.
>
> To make it worse, 1-3 May are holidays in Russia this time. To make it
> even
> worse than worse, the week of 29th is the Golden Week in Japan [2]. Was it
> considered? Is it possible to move the days to less conflicting time
> (mid-May
> maybe)?
>
> Dmitry
>
> [1] https://www.openstack.org/summit/denver-2019/
> [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova

2018-11-05 Thread melanie witt

On Fri, 2 Nov 2018 09:47:42 +0100, Andreas Scheuring wrote:

Dear Nova Community,
I want to announce the new focal point for Nova s390x libvirt/kvm.

Please welcome "Cathy Zhang" to the Nova team. She and her team will be 
responsible for maintaining the s390x libvirt/kvm third-party CI [1] and any s390x 
specific code in nova and os-brick.
I personally took a new opportunity a few months ago but have kept 
maintaining the CI as well as possible. With new manpower we can hopefully 
contribute more to the community again.

You can reach her via
* email: bjzhj...@linux.vnet.ibm.com
* IRC: Cathyz

Cathy, I wish you and your team all the best in this exciting role! I also 
want to say thank you for the last years. It was a great time; I learned a lot 
from you all and will miss it!

Cheers,

Andreas (irc: scheuran)


[1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI


Thanks Andreas, for sending this note. It has been a pleasure working 
with you over these years. We wish you the best of luck in your new 
opportunity!


Welcome to the Nova community, Cathy! We look forward to working with 
you. Please feel free to reach out to us on IRC in the #openstack-nova 
channel and on this mailing list with the [nova] tag to ask questions 
and share info.


Best,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Community Infrastructure Berlin Summit Onboarding Session

2018-11-05 Thread Clark Boylan
Hello everyone,

My apologies for cross-posting, but I wanted to make sure the various developer 
groups saw this.

Rather than use the Infrastructure Onboarding session in Berlin [0] for 
infrastructure sysadmin/developer onboarding, I thought we could use the time 
for user onboarding. We've got quite a few new groups interacting with us 
recently, and it would probably be useful to have a session on what we do, how 
people can take advantage of this, and so on.

I've been brainstorming ideas on this etherpad [1]. If you think you'll attend 
the session and find any of these subjects to be useful please +1 them. Also 
feel free to add additional topics.

I expect this will be an informal session that directly targets the interests 
of those attending. Please do drop by if you have any interest in using this 
infrastructure at all. This is your chance to better understand Zuul job 
configuration, the test environments themselves, the metrics and data we 
collect, and basically anything else related to the community developer 
infrastructure.

[0] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22950/infrastructure-project-onboarding
[1] https://etherpad.openstack.org/p/openstack-infra-berlin-onboarding

Hope to see you there,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Mohammed Naser
On Mon, Nov 5, 2018 at 4:17 PM Matt Riedemann  wrote:
>
> On 11/4/2018 4:22 AM, Mohammed Naser wrote:
> > Just for information sake, a clean state cloud which had no reported issues
> > over maybe a period of 2-3 months already has 4 allocations which are
> > incorrect and 12 allocations pointing to the wrong resource provider, so I
> > think this comes down to committing to either "self-healing" to fix those
> > issues or not.
>
> Is this running Rocky or an older release?

In this case, this is inside a Queens cloud, I can run the same script
on a Rocky
cloud too.

> Have you dug into any of the operations around these instances to
> determine what might have gone wrong? For example, was a live migration
> performed recently on these instances and if so, did it fail? How about
> evacuations (rebuild from a down host).

To be honest, I have not. However, I suspect a lot of those happen from the
fact that the service which makes the claim is possibly not the
same one that deletes it.

I'm not sure if this is something that's possible, but say compute2 makes
a claim for migrating to compute1 but something fails there; the revert happens
on compute1, but compute1 is already borked so it doesn't work.

This isn't necessarily the exact case that's happening, but it's a summary
of what I believe happens.

> By "4 allocations which are incorrect" I assume that means they are
> pointing at the correct compute node resource provider but the values
> for allocated VCPU, MEMORY_MB and DISK_GB is wrong? If so, how do the
> allocations align with old/new flavors used to resize the instance? Did
> the resize fail?

The allocated flavours usually are not wrong; they are simply associated
with the wrong resource provider (so it feels like a failed migration or resize).

> Are there mixed compute versions at all, i.e. are you moving instances
> around during a rolling upgrade?

Nope

> --
>
> Thanks,
>
> Matt
>



--
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-05 Thread Dan Prince
On Mon, Nov 5, 2018 at 4:06 AM Cédric Jeanneret  wrote:
>
> On 11/2/18 2:39 PM, Dan Prince wrote:
> > I pushed a patch[1] to update our containerized deployment
> > architecture docs yesterday. There are 2 new fairly useful sections we
> > can leverage with TripleO's stepwise deployment. They appear to be
> > used somewhat sparingly so I wanted to get the word out.
>
> Good thing, it's important to highlight this feature and explain how it
> works, big thumb up Dan!
>
> >
> > The first is 'deploy_steps_tasks' which gives you a means to run
> > Ansible snippets on each node/role in a stepwise fashion during
> > deployment. Previously it was only possible to execute puppet or
> > docker commands where as now that we have deploy_steps_tasks we can
> > execute ad-hoc ansible in the same manner.
>
> I'm wondering if such a thing could be used for the "inflight
> validations" - i.e. a step to validate a service/container is working as
> expected once it's deployed, in order to get early failure.
> For instance, we deploy a rabbitmq container, and right after it's
> deployed, we'd like to ensure it's actually running and works as
> expected before going forward in the deploy.
>
> Care to have a look at that spec[1] and see if, instead of adding a new
> "validation_tasks" entry, we could "just" use the "deploy_steps_tasks"
> with the right step number? That would be really, really cool, and will
> probably avoid a lot of code in the end :).

It could work fine I think. As deploy_steps_tasks runs before the
"common container/baremetal" actions, special care would need to be
taken so that validations for a container's startup occur at the
beginning of the next step. So a container started at step 2 would be
validated early in step 3. This may also require us to have a "post"
deploy_steps_tasks iteration so that we can validate late-starting
containers.

If we use the more generic deploy_steps_tasks section, we'd probably
rely on conventions to always add Ansible tags onto the validation
tasks. These could be useful for those wanting to selectively execute
them externally (not sure if that was part of your spec, but I could
see someone wanting this).

Dan
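For reference, a deploy_steps_tasks validation of the kind sketched above is just a list of Ansible tasks gated on the step variable. A rough illustration (the container name and health-check command are made up, and the exact step numbers would follow the convention described above):

```yaml
# In a TripleO service template's role_data output (illustrative sketch):
deploy_steps_tasks:
  - name: Validate that the example container came up in the previous step
    # deploy_steps_tasks run at every step; the "when" restricts this
    # task to step 3, validating a container started during step 2.
    command: docker exec example_container /openstack/healthcheck
    when: step|int == 3
```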

>
> Thank you!
>
> C.
>
> [1] https://review.openstack.org/#/c/602007/
>
> >
> > The second is 'external_deploy_tasks' which allows you to run
> > Ansible snippets on the Undercloud during stepwise deployment. This is
> > probably most useful for driving an external installer but might also
> > help with some complex tasks that need to originate from a single
> > Ansible client.
> >
> > The only downside I see to these approaches is that both appear to be
> > implemented with Ansible's default linear strategy. I saw shardy's
> > comment here [2] that the :free strategy does not yet apparently work
> > with the any_errors_fatal option. Perhaps we can reach out to someone
> > in the Ansible community in this regard to improve running these
> > things in parallel like TripleO used to work with Heat agents.
> >
> > This is also how host_prep_tasks is implemented which BTW we should
> > now get rid of as a duplicate architectural step since we have
> > deploy_steps_tasks anyway.
> >
> > [1] https://review.openstack.org/#/c/614822/
> > [2] 
> > http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554
> >
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Dropping lazy translation support

2018-11-05 Thread Matt Riedemann
This is a follow up to a dev ML email [1] where I noticed that some 
implementations of the upgrade-checkers goal were failing because some 
projects still use the oslo_i18n.enable_lazy() hook for lazy log message 
translation (and maybe API responses?).


The very old blueprints related to this can be found here [2][3][4].

If memory serves me correctly from my time working at IBM on this, this 
was needed to:


1. Generate logs translated in other languages.

2. Return REST API responses if the "Accept-Language" header was used 
and a suitable translation existed for that language.


#1 is a dead horse since I think at least the Ocata summit when we 
agreed to no longer translate logs since no one used them.


#2 is probably something no one knows about. I can't find end-user 
documentation about it anywhere. It's not tested and therefore I have no 
idea if it actually works anymore.


I would like to (1) deprecate the oslo_i18n.enable_lazy() function so 
new projects don't use it and (2) start removing the enable_lazy() usage 
from existing projects like keystone, glance and cinder.


Are there any users, deployments or vendor distributions that still rely 
on this feature? If so, please speak up now.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html

[2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages
[3] https://blueprints.launchpad.net/nova/+spec/i18n-messages
[4] https://blueprints.launchpad.net/nova/+spec/user-locale-api

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching

2018-11-05 Thread Bogdan Dobrelya
Update: I have not yet found co-authors, so I'll keep drafting that position 
paper [0],[1]. I've just taken some baby steps so far. I'm open to feedback 
and contributions!


PS. The deadline is Nov 9 03:00 UTC, but *maybe* it will be extended, if 
the event chairs decide to do so. Fingers crossed.


[0] 
https://github.com/bogdando/papers-ieee#in-the-current-development-looking-for-co-authors


[1] 
https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf


On 11/5/18 3:06 PM, Bogdan Dobrelya wrote:

Thank you for a reply, Flavia:


Hi Bogdan,
sorry for the late reply - yesterday was a holiday here in Brazil!
I am afraid I will not be able to engage in this collaboration on
such short notice... we would have had to start this initiative a little
earlier...


That's understandable.

I had hoped, though, that a position paper is something we (all who read 
this, not just you and me) could achieve in a couple of days, without a 
lot of associated research. It's a position paper, which is not expected 
to contain formal proofs or implementation details. The vision for tooling 
is the hardest part though, and indeed requires some time.


So let me please [tl;dr] the outcome of that position paper:

* position: given Always Available autonomy support as a starting point,
   define invariants for both the operational and data storage consistency
   requirements of the control/management plane (I've already drafted some
   in [0])

* vision: show that in the end the data synchronization and conflict
   resolution solution just boils down to having a causally
   consistent KVS (either causal+ or causal-RT, or lazy replication
   based, or anything like that), and cannot be achieved with *only* a
   transactional distributed database, like a Galera cluster. How to show
   that is an open question; we could refer to the existing
   papers (COPS, causal-RT, lazy replication et al.) and claim they fit
   the defined invariants nicely, while a transactional DB cannot fit them
   by design (its consensus protocols require majorities/quorums to
   operate and to be always available for data put/write operations).
   We can probably omit proving that obvious thing formally, at least for
   the position paper...

* opportunity: that is basically designing and implementing such a
   causally consistent KVS solution (see the COPS library as an example)
   for OpenStack and, ideally, unifying it for PaaS operators
   (OpenShift/Kubernetes) and tenants willing to host their containerized
   workloads on a PaaS distributed over a Fog Cloud of Edge clouds and
   leverage its data synchronization and conflict resolution solution
   as-a-service. Like Amazon DynamoDB, for example, except fitting
   the edge cases of another cloud stack :)

[0] 
https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md
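As a tiny illustration of the primitive such a causally consistent KVS builds on (a generic sketch, not tied to COPS or any OpenStack component): version vectors let replicas decide whether two updates are causally ordered or concurrent, and only the concurrent case needs conflict resolution.

```python
def compare(vv_a, vv_b):
    """Compare two version vectors (dicts of replica -> counter).
    Returns 'before', 'after', 'equal', or 'concurrent'."""
    keys = set(vv_a) | set(vv_b)
    a_le_b = all(vv_a.get(k, 0) <= vv_b.get(k, 0) for k in keys)
    b_le_a = all(vv_b.get(k, 0) <= vv_a.get(k, 0) for k in keys)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"
    if b_le_a:
        return "after"
    return "concurrent"  # neither dominates: needs conflict resolution

print(compare({"edge1": 2, "edge2": 1}, {"edge1": 2, "edge2": 3}))  # before
print(compare({"edge1": 3}, {"edge2": 1}))  # concurrent
```

An always-available store accepts writes on any edge and uses a comparison like this at sync time, instead of blocking writes behind a quorum as a transactional cluster does.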



As for working collaboratively with LaTeX, I would recommend using
Overleaf - it is not that difficult and has lots of editing features
such as markdown and track changes, for instance.
Thanks and good luck!
Flavia




On 11/2/18 5:32 PM, Bogdan Dobrelya wrote:

Hello folks.
Here is an update for today. I created a draft [0], and spent some time 
setting up LaTeX with live updating for the compiled PDF... The 
latter is only informational; if someone wants to contribute, please 
follow the instructions listed at the link (hint: you don't need any 
LaTeX experience, basic markdown knowledge should be enough!)


[0] 
https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors 



On 10/31/18 6:54 PM, Ildiko Vancsa wrote:

Hi,

Thank you for sharing your proposal.

I think this is a very interesting topic with a list of possible 
solutions, some of which this group is also discussing. It would also 
be great to learn more about the IEEE activities and to gain experience 
with the process in this group on the way forward.


I personally do not have experience with IEEE conferences, but I’m 
happy to help with the paper if I can.


Thanks,
Ildikó




(added from the parallel thread)
On 2018. Oct 31., at 19:11, Mike Bayer wrote:


On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya wrote:


(cross-posting openstack-dev)

Hello.
[tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data
consistency requirements and challenges", a position paper [0] (the paper
submission deadline is Nov 8).

The problem scope is synchronizing control plane and/or
deployment-specific data (not necessarily limited to OpenStack) across
remote Edges and the central Edge and management site(s), including the same
aspects for overclouds and undercloud(s), in terms of TripleO, and other
deployment tools of your choice.

Another problem is avoiding divergent solutions for managing Edge
deployments and the control planes of edges. And for tenants as
well, if we think of tenants also doing Edge deployments based on Edge
Data Replication as a Service, say for Kubernetes/OpenShift on top of

Re: [openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova

2018-11-05 Thread Matt Riedemann

On 11/2/2018 3:47 AM, Andreas Scheuring wrote:

Dear Nova Community,
I want to announce the new focal point for Nova s390x libvirt/kvm.

Please welcome "Cathy Zhang" to the Nova team. She and her team will be 
responsible for maintaining the s390x libvirt/kvm third-party CI [1] and any s390x 
specific code in nova and os-brick.
I personally took a new opportunity a few months ago but have kept 
maintaining the CI as well as possible. With new manpower we can hopefully 
contribute more to the community again.

You can reach her via
* email: bjzhj...@linux.vnet.ibm.com
* IRC: Cathyz

Cathy, I wish you and your team all the best in this exciting role! I also 
want to say thank you for the last years. It was a great time; I learned a lot 
from you all and will miss it!

Cheers,

Andreas (irc: scheuran)


[1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI


Welcome Cathy.

Andreas - thanks for the update and good luck on the new position.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about live-resize the instance

2018-11-05 Thread Matt Riedemann

On 11/4/2018 10:17 PM, Chen CH Ji wrote:
Yes, this has been discussed for a long time, and if I remember 
correctly the Stein PTG also had some discussion on it (maybe the Public 
Cloud WG?). Claudiu has been pushing this for several cycles and he 
actually had some code at [1], but there has been no additional progress 
there...
[1] 
https://review.openstack.org/#/q/status:abandoned+topic:bp/instance-live-resize


It's a question of priorities. It's a complicated change and low 
priority, in my opinion. We've said several times before that we'd do 
it, but there are a lot of other higher priority efforts taking the 
attention of the core team. Getting agreement on the spec is the first 
step and then the runways process should be used to deal with actual 
code reviews, but I think the spec review has stalled (I know I am 
guilty of not looking at the latest updates to the spec).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [goals][upgrade-checkers] FYI on "TypeError: Message objects do not support addition." errors

2018-11-05 Thread Matt Riedemann
If you are seeing this error when implementing and running the upgrade 
check command in your project:


Traceback (most recent call last):
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", line 184, in main
    return conf.command.action_fn()
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", line 134, in check
    print(t)
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 237, in __str__
    return self.__unicode__()
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 243, in __unicode__
    return self.get_string()
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 995, in get_string
    lines.append(self._stringify_header(options))
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 1066, in _stringify_header
    bits.append(" " * lpad + self._justify(fieldname, width, self._align[field]) + " " * rpad)
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 187, in _justify
    return text + excess * " "
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_i18n/_message.py", line 230, in __add__
    raise TypeError(msg)
TypeError: Message objects do not support addition.

It is due to calling oslo_i18n.enable_lazy() somewhere in the command's 
import path. That should be removed from the project, since lazy 
translation is not supported in OpenStack; the effort was abandoned 
several years ago. It is probably still called in a lot of "big 
tent/stackforge" projects because it was initially copied from the more 
core projects. Anyway, just remove it.
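The failure mode is easy to reproduce without oslo at all. prettytable pads a header cell with `text + excess * " "`, and a str subclass that refuses `+` (which is what oslo.i18n's lazy Message does, so that translation metadata is not silently lost) raises exactly this error. A rough sketch, not the real Message class:

```python
class LazyishMessage(str):
    """Imitates oslo.i18n's lazy Message: concatenation is rejected."""

    def __add__(self, other):
        raise TypeError("Message objects do not support addition.")

def justify(text, width):
    # Roughly what prettytable's _justify() does when padding a cell.
    excess = width - len(text)
    return text + excess * " "

print(repr(justify("Check", 10)))         # plain str: padded fine
try:
    justify(LazyishMessage("Check"), 10)  # lazy message: raises TypeError
except TypeError as exc:
    print(exc)
```

So the traceback surfaces only when a lazily translated message ends up in a code path (here, table rendering) that assumes plain strings.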


I'm talking with the oslo team about deprecating that interface so 
projects don't mistakenly use it and expect great things to happen.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs?

2018-11-05 Thread Ken Giusti
Hi Mohammed,

What release of openstack are you using?  (ocata, pike, etc)

Also, just to confirm my understanding: you do see the SSL connections come
up, but after some time they 'hang' - what do you mean by 'hang'? Do the
connections drop? Or do the connections remain up but you start seeing
messages (RPC calls) time out?

thanks,

On Wed, Oct 31, 2018 at 9:40 AM Mohammed Naser  wrote:

> For what it's worth: I ran into the same issue. I think the problem lies
> a bit deeper, because it's a problem with kombu: when debugging, I saw that
> oslo.messaging tried to connect and then hung.
>
> Sent from my iPhone
>
> > On Oct 31, 2018, at 2:29 PM, Thomas Goirand  wrote:
> >
> > Hi,
> >
> > It took me a long long time to figure out that my SSL setup was wrong
> > when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
> > (or heat itself) never warn me that something was wrong, I just got
> > nothing working, and no log at all.
> >
> > I'm sure I wouldn't be the only one happy about having this type of
> > problems being yelled out loud in the logs. Right now, it does work if I
> > turn off SSL, though I'm still not sure what's wrong in my setup, and
> > I'm given no clue if the issue is on rabbitmq-server or on the client
> > side (ie: heat, in my current case).
> >
> > Just a wishlist... :)
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> >


-- 
Ken Giusti  (kgiu...@gmail.com)
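For anyone debugging a similar setup, the two halves that have to agree are roughly the following. This is a sketch only: the paths are placeholders, and the option names should be double-checked against the RabbitMQ and oslo.messaging versions actually deployed (the oslo option names changed around Ocata).

```ini
# rabbitmq.conf (broker side), sketch with placeholder paths:
listeners.ssl.default = 5671
ssl_options.cacertfile = /etc/rabbitmq/ssl/ca.pem
ssl_options.certfile   = /etc/rabbitmq/ssl/server-cert.pem
ssl_options.keyfile    = /etc/rabbitmq/ssl/server-key.pem
ssl_options.verify     = verify_peer
ssl_options.fail_if_no_peer_cert = false

# client side, e.g. heat.conf (oslo.messaging rabbit driver):
[oslo_messaging_rabbit]
ssl = true
ssl_ca_file = /etc/heat/ssl/ca.pem
```

Independently of oslo, `openssl s_client -connect broker.example.com:5671 -CAfile ca.pem` is a quick way to confirm the broker's TLS endpoint and certificate chain before blaming the client side.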
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core

2018-11-05 Thread MONTEIRO, FELIPE C
Since there have only been approvals for Sergey and Mykola for Patrole core, 
welcome to the team!

Felipe

> -----Original Message-----
> From: BARTRA, RICK
> Sent: Monday, October 29, 2018 2:56 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and
> Mykola Yakovliev for Patrole core
> 
> 
> +1 for both of them as well.
> 
> 
> 
> On 10/29/18, 2:54 PM, "MONTEIRO, FELIPE C"  wrote:
> 
> > -Original Message-
> > From: Ghanshyam Mann [mailto:gm...@ghanshyammann.com]
> > Sent: Monday, October 22, 2018 7:09 PM
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and
> > Mykola Yakovliev for Patrole core
> >
> > +1 for both of them. They have been doing great work in Patrole and will
> > be a good addition to the team.
> >
> > -gmann
> >
> >   On Tue, 23 Oct 2018 03:34:51 +0900 MONTEIRO, FELIPE C
> >  wrote 
> >  > Hi,
> >  >
> >  > I would like to nominate Sergey Vilgelm and Mykola Yakovliev for
> > Patrole core as they have both done excellent work the past cycle in
> > improving the Patrole framework as well as increasing Neutron Patrole test
> > coverage, which includes various Neutron plugins/extensions as well, like
> > fwaas. I believe they will both make an excellent addition to the Patrole core
> > team.
> >  >
> >  > Please vote with a +1/-1 for the nomination, which will stay open for
> > one week.
> >  >
> >  > Felipe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur

Hi all,

Not sure how official the information about the next summit is, but it's on the 
web site [1], so I guess it's worth asking...


Are we planning for the summit to overlap with the May holidays? The 1st of May 
is a holiday in a big part of the world. We would be asking people to give up that 
holiday, in addition to the 3+ weekend days they'll have to spend working and traveling.


To make it worse, 1-3 May are holidays in Russia this time. To make it even 
worse than worse, the week of the 29th is the Golden Week in Japan [2]. Was it 
considered? Is it possible to move the dates to a less conflicting time (mid-May 
maybe)?


Dmitry

[1] https://www.openstack.org/summit/denver-2019/
[2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [goals][upgrade-checkers] Week R-23 Update

2018-11-05 Thread Matt Riedemann
There is not much news this week. There are several open changes which 
add the base command framework to projects [1]. Those need reviews from 
the related core teams. gmann and I have been trying to go through them 
first to make sure they are ready for core review.


There is one neutron change to note [2] which adds an extension point 
for neutron stadium projects (and ML2 plugins?) to hook in their own 
upgrade checks. Given the neutron architecture, this makes sense. My 
only worry is about making sure the interface is clearly defined, but I 
suspect this isn't the first time the neutron team has had to deal with 
something like this.


[1] https://review.openstack.org/#/q/topic:upgrade-checkers+status:open
[2] https://review.openstack.org/#/c/615196/
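For illustration, the extension-point idea boils down to a small registry: stadium projects (or ML2 drivers) register check callables and the base status command aggregates their results. A minimal sketch with hypothetical names — this is not the actual oslo.upgradecheck or neutron interface:

```python
# Sketch of an upgrade-check extension point: sub-projects register
# their own checks, and the base "status upgrade check" command runs
# them all. All names here are illustrative, not the real API.
from enum import Enum


class Code(Enum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


_CHECKS = []  # (name, callable) pairs registered by sub-projects


def register_check(name, func):
    _CHECKS.append((name, func))


def run_checks():
    # Run every registered check; the worst result code wins overall.
    results = {name: func() for name, func in _CHECKS}
    worst = max(results.values(), key=lambda c: c.value, default=Code.SUCCESS)
    return worst, results


# A stadium project (say, an ML2 driver) would hook in like this:
def check_ml2_driver_config():
    # A real check would inspect config or DB state; always-pass stub here.
    return Code.SUCCESS


register_check("ml2-driver-config", check_ml2_driver_config)

worst, results = run_checks()
print(worst.name)  # SUCCESS
```

The clearly-defined part of the interface is then just the registration call and the result codes, which is the worry raised above about keeping the contract small and explicit.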

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Matt Riedemann

On 11/5/2018 5:52 AM, Chris Dent wrote:

* We need to have further discussion and investigation on
   allocations getting out of sync. Volunteers?


This is something I've already spent a lot of time on with the 
heal_allocations CLI, and have already started asking mnaser questions 
about this elsewhere in the thread.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Matt Riedemann

On 11/4/2018 4:22 AM, Mohammed Naser wrote:

Just for information sake, a clean state cloud which had no reported issues
over maybe a period of 2-3 months already has 4 allocations which are
incorrect and 12 allocations pointing to the wrong resource provider, so I
think this comes down to committing to either "self-healing" to fix those
issues or not.


Is this running Rocky or an older release?

Have you dug into any of the operations around these instances to 
determine what might have gone wrong? For example, was a live migration 
performed recently on these instances and if so, did it fail? How about 
evacuations (rebuild from a down host).


By "4 allocations which are incorrect" I assume that means they are 
pointing at the correct compute node resource provider but the values 
for allocated VCPU, MEMORY_MB and DISK_GB is wrong? If so, how do the 
allocations align with old/new flavors used to resize the instance? Did 
the resize fail?


Are there mixed compute versions at all, i.e. are you moving instances 
around during a rolling upgrade?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-11-05 Thread Alex Schultz
On Mon, Nov 5, 2018 at 3:47 AM Bogdan Dobrelya  wrote:
>
> Let's also think of removing puppet-tripleo from the base container.
> It really brings the world-in (and yum updates in CI!) each job and each
> container!
> So if we did so, we should then either install puppet-tripleo and co on
> the host and bind-mount it for the docker-puppet deployment task steps
> (bad idea IMO), OR use the magical --volumes-from 
> option to mount volumes from some "puppet-config" sidecar container
> inside each of the containers being launched by docker-puppet tooling.
>

This does bring an interesting point as we also include this in
overcloud-full. I know Dan had a patch to stop using the
puppet-tripleo from the host[0] which is the opposite of this.  While
these yum updates happen a bunch in CI, they aren't super large
updates. But yes I think we need to figure out the correct way forward
with these packages.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/550848/


> On 10/31/18 6:35 PM, Alex Schultz wrote:
> >
> > So this is a single layer that is updated once and shared by all the
> > containers that inherit from it. I did notice the same thing and have
> > proposed a change in the layering of these packages last night.
> >
> > https://review.openstack.org/#/c/614371/
> >
> > In general this does raise a point about dependencies of services and
> > what the actual impact of adding new ones to projects is. Especially
> > in the container world where this might be duplicated N times
> > depending on the number of services deployed.  With the move to
> > containers, much of the sharedness that being on a single host
> > provided has been lost at a cost of increased bandwidth, memory, and
> > storage usage.
> >
> > Thanks,
> > -Alex
> >
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Tetsuro Nakamura
Thus we should only read from placement:
> * at compute node startup
> * when a write fails
> And we should only write to placement:
> * at compute node startup
> * when the virt driver tells us something has changed


I agree with this.

We could also prepare an interface for operators/other-projects to force
nova to pull fresh information from placement and put it into its cache in
order to avoid predictable conflicts.

Is that right? If it is not right, can we do that? If not, why not?


The same question from me.
The periodic-refresh strategy might now become an optional optimization for
smaller clouds?

2018年11月5日(月) 20:53 Chris Dent :

> [...]

Re: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching

2018-11-05 Thread Bogdan Dobrelya

Thank you for a reply, Flavia:


Hi Bogdan
sorry for the late reply - yesterday was a holiday here in Brazil!
I am afraid I will not be able to engage in this collaboration on
such short notice... we would have had to start this initiative a little
earlier...


That's understandable.

I had hoped, though, that a position paper is something we (all who read 
this, not just you and me) could achieve in a couple of days, without a 
lot of associated research. It is a position paper, which is not expected 
to contain formal proofs or implementation details. The vision for tooling 
is the hardest part, though, and indeed requires some time.


So let me please [tl;dr] the outcome of that position paper:

* position: given Always Available autonomy support as a starting point,
  define invariants for both operational and data storage consistency
  requirements of control/management plane (I've already drafted some in
  [0])

* vision: show that in the end that data synchronization and conflict
  resolving solution just boils down to having a causally
  consistent KVS (either causal+ or causal-RT, or lazy replication
  based, or anything like that), and cannot be achieved with *only*
  transactional distributed database, like Galera cluster. The way how
  to show that is an open question, we could refer to the existing
  papers (COPS, causal-RT, lazy replication et al) and claim they fit
  the defined invariants nicely, while transactional DB cannot fit it
  by design (its consensus protocols require majorities/quorums to
  operate and to be always available for data put/write operations).
  We can probably omit proving that obvious thing formally? At least for
  the position paper...

* opportunity: that is basically designing and implementing such a
  causally-consistent KVS solution (see COPS library as example) for
  OpenStack, and ideally, unifying it for PaaS operators
  (OpenShift/Kubernetes) and tenants willing to host their containerized
  workloads on PaaS distributed over a Fog Cloud of Edge clouds and
  leverage its data synchronization and conflict resolving solution
  as-a-service. Like Amazon dynamo DB, for example, except that fitting
  the edge cases of another cloud stack :)

[0] 
https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md



As for working collaboratively with latex, I would recommend using
overleaf - it is not that difficult and has lots of editing features,
such as markdown and track changes.
Thanks and good luck!
Flavia




On 11/2/18 5:32 PM, Bogdan Dobrelya wrote:

Hello folks.
Here is an update for today. I created a draft [0], and spent some time 
building LaTeX with live-updating for the compiled PDF... The latter is 
only informational; if someone wants to contribute, please follow the 
instructions listed at the link (hint: you don't need any LaTeX 
experience, basic markdown knowledge should be enough!)


[0] 
https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors 



On 10/31/18 6:54 PM, Ildiko Vancsa wrote:

Hi,

Thank you for sharing your proposal.

I think this is a very interesting topic with a list of possible 
solutions some of which this group is also discussing. It would also 
be great to learn more about the IEEE activities and have experience 
about the process in this group on the way forward.


I personally do not have experience with IEEE conferences, but I’m 
happy to help with the paper if I can.


Thanks,
Ildikó




(added from the parallel thread)
On 2018. Oct 31., at 19:11, Mike Bayer  
wrote:


On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya redhat.com> wrote:


(cross-posting openstack-dev)

Hello.
[tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data
consistency requirements and challenges", a position paper [0] (the paper
submission deadline is Nov 8).

The problem scope is synchronizing control plane and/or
deployment-specific data (not necessarily limited to OpenStack) across
remote Edges and a central Edge and management site(s), including the same
aspects for overclouds and undercloud(s), in terms of TripleO, and other
deployment tools of your choice.

Another problem is avoiding separate solutions for managing Edge
deployments and the control planes of edges. And for tenants as
well, if we think of tenants also doing Edge deployments based on Edge
Data Replication as a Service, say for Kubernetes/OpenShift on top of
OpenStack.

So the paper should name the outstanding problems, define data
consistency requirements, and pose possible solutions for synchronization
and conflict resolution, with maximum-autonomy cases supported for
isolated sites and a capability to eventually catch up the distributed
state. Like a global database [1], or perhaps something different (see
the causal-real-time consistency model [2],[3]), or even using git. And
probably more than that?.. (looking for ideas)



I can offer detail on whatever aspects of the "shared  / global
database" 

Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-05 Thread Chris Dent

On Sun, 4 Nov 2018, Monty Taylor wrote:

I've floated a half-baked version of this idea to a few people, but lemme try 
again with some new words.


What if we added support for serving vendor data files from the root of a 
primary URL as-per RFC 5785. Specifically, support deployers adding a json 
file to .well-known/openstack/client that would contain what we currently 
store in the openstacksdk repo and were just discussing splitting out.


Sounds like a good plan.

I'm still a bit vexed that we need to know a cloud's primary host, then
this URL, then get a URL for auth, and from there start gathering up
information about the services and then their endpoints.

All of that seems of one piece to me and there should be one way to
do it.

But in the absence of that, this is a good plan.
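As a sketch, consuming such a file would be a single extra GET against the cloud's primary host; the path follows RFC 5785, but the JSON layout below is only a guess at what deployers might serve, not a published format:

```python
# Sketch: fetch deployer-provided vendor data from an RFC 5785
# well-known path. The URL path and the JSON keys are assumptions
# based on the proposal in this thread, not an implemented standard.
import json
from urllib.parse import urljoin
from urllib.request import urlopen


def fetch_vendor_profile(cloud_root, timeout=10):
    # One extra GET against the cloud's primary host, nothing more.
    url = urljoin(cloud_root, "/.well-known/openstack/client")
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)


# The kind of document a deployer might serve there (hypothetical):
example = {
    "name": "example-cloud",
    "profile": {
        "auth": {"auth_url": "https://example.com/identity/v3"},
        "image_format": "qcow2",
    },
}
```

A client SDK could then fall back to its bundled vendor files only when the well-known URL is absent.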


What do people think?


I think cats are nice and so is this plan.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Chris Dent

On Sun, 4 Nov 2018, Jay Pipes wrote:

Now that we have generation markers protecting both providers and consumers, 
we can rely on those generations to signal to the scheduler report client 
that it needs to pull fresh information about a provider or consumer. So, 
there's really no need to automatically and blindly refresh any more.


I agree with this ^.

I've been trying to tease out the issues in this thread and on the
associated review [1] and I've decided that much of my confusion
comes from the fact that we refer to a thing which is a "cache" in
the resource tracker and either trusting it more or not having it at
all, and I think that's misleading. To me a "cache" has multiple
clients and there's some need for reconciliation and invalidation
amongst them. The thing that's in the resource tracker is in one
process, changes to it are synchronized; it's merely a data structure.

Some words follow where I try to tease things out a bit more (mostly
for my own sake, but if it helps other people, great). At the very
end there's a bit of list of suggested todos for us to consider.

What we have is a data structure which represents the resource
tracker and virtdriver's current view on what providers and
associates it is aware of. We maintain a boundary between the RT and
the virtdriver that means there's "updating" going on that sometimes
is a bit fussy to resolve (cf. recent adjustments to allocation
ratio handling).

In the old way, every now and again we get a bunch of info from
placement to confirm that our view is right and try to reconcile
things.

What we're considering moving towards is only doing that "get a
bunch of info from placement" when we fail to write to placement
because of a generation conflict.

Thus we should only read from placement:

* at compute node startup
* when a write fails

And we should only write to placement:

* at compute node startup
* when the virt driver tells us something has changed

Is that right? If it is not right, can we do that? If not, why not?

Because generations change, often, they guard against us making
changes in ignorance and allow us to write blindly and only GET when
we fail. We've got this everywhere now, let's use it. So, for
example, even if something else besides the compute is adding
traits, it's cool. We'll fail when we (the compute) try to clobber.
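That write-blindly-and-retry-on-conflict behaviour is plain optimistic concurrency; a rough sketch of the pattern follows. The client methods and the in-memory placement stand-in are hypothetical, not the real report-client API:

```python
# Sketch of generation-guarded writes: keep a local view, PUT with the
# generation we last saw, and only GET fresh data when placement
# rejects the write with a conflict. All names are illustrative.
class Conflict(Exception):
    pass


class FakePlacement:
    """In-memory stand-in for placement's provider traits API."""
    def __init__(self):
        self.generation = 0
        self.traits = set()

    def get_traits(self, rp_uuid):
        return self.generation, set(self.traits)

    def put_traits(self, rp_uuid, traits, generation):
        if generation != self.generation:
            raise Conflict()       # caller's view is stale
        self.traits = set(traits)
        self.generation += 1
        return self.generation


class ProviderView:
    """The compute-side data structure: one process, one writer."""
    def __init__(self, client, rp_uuid):
        self.client = client
        self.rp_uuid = rp_uuid
        self.generation, self.traits = client.get_traits(rp_uuid)

    def set_traits(self, traits, max_retries=3):
        for _ in range(max_retries):
            try:
                # Blind write, guarded by the generation we hold.
                self.generation = self.client.put_traits(
                    self.rp_uuid, traits, self.generation)
                self.traits = set(traits)
                return
            except Conflict:
                # Someone else changed the provider; refresh and retry.
                self.generation, self.traits = self.client.get_traits(
                    self.rp_uuid)
        raise Conflict("still conflicting after %d tries" % max_retries)
```

Note that no periodic refresh appears anywhere: reads happen only at construction (startup) and on a failed write, matching the two bullet lists above.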

Elsewhere in the thread several other topics were raised. A lot of
that boils back to "what are we actually trying to do in the
periodics?". As is often the case (and appropriately so) what we're
trying to do has evolved and accreted in an organic fashion and it
is probably time for us to re-evaluate and make sure we're doing the
right stuff. The first step is writing that down. That aspect has
always been pretty obscure or tribal to me, I presume so for others.
So doing a legit audit of that code and the goals is something we
should do.

Mohammed's comments about allocations getting out of sync are
important. I agree with him that it would be excellent if we could
go back to self-healing those, especially because of the "wait for
the computes to automagically populate everything" part he mentions.
However, that aspect, while related to this, is not quite the same
thing. The management of allocations and the management of
inventories (and "associates") is happening from different angles.

And finally, even if we turn off these refreshes to lighten the
load, placement still needs to be capable of dealing with frequent
requests, so we have something to fix there. We need to do the
analysis to find out where the cost is and implement some solutions.
At the moment we don't know where it is. It could be:

* In the database server
* In the python code that marshals the data around those calls to
  the database
* In the python code that handles the WSGI interactions
* In the web server that is talking to the python code

belmoreira's document [2] suggests some avenues of investigation
(most CPU time is in user space and not waiting) but we'd need a bit
more information to plan any concrete next steps:

* what's the web server and which wsgi configuration?
* where's the database, if it's different what's the load there?

I suspect there's a lot we can do to make our code more correct and
efficient. And beyond that there is a great deal of standard run-of-
the mill server-side caching and etag handling that we could
implement if necessary. That is: treat placement like a web app that
needs to be optimized in the usual ways.
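For instance, the run-of-the-mill ETag handling mentioned here amounts to a few lines (illustrative only, not placement's actual middleware):

```python
# Sketch of conditional-GET handling: hash the representation, and
# answer 304 Not Modified when the client's If-None-Match still
# matches, so unchanged provider data costs no body transfer.
import hashlib


def etag_for(body):
    return '"%s"' % hashlib.sha1(body).hexdigest()


def conditional_get(body, if_none_match):
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, tag, b""       # client copy is still fresh
    return 200, tag, body          # send the full representation
```

Combined with generation-aware invalidation, this would let frequent pollers pay almost nothing when nothing has changed.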

As Eric suggested at the start of the thread, this kind of
investigation is expected and normal. We've not done something
wrong. Make it, make it correct, make it fast is the process.
We're oscillating somewhere between 2 and 3.

So in terms of actions:

* I'm pretty well situated to do some deeper profiling and
  benchmarking of placement to find the elbows in that.

* It seems like Eric and Jay are probably best situated to define
  and refine what should really be going on with the 

Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-11-05 Thread Cédric Jeanneret


On 11/5/18 11:47 AM, Bogdan Dobrelya wrote:
> Let's also think of removing puppet-tripleo from the base container.
> It really brings the world-in (and yum updates in CI!) each job and each
> container!
> So if we did so, we should then either install puppet-tripleo and co on
> the host and bind-mount it for the docker-puppet deployment task steps
> (bad idea IMO), OR use the magical --volumes-from 
> option to mount volumes from some "puppet-config" sidecar container
> inside each of the containers being launched by docker-puppet tooling.

And, in addition, I'd rather see the "podman" thingy as a bind-mount,
especially since we MUST get the same version in all the calls.

> 
> On 10/31/18 6:35 PM, Alex Schultz wrote:
>>
>> So this is a single layer that is updated once and shared by all the
>> containers that inherit from it. I did notice the same thing and have
>> proposed a change in the layering of these packages last night.
>>
>> https://review.openstack.org/#/c/614371/
>>
>> In general this does raise a point about dependencies of services and
>> what the actual impact of adding new ones to projects is. Especially
>> in the container world where this might be duplicated N times
>> depending on the number of services deployed.  With the move to
>> containers, much of the sharedness that being on a single host
>> provided has been lost at a cost of increased bandwidth, memory, and
>> storage usage.
>>
>> Thanks,
>> -Alex
>>
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-11-05 Thread Bogdan Dobrelya

Let's also think of removing puppet-tripleo from the base container.
It really brings the world-in (and yum updates in CI!) each job and each 
container!
So if we did so, we should then either install puppet-tripleo and co on 
the host and bind-mount it for the docker-puppet deployment task steps 
(bad idea IMO), OR use the magical --volumes-from  
option to mount volumes from some "puppet-config" sidecar container 
inside each of the containers being launched by docker-puppet tooling.


On 10/31/18 6:35 PM, Alex Schultz wrote:


So this is a single layer that is updated once and shared by all the
containers that inherit from it. I did notice the same thing and have
proposed a change in the layering of these packages last night.

https://review.openstack.org/#/c/614371/

In general this does raise a point about dependencies of services and
what the actual impact of adding new ones to projects is. Especially
in the container world where this might be duplicated N times
depending on the number of services deployed.  With the move to
containers, much of the sharedness that being on a single host
provided has been lost at a cost of increased bandwidth, memory, and
storage usage.

Thanks,
-Alex



--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Open API 3.0 for OpenStack API

2018-11-05 Thread Edison Xiang
Hi team,

I submitted a forum session [1] named "Cross-project Open API 3.0 support".
We can discuss this further at the forum in Berlin.
Feel free to add your ideas here [2].
You are welcome to join us.
Thanks very much.

[1]
https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=open+api
[2] https://etherpad.openstack.org/p/api-berlin-forum-brainstorming


Best Regards,
Edison Xiang

On Thu, Oct 11, 2018 at 7:48 PM Gilles Dubreuil  wrote:

>
>
> On 11/10/18 00:18, Jeremy Stanley wrote:
>
> On 2018-10-10 13:24:28 +1100 (+1100), Gilles Dubreuil wrote:
>
> On 09/10/18 23:58, Jeremy Stanley wrote:
>
> On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote:
> [...]
>
> It seems to me that a major goal of openstacksdk is to hide
> differences between clouds from the user. If the user is meant
> to use a GraphQL library themselves, we lose this and the user
> needs to figure it out themselves. Did I understand that
> correctly?
>
> This is especially useful where the SDK implements business
> logic for common operations like "if the user requested A and
> the cloud supports features B+C+D then use those to fulfil the
> request, otherwise fall back to using features E+F".
>
>
> The features offered to the user don't have to change, it's just a
> different architecture.
>
> The user doesn't have to deal with a GraphQL library, only the
> client applications (consuming OpenStack APIs). And there are also
> UI tools such as GraphiQL which allow to interact directly with
> GraphQL servers.
>
>
> My point was simply that SDKs provide more than a simple translation
> of network API calls and feature discovery. There can also be rather
> a lot of "business logic" orchestrating multiple primitive API calls
> to reach some more complex outcome. The services don't want to embed
> this orchestrated business logic themselves, and it makes little
> sense to replicate the same algorithms in every single application
> which wants to make use of such composite functionality. There are
> common actions an application might wish to take which involve
> speaking to multiple APIs for different services to make specific
> calls in a particular order, perhaps feeding the results of one into
> the next.
>
> Can you explain how GraphQL eliminates the above reasons for an SDK?
>
>
> What I meant is the communication part of any SDK interfacing between
> clients and API services can be handled by GraphQL client libraries.
> So instead of having to rely on modules (imported or native) to carry the
> REST communications, we're dealing with data provided by GraphQL libraries
> (which are also modules but standardized as GraphQL is a specification).
> So, as you mentioned, there is still a need to wrap the data in
> objects or an adequate struct to present to the consumers.
>
> Having a Schema helps both API and client developers because the data is
> clearly typed and graphed. Backend devs can focus on resolving the data for
> each node/leaf while the clients can focus on what they need and not how to
> get it.
>
> To relate to $subject: by building the data model (graph) we obtain a
> schema and introspection. That's a big saver in terms of resources.
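To make the "clients ask only for what they need" property concrete, here is a stdlib-only toy (not real GraphQL, and no GraphQL library involved): a resolver walks a query-shaped dict against server-side data and returns exactly the requested fields, the way a GraphQL server does against its schema.

```python
# Toy illustration of GraphQL-style field selection: the client names
# the fields it wants, and the server returns exactly those -- no more.
def resolve(selection, source):
    """Return only the requested fields from `source`.

    `selection` maps field names to either None (a leaf) or a nested
    selection dict, mimicking the shape of a GraphQL query.
    """
    if isinstance(source, list):
        return [resolve(selection, item) for item in source]
    return {
        field: source[field] if sub is None else resolve(sub, source[field])
        for field, sub in selection.items()
    }

# Server-side data (stand-in for what a resolver would fetch from Nova).
servers = [
    {"id": "a1", "name": "web-1", "status": "ACTIVE",
     "flavor": {"name": "m1.small", "vcpus": 2}},
]

# The query "{ servers { name flavor { vcpus } } }" expressed as a dict:
query = {"name": None, "flavor": {"vcpus": None}}
print(resolve(query, servers))
# [{'name': 'web-1', 'flavor': {'vcpus': 2}}]
```

A real implementation would get the selection from a parsed GraphQL query and the typing guarantees from the schema; the point is only that nothing outside the selection crosses the wire.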
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Using externally stored keys for encryption

2018-11-05 Thread Markus Hentsch
Dear Mohammed,

with SecuStack we've been integrating end-to-end (E2E) transfer of
secrets into the OpenStack code. From your problem description, it
sounds like our implementation would address some of your points. In
the explanation below, I will refer to those secrets as "keys".

Our solution works as follows:

- when the user creates an encrypted resource, they may specify to use
E2E key transfer instead of Barbican
- the resource will be allocated and enter a state where it is waiting
for the transmission of the key
- the user establishes an E2E relationship with the compute/volume host
where the resource has been scheduled
- the key is encrypted (asymmetrically) on the user side specifically
for this host (using its public key) and transferred through the API to
this host
- the key reaches the compute/volume host, is decrypted by the host's
private key and is then used temporarily for the duration of the
resource creation and discarded afterwards

Whenever such a resource is to be used (instance booted or volume
attached), a similar workflow is triggered on demand, requiring the
key to be transferred via the E2E channel again.
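The shape of the flow above can be sketched with a toy key agreement: the user derives a secret shared with exactly one host and uses it to wrap the disk key for that host only. The Diffie-Hellman parameters and the XOR "wrap" below are deliberately toy-sized placeholders, not the actual SecuStack mechanism (which, per the description, encrypts against the host's public key).

```python
import hashlib
import secrets

# Toy parameters: 2**127 - 1 is prime, but far too small for real use.
P = 2**127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def shared_secret(priv, other_pub):
    return pow(other_pub, priv, P).to_bytes(16, "big")

def xor_wrap(key_material, data):
    # Placeholder "encryption": XOR with a hash-derived stream.
    stream = hashlib.sha256(key_material).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Host publishes its public value; user derives a host-specific wrap key.
host_priv, host_pub = dh_keypair()
user_priv, user_pub = dh_keypair()

disk_key = secrets.token_bytes(32)  # the volume's data-encryption key
wrapped = xor_wrap(shared_secret(user_priv, host_pub), disk_key)

# The host unwraps, uses the key for the boot/attach, then discards it.
unwrapped = xor_wrap(shared_secret(host_priv, user_pub), wrapped)
assert unwrapped == disk_key
```

The operational property the workflow buys is visible here: the wrapped key is useless to anyone without that one host's private value, so nothing long-lived sits in the cloud's key store.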

Our solution is complemented by an extension of the Barbican workflow
which also allows users to specify secret IDs and manage them manually
for encrypted resources instead of having OpenStack handle all of that
automatically. This represents a solution that is kind of in-between the
current OpenStack and our E2E approach. We have not looked into external
Barbican integration yet, though.

We do plan to contribute our E2E key transfer and user-centric key
control to OpenStack, if we can obtain support for this idea. However,
we are currently in the middle of trying to contribute image encryption
to OpenStack, which is already proving to be a lengthy process as it
involves a lot of different teams. The E2E stuff would be an even bigger
change across the components. Unfortunately, we currently don't have the
resources to tackle two huge contributions at the same time as it
requires a lot of effort getting multiple teams to agree on a single
solution.


Best regards,

Markus Hentsch


Mohammed Naser wrote:
> Hi everyone:
> 
> I've been digging around the documentation of Nova, Cinder and the
> encrypted disks feature and I've been a bit stumped on something which
> I think is a very relevant use case that might not be possible (or it
> is and I have totally missed it!)
> 
> It seems that both Cinder and Nova assume that secrets are always
> stored within the Barbican deployment in the same cloud.  This makes a
> lot of sense however in scenarios where the consumer of an OpenStack
> cloud wants to operate it without trusting the cloud, they won't be
> able to have encrypted volumes that make sense, an example:
> 
> - Create encrypted volume, keys are stored in Barbican
> - Boot VM using said encrypted volume, Nova pulls keys from Barbican,
> starts VM..
> 
> However, this means that the deployer can at anytime pull down the
> keys and decrypt things locally to do $bad_things.  However, if we had
> something like any of the following two ideas:
> 
> - Allow for "run-time" providing secret on boot (maybe something added
> to the start/boot VM API?)
> - Allow for pointing towards an external instance of Barbican
> 
> By using those 2, we allow OpenStack users to operate their VMs
> securely and allowing them to have control over their keys.  If they
> want to revoke all access, they can shutdown all the VMs and cut
> access to their key storage management and not worry about someone
> just pulling them down from the internal Barbican.
> 
> Hopefully I did a good job explaining this use case and I'm just
> wondering if this is a thing that's possible at the moment or if we
> perhaps need to look into it.
> 
> Thanks,
> Mohammed
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-05 Thread Mohammed Naser


Sent from my iPhone

> On Nov 5, 2018, at 10:19 AM, Thierry Carrez  wrote:
> 
> Monty Taylor wrote:
>> [...]
>> What if we added support for serving vendor data files from the root of a 
>> primary URL as-per RFC 5785. Specifically, support deployers adding a json 
>> file to .well-known/openstack/client that would contain what we currently 
>> store in the openstacksdk repo and were just discussing splitting out.
>> [...]
>> What do people think?
> 
> I love the idea of public clouds serving that file directly, and the user 
> experience you get from it. The only two drawbacks off the top of my head 
> would be:
> 
> - it's harder to discover available compliant openstack clouds from the 
> client.
> 
> - there is no vetting process, so there may be failures with weird clouds 
> serving half-baked files that people may blame the client tooling for.
> 
> I still think it's a good idea, as in theory it aligns the incentive of 
> maintaining the file with the most interested stakeholder. It just might need 
> some extra communication to work seamlessly.

I’m thinking out loud here, but perhaps a simple linter that a cloud provider 
can run would help them make sure that everything is functional. 
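A minimal version of such a linter could just validate the payload before a provider publishes it. The required keys below are assumptions for illustration; the real schema would come from the openstacksdk vendor-file format.

```python
import json

REQUIRED_KEYS = {"name", "profile"}  # hypothetical; real schema TBD

def lint_vendor_payload(text):
    """Return a list of problems found in a vendor JSON payload."""
    try:
        data = json.loads(text)
    except ValueError as exc:
        return ["not valid JSON: %s" % exc]
    if not isinstance(data, dict):
        return ["top-level value must be a JSON object"]
    missing = REQUIRED_KEYS - data.keys()
    return ["missing key: %s" % key for key in sorted(missing)]

print(lint_vendor_payload('{"name": "examplecloud"}'))
# ['missing key: profile']
```

A complete tool would first fetch `https://<cloud>/.well-known/openstack/client` and then run this check, exiting non-zero on any problem so it can gate a provider's CI.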

> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-05 Thread Thierry Carrez

Monty Taylor wrote:

[...]
What if we added support for serving vendor data files from the root of 
a primary URL as-per RFC 5785. Specifically, support deployers adding a 
json file to .well-known/openstack/client that would contain what we 
currently store in the openstacksdk repo and were just discussing 
splitting out.

[...]
What do people think?


I love the idea of public clouds serving that file directly, and the 
user experience you get from it. The only two drawbacks off the top of 
my head would be:


- it's harder to discover available compliant openstack clouds from the 
client.


- there is no vetting process, so there may be failures with weird 
clouds serving half-baked files that people may blame the client tooling 
for.


I still think it's a good idea, as in theory it aligns the incentive of 
maintaining the file with the most interested stakeholder. It just might 
need some extra communication to work seamlessly.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Belmiro Moreira
Thanks Eric for the patch.
This will help keeping placement calls under control.

Belmiro


On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes  wrote:

> On 11/02/2018 03:22 PM, Eric Fried wrote:
> > All-
> >
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set [compute]resource_provider_association_refresh to
> > zero and the resource tracker will never* refresh the report client's
> > provider cache. Philosophically, we're removing the "healing" aspect of
> > the resource tracker's periodic and trusting that placement won't
> > diverge from whatever's in our cache. (If it does, it's because the op
> > hit the CLI, in which case they should SIGHUP - see below.)
> >
> > *except:
> > - When we initially create the compute node record and bootstrap its
> > resource provider.
> > - When the virt driver's update_provider_tree makes a change,
> > update_from_provider_tree reflects them in the cache as well as pushing
> > them back to placement.
> > - If update_from_provider_tree fails, the cache is cleared and gets
> > rebuilt on the next periodic.
> > - If you send SIGHUP to the compute process, the cache is cleared.
> >
> > This should dramatically reduce the number of calls to placement from
> > the compute service. Like, to nearly zero, unless something is actually
> > changing.
> >
> > Can I get some initial feedback as to whether this is worth polishing up
> > into something real? (It will probably need a bp/spec if so.)
> >
> > [1]
> >
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
> > [2] https://review.openstack.org/#/c/614886/
> >
> > ==
> > Background
> > ==
> > In the Queens release, our friends at CERN noticed a serious spike in
> > the number of requests to placement from compute nodes, even in a
> > stable-state cloud. Given that we were in the process of adding a ton of
> > infrastructure to support sharing and nested providers, this was not
> > unexpected. Roughly, what was previously:
> >
> >   @periodic_task:
> >   GET /resource_providers/$compute_uuid
> >   GET /resource_providers/$compute_uuid/inventories
> >
> > became more like:
> >
> >   @periodic_task:
> >   # In Queens/Rocky, this would still just return the compute RP
> >   GET /resource_providers?in_tree=$compute_uuid
> >   # In Queens/Rocky, this would return nothing
> >   GET /resource_providers?member_of=...=MISC_SHARES...
> >   for each provider returned above:  # i.e. just one in Q/R
> >   GET /resource_providers/$compute_uuid/inventories
> >   GET /resource_providers/$compute_uuid/traits
> >   GET /resource_providers/$compute_uuid/aggregates
> >
> > In a cloud the size of CERN's, the load wasn't acceptable. But at the
> > time, CERN worked around the problem by disabling refreshing entirely.
> > (The fact that this seems to have worked for them is an encouraging sign
> > for the proposed code change.)
> >
> > We're not actually making use of most of that information, but it sets
> > the stage for things that we're working on in Stein and beyond, like
> > multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
> > etc., so removing/reducing the amount of information we look at isn't
> > really an option strategically.
>
> I support your idea of getting rid of the periodic refresh of the cache
> in the scheduler report client. Much of that was added in order to
> emulate the original way the resource tracker worked.
>
> Most of the behaviour in the original resource tracker (and some of the
> code still in there for dealing with (surprise!) PCI passthrough devices
> and NUMA topology) was due to doing allocations on the compute node (the
> whole claims stuff). We needed to always be syncing the state of the
> compute_nodes and pci_devices table in the cell database with whatever
> usage information was being created/modified on the compute nodes [0].
>
> All of the "healing" code that's in the resource tracker was basically
> to deal with "soft delete", migrations that didn't complete or work
> properly, and, again, to handle allocations becoming out-of-sync because
> the compute nodes were responsible for allocating (as opposed to the
> current situation we have where the placement service -- via the
> scheduler's call to claim_resources() -- is responsible for allocating
> resources [1]).
>
> Now that we have generation markers protecting both providers and
> consumers, we can rely on those generations to signal to the scheduler
> report client that it needs to pull fresh information about a provider
> or consumer. So, there's really no need to automatically and blindly
> refresh any more.
>
> Best,
> -jay
>
> [0] We always need to be syncing those tables because those tables,
> unlike the placement database's data modeling, couple both inventory AND
> usage in the same table structure...
>
> [1] again, except for PCI devices and NUMA 
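The generation-marker mechanism Jay describes can be sketched as optimistic concurrency: the report client trusts its cache and only pays for a refresh when placement rejects a write as stale. All names below are illustrative, not the actual nova/placement code.

```python
class Conflict(Exception):
    pass

class FakePlacement:
    """Stands in for the placement API with a provider generation."""
    def __init__(self):
        self.generation = 0
        self.inventory = {"VCPU": 8}

    def put_inventory(self, inventory, generation):
        if generation != self.generation:
            raise Conflict  # stale writer: refresh and retry
        self.inventory = inventory
        self.generation += 1
        return self.generation

class ReportClient:
    """Caches the provider view; refreshes only on a conflict."""
    def __init__(self, placement):
        self.placement = placement
        self.cached_gen = placement.generation
        self.refreshes = 0

    def refresh(self):
        self.refreshes += 1
        self.cached_gen = self.placement.generation

    def set_inventory(self, inventory):
        try:
            self.cached_gen = self.placement.put_inventory(
                inventory, self.cached_gen)
        except Conflict:
            self.refresh()  # pay for a refresh only when actually stale
            self.cached_gen = self.placement.put_inventory(
                inventory, self.cached_gen)

p = FakePlacement()
c = ReportClient(p)
c.set_inventory({"VCPU": 16})  # in sync: no refresh needed
p.generation += 1              # out-of-band change (e.g. operator CLI)
c.set_inventory({"VCPU": 4})   # conflict -> one refresh -> retry succeeds
print(c.refreshes)  # 1
```

This is the sense in which blind periodic refresh becomes unnecessary: in the steady state no conflicts occur, so the refresh count stays at zero.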

Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-05 Thread Cédric Jeanneret
On 11/2/18 2:39 PM, Dan Prince wrote:
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word out.

Good thing, it's important to highlight this feature and explain how it
works, big thumb up Dan!

> 
> The first is 'deploy_steps_tasks', which gives you a means to run
> Ansible snippets on each node/role in a stepwise fashion during
> deployment. Previously it was only possible to execute puppet or
> docker commands, whereas now that we have deploy_steps_tasks we can
> execute ad-hoc Ansible in the same manner.

I'm wondering if such a thing could be used for the "inflight
validations" - i.e. a step to validate that a service/container is working
as expected once it's deployed, in order to fail early.
For instance, we deploy a rabbitmq container, and right after it's
deployed, we'd like to ensure it's actually running and working as
expected before going forward with the deploy.
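For that rabbitmq case, a deploy_steps_tasks entry in a service template might look roughly like this. This is a sketch based on the tripleo-heat-templates conventions; the exact step number, container name, and check command here are assumptions to be verified against the docs patch Dan mentions:

```yaml
outputs:
  role_data:
    value:
      deploy_steps_tasks:
        # Runs on every node of the role at the given step; fails the
        # deploy early if rabbitmq is not actually answering.
        - name: Verify rabbitmq is responding
          when: step|int == 2
          command: docker exec rabbitmq rabbitmqctl status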

Care to have a look at that spec [1] and see if, instead of adding a new
"validation_tasks" entry, we could "just" use "deploy_steps_tasks" with
the right step number? That would be really, really cool, and would
probably avoid a lot of code in the end :).

Thank you!

C.

[1] https://review.openstack.org/#/c/602007/

> 
> The second is 'external_deploy_tasks', which allows you to run
> Ansible snippets on the Undercloud during stepwise deployment. This is
> probably most useful for driving an external installer, but might also
> help with some complex tasks that need to originate from a single
> Ansible client.
> 
> The only downside I see to these approaches is that both appear to be
> implemented with Ansible's default linear strategy. I saw shardy's
> comment here [2] that the 'free' strategy apparently does not yet work
> with the any_errors_fatal option. Perhaps we can reach out to someone
> in the Ansible community in this regard to improve running these
> things in parallel, like TripleO used to work with Heat agents.
> 
> This is also how host_prep_tasks is implemented, which BTW we should
> now get rid of as a duplicate architectural step, since we have
> deploy_steps_tasks anyway.
> 
> [1] https://review.openstack.org/#/c/614822/
> [2] 
> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Bug deputy report

2018-11-05 Thread 270162781
Hi all, I'm zhaobo. I was the bug deputy for the last week, and I'm afraid I
cannot attend the coming upstream meeting, so I'm sending out this report.

Last week there were some high priority bugs for neutron. What a quiet week.
;-) Some bugs also need attention; I list them here:

[High]

Race conditions in neutron_tempest_plugin/scenario/test_security_groups.py
https://bugs.launchpad.net/neutron/+bug/1801306
Tempest CI failure: the result is caused by some tempest tests operating on
the default SG, which affects other test cases.

Migration causes downtime while doing bulk_pull
https://bugs.launchpad.net/neutron/+bug/1801104
It seems the local cache is refreshed very frequently; also, if the records
we want from neutron-server are large, could we improve the query filter to
improve the performance of the agent RPC?

[Need Attention]

Network: concurrent issue for create network operation
https://bugs.launchpad.net/neutron/+bug/1800417
This looks like an existing issue on master. I think this must be an issue,
but the bug lacks some logs to dig into, so I hope the reporter can provide
more details first.

[RFE] Neutron API Server: unexpected behavior with multiple long-lived clients
https://bugs.launchpad.net/neutron/+bug/1800599
I have changed this bug to a new RFE, as it introduces a new mechanism to
make sure each request can be processed to the maximum extent possible,
without long waits on the client side.

Thanks,
Best Regards,
ZhaoBo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev