[openstack-dev] [requirements] Our job is done, time to close up shop.

2018-03-31 Thread Matthew Thode
The requirements project had a good run, but things seem to be winding
down.  We only break OpenStack a couple of times a cycle now, and that's
just not acceptable.  The graph must go up and to the right.  So, it's
time for the requirements project to close up shop.  So long and thanks
for all the fish.

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed

2018-03-31 Thread Tony Breeds
On Sat, Mar 31, 2018 at 06:17:41AM +0000, A mailing list for the OpenStack
Stable Branch test reports wrote:
> Build failed.
> 
> - build-openstack-sphinx-docs 
> http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/build-openstack-sphinx-docs/b20c665/html/
>  : SUCCESS in 5m 48s
> - openstack-tox-py27 
> http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/75db3fe/
>  : FAILURE in 11m 49s
 

I'm not sure what's going on here but as with stable/ocata the
networking-midonet periodic-stable jobs have been failing like this for
close to a week.

Can someone from that team take a look?

Yours Tony.




Re: [openstack-dev] [midonet][Openstack-stable-maint] Stable check of openstack/networking-midonet failed

2018-03-31 Thread Tony Breeds
On Sat, Mar 31, 2018 at 06:17:07AM +0000, A mailing list for the OpenStack
Stable Branch test reports wrote:
> Build failed.
> 
> - build-openstack-sphinx-docs 
> http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/ocata/build-openstack-sphinx-docs/2f351df/html/
>  : SUCCESS in 6m 25s
> - openstack-tox-py27 
> http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/ocata/openstack-tox-py27/c558974/
>  : FAILURE in 14m 22s

I'm not sure what's going on here but the networking-midonet
periodic-stable jobs have been failing like this for close to a week.

Can someone from that team take a look?

Yours Tony.




Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-31 Thread Nadathur, Sundar

Hi Eric and all,
    Thank you very much for considering my concerns and coming back 
with an improved solution. Glad that no blood was shed in the process.


I took this proposal and worked out its details, as I understand them, 
in this etherpad:

 https://etherpad.openstack.org/p/Cyborg-Nova-Multifunction
The intention of this detailed scheme is to include GPUs, FPGAs and all 
devices, but the focus may be more on FPGAs.


This scheme at first keeps the restriction that a multi-function device 
cannot be reprogrammed but, in the last section, explores which part of 
the sky will fall down if we do allow that. Maybe we'll get through 
this with tears but no blood!


Have a good rest of the weekend.

Regards,
Sundar

On 3/29/2018 9:43 AM, Eric Fried wrote:

We discussed this on IRC [1], hangout, and etherpad [2].  Here is the
summary, which we mostly seem to agree on:

There are two different classes of device we're talking about
modeling/managing.  (We don't know the real nomenclature, so forgive
errors in that regard.)

==> Fully dynamic: You can program one region with one function, and
then still program a different region with a different function, etc.

==> Single program: Once you program the card with a function, *all* its
virtual slots are *only* capable of that function until the card is
reprogrammed.  And while any slot is in use, you can't reprogram.  This
is Sundar's FPGA use case.  It is also Sylvain's VGPU use case.

The "fully dynamic" case is straightforward (in the sense of being what
placement was architected to handle).
* Model the PF/region as a resource provider.
* The RP has inventory of some generic resource class (e.g. "VGPU",
"SRIOV_NET_VF", "FPGA_FUNCTION").  Allocations consume that inventory,
plain and simple.
* As a region gets programmed dynamically, it's acceptable for the thing
doing the programming to set a trait indicating that that function is in
play.  (Sundar, this is the thing I originally said would get
resistance; but we've agreed it's okay.  No blood was shed :)
* Requests *may* use preferred traits to help them land on a card that
already has their function flashed on it. (Prerequisite: preferred
traits, which can be implemented in placement.  Candidates with the most
preferred traits get sorted highest.)

The "single program" case needs to be handled more like what Alex
describes below.  TL;DR: We do *not* support dynamic programming,
traiting, or inventorying at instance boot time - it all has to be done
"up front".
* The PFs can be initially modeled as "empty" resource providers.  Or
maybe not at all.  Either way, *they can not be deployed* in this state.
* An operator or admin (via a CLI, config file, agent like blazar or
cyborg, etc.) preprograms the PF to have the specific desired
function/configuration.
   * This may be cyborg/blazar pre-programming devices to maintain an
available set of each function
   * This may be in response to a user requesting some function, which
causes a new image to be laid down on a device so it will be available
for scheduling
   * This may be a human doing it at cloud-build time
* This results in the resource provider being (created and) set up with
the inventory and traits appropriate to that function.
* Now deploys can happen, using required traits representing the desired
function.
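
To make the "up front" step concrete, here is a minimal sketch of how the
preprogramming agent might publish the inventory and trait via the placement
REST API.  The endpoint, token, generation handling, and the CUSTOM_FPGA_GZIP
trait name are all assumptions for illustration:

    import requests

    PLACEMENT = 'http://placement.example/placement'  # assumed endpoint
    HEADERS = {
        'X-Auth-Token': 'ADMIN_TOKEN',              # assumed credentials
        'OpenStack-API-Version': 'placement 1.17',  # a trait-aware microversion
    }
    rp = 'RP_UUID'  # the preprogrammed region/PF resource provider

    # Publish inventory for the flashed function; the generation field
    # guards against concurrent updates (0 assumes a fresh provider).
    resp = requests.put(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, rp),
        headers=HEADERS,
        json={'resource_provider_generation': 0,
              'inventories': {'FPGA_FUNCTION': {'total': 8}}})
    gen = resp.json()['resource_provider_generation']

    # Tag the provider with the flashed function so deploys can use
    # required traits (CUSTOM_FPGA_GZIP is a made-up trait name).
    requests.put(
        '%s/resource_providers/%s/traits' % (PLACEMENT, rp),
        headers=HEADERS,
        json={'resource_provider_generation': gen,
              'traits': ['CUSTOM_FPGA_GZIP']})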

-efried

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-29.log.html#t2018-03-29T12:52:56
[2] https://etherpad.openstack.org/p/placement-dynamic-traiting

On 03/29/2018 07:38 AM, Alex Xu wrote:

Agreed. Whichever we tweak, inventory or traits, neither approach works.

As with VGPU, we can support a pre-programmed mode for multi-function
regions, where each region can support only one function type.

There are two reasons why Cyborg has a filter:
* to record the usage of functions in a region
* to record which function is programmed.

For #1: each region provides multiple functions, and each function can be
assigned to a VM. So we should create a ResourceProvider for the region, with
the function as the resource class. That is similar to an SR-IOV device: the
region (the PF) provides functions (VFs).

For #2, we should use a trait to distinguish the function type.

Then we no longer keep any inventory info in Cyborg, we don't need any
filter in Cyborg either, and there is no race condition anymore.
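
On the request side, a flavor could then consume one function and require the
programmed type through extra specs.  A small illustration (FPGA_FUNCTION is
the resource class above; the trait name is hypothetical):

    # Illustrative nova flavor extra specs in the resources:/trait: convention.
    # CUSTOM_FUNCTION_TYPE_GZIP is a made-up trait for the programmed type.
    extra_specs = {
        'resources:FPGA_FUNCTION': '1',
        'trait:CUSTOM_FUNCTION_TYPE_GZIP': 'required',
    }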

2018-03-29 2:48 GMT+08:00 Eric Fried:

 Sundar-

         We're running across this issue in several places right now.  One
 thing that's definitely not going to get traction is
 automatically/implicitly tweaking inventory in one resource class when
 an allocation is made on a different resource class (whether in the same
 or different RPs).

         Slightly less of a nonstarter, but still likely to get significant
 push-back, is the idea of tweaking traits on the fly.  For example, your
 vGPU case might be modeled 

Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project

2018-03-31 Thread Jeffrey Zhang
Thanks, everyone.

This thread has drifted a little off-topic. First of all, let us get back to
the topic of this mail.


**kolla-kubernetes**

The root issue for kolla-kubernetes is that it has no active contributors. If
more people were interested in this project, I would be happy to give it more
time. But leaving it under Kolla governance may not be good for its growth,
because it is a **totally different** technical stack from kolla-ansible. So
migrating it to TC governance should be the best solution.


**for kolla and kolla-ansible split**

kolla (containers) is widely used by multiple projects (TripleO, OSH), and I
have also heard that some internal projects are using it too. kolla and
kolla-ansible are well decoupled. The usage and the API kolla provides are
always stable and backward compatible. kolla images are also used in many
production environments through different deployment tools. So kolla
(containers) deserves to say it "provides production-ready containers". This
should not be seen negatively just because kolla and kolla-ansible are under
the same team governance.

A team split would let people focus on one thing and make each project look
better. But we already have two different teams, the kolla-core team and the
kolla-ansible-core team, and anyone is welcome to join either of them. In
fact, the members of these two teams are almost the same. If we split the
team now, all we gain is chaos and harder management.

I think the proper time may be when the membership of the kolla-core team
and the kolla-ansible-core team diverges (50% maybe?).


On Sun, Apr 1, 2018 at 7:16 AM, Jeremy Stanley  wrote:

> On 2018-03-31 22:07:03 +0000 (+0000), Steven Dake (stdake) wrote:
> [...]
> > The problems raised in this thread (tension - tight coupling -
> > second class citizens - stratification) were predicted early on -
> > prior to Kolla 1.0.  That prediction led to the creation of a
> > technical solution - the Kolla API.   This API permits anyone to
> > reuse the containers as they see fit if they conform their
> > implementation to the API.  The API is not specifically tied to
> > the Ansible deployment technology.  Instead the API is tied to the
> > varying requirements that various deployment teams have had in the
> > past around generalized requirements for making container
> > lifecycle management a reality while running OpenStack services
> > and their dependencies inside containers.
> [...]
>
> Thanks! That's where my fuzzy thought process was leading. Existence
> of a stable API guarantee rather than treating the API as "whatever
> kolla-ansible does" significantly increases the chances of other
> projects being able to rely on kolla's images in the long term.
> --
> Jeremy Stanley
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto

2018-03-31 Thread Eric Fried
Mr. Fire-

> nova-powervm: no open reviews
>   - in test-requirements, but not actually used?
>   - made https://review.openstack.org/558091 for it

Thanks for that.  It passed all our tests; we should merge it early next
week.

-efried



[openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto

2018-03-31 Thread Matthew Thode
Here's the current status.  I'd like to ask the projects what's keeping
them from removing pycrypto in favor of a maintained library.

Open reviews
barbican:
  - (merge conflict) https://review.openstack.org/#/c/458196
  - (merge conflict) https://review.openstack.org/#/c/544873
nova-powervm: no open reviews
  - in test-requirements, but not actually used?
  - made https://review.openstack.org/558091 for it
pyghmi:
  - (merge conflict) https://review.openstack.org/#/c/331828
  - (merge conflict) https://review.openstack.org/#/c/545465
  - (doesn't change the import) https://review.openstack.org/#/c/545182
solum: no open reviews
  - looks like only a couple of functions need changing
trove: no open reviews
  - mostly uses the random feature
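
For the projects above that mostly use pycrypto's random and cipher features,
the swap tends to be small.  A hedged sketch of the common replacements (not
taken from any of the reviews listed):

    import os

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    # pycrypto:    from Crypto import Random; Random.new().read(16)
    # replacement: os.urandom() is a CSPRNG with no extra dependency.
    key = os.urandom(32)
    nonce = os.urandom(16)

    # pycrypto:    AES.new(key, AES.MODE_CTR, counter=ctr)
    # replacement: cryptography's hazmat cipher interface.
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(b'secret data') + encryptor.finalize()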

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project

2018-03-31 Thread Jeremy Stanley
On 2018-03-31 22:07:03 +0000 (+0000), Steven Dake (stdake) wrote:
[...]
> The problems raised in this thread (tension - tight coupling -
> second class citizens - stratification) were predicted early on -
> prior to Kolla 1.0.  That prediction led to the creation of a
> technical solution - the Kolla API.   This API permits anyone to
> reuse the containers as they see fit if they conform their
> implementation to the API.  The API is not specifically tied to
> the Ansible deployment technology.  Instead the API is tied to the
> varying requirements that various deployment teams have had in the
> past around generalized requirements for making container
> lifecycle management a reality while running OpenStack services
> and their dependencies inside containers.
[...]

Thanks! That's where my fuzzy thought process was leading. Existence
of a stable API guarantee rather than treating the API as "whatever
kolla-ansible does" significantly increases the chances of other
projects being able to rely on kolla's images in the long term.
-- 
Jeremy Stanley




Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project

2018-03-31 Thread Steven Dake (stdake)



On March 31, 2018 at 12:35:31 PM, Jeremy Stanley 
(fu...@yuggoth.org) wrote:

On 2018-03-31 18:06:07 +0000 (+0000), Steven Dake (stdake) wrote:
> I appreciate your personal interest in attempting to clarify the
> Kolla mission statement.
>
> The change in the Kolla mission statement you propose is
> unnecessary.
[...]

I should probably have been more clear. The Kolla mission statement
right now says that the Kolla team produces two things: containers
and deployment tools. This may make it challenging for the team to
avoid tightly coupling their deployment tooling and images, creating
a stratification of first-class (those created by the Kolla team)
and second-class (those created by anyone else) support for
deployment tools using those images.


The problems raised in this thread (tension - tight coupling - second class 
citizens - stratification) were predicted early on - prior to Kolla 1.0.  That 
prediction led to the creation of a technical solution - the Kolla API.   This 
API permits anyone to reuse the containers as they see fit if they conform 
their implementation to the API.  The API is not specifically tied to the 
Ansible deployment technology.  Instead the API is tied to the varying 
requirements that various deployment teams have had in the past around 
generalized requirements for making container lifecycle management a reality 
while running OpenStack services and their dependencies inside containers.

Is the intent to provide "a container-oriented deployment solution
and the container images it uses" (kolla-ansible as first-class
supported deployment engine for these images) or "container images
for use by arbitrary deployment solutions, along with an example
deployment solution for use with them" (kolla-ansible on equal
footing with competing systems that make use of the same images)?

My viewpoint is that all deployment projects are already on an equal footing 
when using Kolla containers.

I would invite the TripleO team, who did the integration with the Kolla API, 
to provide their thoughts.

I haven't kept up with OSH development, but perhaps that team could provide 
their viewpoint as well.


Cheers

-steve


--
Jeremy Stanley


Re: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests

2018-03-31 Thread Eric Fried
Hi Doug, I made this [2] for you.  I tested it locally with oslo.config
master, and whereas I started off with a slightly different set of
errors than you show at [1], they were in the same suites.  Since I
didn't want to tox the world locally, I went ahead and added a
Depends-On from [3].  Let's see how it plays out.

>> [1]
http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881
[2] https://review.openstack.org/#/c/558084/
[3] https://review.openstack.org/#/c/557012/

-efried

On 03/30/2018 06:35 AM, Doug Hellmann wrote:
> Anyone?
> 
>> On Mar 28, 2018, at 1:26 PM, Doug Hellmann  wrote:
>>
>> In the course of preparing the next release of oslo.config, Ben noticed
>> that nova's unit tests fail with oslo.config master [1].
>>
>> The underlying issue is that the tests mock things that oslo.config
>> is now calling as part of determining where options are being set
>> in code. This isn't an API change in oslo.config, and it is all
>> transparent for normal uses of the library. But the mocks replace
>> os.path.exists() and open() for the entire duration of a test
>> function (not just for the isolated application code being tested),
>> and so the library behavior change surfaces as a test error.
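
One general way to avoid this class of breakage - a sketch, not necessarily
what the nova tests will end up doing - is to keep the patch active only
around the code under test instead of decorating the whole test function:

    import os
    import unittest
    from unittest import mock


    def backing_file_present(path):
        # Toy stand-in for the application code a test wants to isolate.
        return os.path.exists(path)


    class TestScopedMock(unittest.TestCase):
        def test_narrow_patch(self):
            # Library setup (e.g. oslo.config reading real files) runs
            # unmocked out here.
            with mock.patch('os.path.exists', return_value=True):
                # The patch is live only for the code under test.
                self.assertTrue(backing_file_present('/nonexistent'))
            # Outside the context manager the real function is back.
            self.assertFalse(os.path.exists('/nonexistent'))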
>>
>> I'm not really in a position to go through and clean up the use of
>> mocks in those (and other?) tests myself, and I would like to not
>> have to revert the feature work in oslo.config, especially since
>> we did it for the placement API stuff for the nova team.
>>
>> I'm looking for ideas about what to do.
>>
>> Doug
>>
>> [1] 
>> http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881


Re: [openstack-dev] [placement] Anchor/Relay Providers

2018-03-31 Thread Eric Fried
/me responds to self

Good progress has been made here.

Tetsuro solved the piece where provider summaries were only showing
resources that had been requested - with [8] they show usage information
for *all* their resources.

In order to make use of both [1] and [8], I had to shuffle them into the
same series - I put [8] first - and then balance my (heretofore) WIP [7]
on the top.  So we now have a lovely 5-part series starting at [9].

Regarding the (heretofore) WIP [7], I cleaned it up and made it ready.

QUESTION: Do we need a microversion for [8] and/or [1] and/or [7]?
Each changes the response payload content of GET /allocation_candidates,
so yes; but that content was arguably broken before, so no.  Please
comment on the patches accordingly.

-efried

> [1] https://review.openstack.org/#/c/533437/
> [2] https://bugs.launchpad.net/nova/+bug/1732731
> [3]
https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py@3308
> [4]
https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py@3062
> [5]
https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py@2658
> [6]
https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py@3303
> [7] https://review.openstack.org/#/c/558014/
[8] https://review.openstack.org/#/c/558045/
[9] https://review.openstack.org/#/c/558044/

On 03/30/2018 07:34 PM, Eric Fried wrote:
> Folks who care about placement (but especially Jay and Tetsuro)-
> 
> I was reviewing [1] and was at first very unsatisfied that we were not
> returning the anchor providers in the results.  But as I started digging
> into what it would take to fix it, I realized it's going to be
> nontrivial.  I wanted to dump my thoughts before the weekend.
> 
> 
> It should be legal to have a configuration like:
> 
> #CN1 (VCPU, MEMORY_MB)
> #/  \
> #   /agg1\agg2
> #  /  \
> # SS1SS2
> #  (DISK_GB)  (IPV4_ADDRESS)
> 
> And make a request for DISK_GB,IPV4_ADDRESS;
> And have it return a candidate including SS1 and SS2.
> 
> The CN1 resource provider acts as an "anchor" or "relay": a provider
> that doesn't provide any of the requested resource, but connects to one
> or more sharing providers that do so.
> 
> This scenario doesn't work today (see bug [2]).  Tetsuro has a partial
> fix [1].
> 
> However, whereas that fix will return you an allocation_request
> containing SS1 and SS2, neither the allocation_request nor the
> provider_summary mentions CN1.
> 
> That's bad.  Consider use cases like Nova's, where we have to land that
> allocation_request on a host: we have no good way of figuring out who
> that host is.
> 
> 
> Starting from the API, the response payload should look like:
> 
> {
> "allocation_requests": [
> {"allocations": {
> # This is missing ==>
> CN1_UUID: {"resources": {}},
> # <==
> SS1_UUID: {"resources": {"DISK_GB": 1024}},
> SS2_UUID: {"resources": {"IPV4_ADDRESS": 1}}
> }}
> ],
> "provider_summaries": {
> # This is missing ==>
> CN1_UUID: {"resources": {
> "VCPU": {"used": 123, "capacity": 456}
> }},
> # <==
> SS1_UUID: {"resources": {
> "DISK_GB": {"used": 2048, "capacity": 1048576}
> }},
> SS2_UUID: {"resources": {
> "IPV4_ADDRESS": {"used": 4, "capacity": 32}
> }}
> },
> }
> 
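
With a payload of that shape, a consumer such as nova could identify the
anchor by its empty resources entry.  A minimal sketch under that assumption:

    def find_anchors(alloc_req):
        # Providers contributing no resources are the anchors (CN1 above),
        # assuming the fixed payload lists them with empty 'resources'.
        return [rp_uuid
                for rp_uuid, alloc in alloc_req['allocations'].items()
                if not alloc['resources']]

    candidate = {
        'allocations': {
            'CN1_UUID': {'resources': {}},
            'SS1_UUID': {'resources': {'DISK_GB': 1024}},
            'SS2_UUID': {'resources': {'IPV4_ADDRESS': 1}},
        }
    }
    assert find_anchors(candidate) == ['CN1_UUID']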
> Here's why it's not working currently:
> 
> => CN1_UUID isn't in `summaries` [3]
> => because _build_provider_summaries [4] doesn't return it
> => because it's not in usages because _get_usages_by_provider_and_rc [5]
> only finds providers providing resource in that RC
> => and since CN1 isn't providing resource in any requested RC, it ain't
> included.
> 
> But we have the anchor provider's (internal) ID; it's the ns_rp_id we're
> iterating on in this loop [6].  So let's just use that to get the
> summary and add it to the mix, right?  Things that make that difficult:
> 
> => We have no convenient helper that builds a summary object without
> specifying a resource class (which is a separate problem, because it
> means resources we didn't request don't show up in the provider
> summaries either - they should).
> => We internally build these gizmos inside out - an AllocationRequest
> contains a list of AllocationRequestResource, which contains a provider
> UUID, resource class, and amount.  The latter two are required - but
> would be n/a for our anchor RP.
> 
> I played around with this and came up with something that gets us most
> of the way there [7].  It's quick and dirty: there are functional holes
> (like returning "N/A" as a resource class; and traits are missing) and
> places where things could be made more efficient.  But it's a start.
> 
> -efried
> 
> [1] 

Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-31 Thread Jeremy Stanley
On 2018-03-31 18:06:07 +0000 (+0000), Steven Dake (stdake) wrote:
> I appreciate your personal interest in attempting to clarify the
> Kolla mission statement.
> 
> The change in the Kolla mission statement you propose is
> unnecessary.
[...]

I should probably have been more clear. The Kolla mission statement
right now says that the Kolla team produces two things: containers
and deployment tools. This may make it challenging for the team to
avoid tightly coupling their deployment tooling and images, creating
a stratification of first-class (those created by the Kolla team)
and second-class (those created by anyone else) support for
deployment tools using those images.

Is the intent to provide "a container-oriented deployment solution
and the container images it uses" (kolla-ansible as first-class
supported deployment engine for these images) or "container images
for use by arbitrary deployment solutions, along with an example
deployment solution for use with them" (kolla-ansible on equal
footing with competing systems that make use of the same images)?
-- 
Jeremy Stanley




Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-31 Thread Michał Jastrzębski
So my take on the issue.

I think splitting Kolla and Kolla-Ansible into completely new projects
(including a name change and all) might look good from a purity perspective
(they're effectively separate), but it would cause chaos and damage to the
production deployments people use. While the code would be the same, do we
scrub the "kolla" name from the kolla-ansible code? Do we change config
paths? Configs land in /etc/kolla, so I guess a new project shouldn't do
that? Not to mention that operators are used to this nomenclature and build
tools around it (for example Kayobe), and there is no telling how many
production deployments would get hurt. At the same time, I don't think there
is much to gain from a split like that, so it's not really practical.

We can do this for Kolla-kubernetes, as it hasn't released 1.0, so there
won't (or shouldn't) be production environments based on it.

We already have separate core teams for Kolla and Kolla-Ansible. From my
experience, organizing PTG and other events for both (or rather all three
deliverables) together makes sense and makes scheduling of attendance much
easier.

On 31 March 2018 at 11:06, Steven Dake (stdake)  wrote:
> On March 31, 2018 at 6:45:03 AM, Jeremy Stanley (fu...@yuggoth.org) wrote:
>
> [...]
> Given this, it sounds like the current Kolla mission statement of
> "provide production-ready containers and deployment tools for
> operating OpenStack clouds" could use some adjustment to drop the
> production-ready containers aspect for further clarity. Do you
> agree?
> [...]
>
> I appreciate your personal interest in attempting to clarify the Kolla
> mission statement.
>
> The change in the Kolla mission statement you propose is unnecessary.
>
> Regards
>
> -steve
>
>
>
> Jeremy Stanley



Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-03-31 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Minwook,

I understand your concern about the security issue.
But how would that be different if the API call is passed through the Vitrage
API? The authentication from vitrage-dashboard to the Vitrage API will work,
but then Vitrage will call an external API and you’ll have the same security
issue, right? I don’t understand what the difference is between calling the
external component from vitrage-dashboard and calling it from Vitrage.

Best regards,
Ifat.

From: MinWookKim 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 29 March 2018 at 14:51
To: "'OpenStack Development Mailing List (not for usage questions)'" 

Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply.  : )

I wrote my opinion on your comment.

Why do you think the request should pass through the Vitrage API? Why can’t 
vitrage-dashboard call the check component directly?

Authentication issues:
I think the check component is a separate component based on the API.

In my opinion, if the check component has an API address separate from
Vitrage in order to receive requests from the Vitrage-dashboard, then the
Vitrage-dashboard needs to know the API address of the check component.

This can result in a request/response path that is open to anyone, bypassing
the authentication supported by OpenStack between the Vitrage-dashboard and
the check component's request/response procedure.

This is possible not only through the Vitrage-dashboard, but also with simple 
commands such as curl.
(I think it is unnecessary to implement a separate authentication system for 
the check component.)

This problem may occur if someone learns the API address of the check
component, which could cause the host and VMs to execute system commands.

what should happen if the user closes the check window before the checks are 
over? I assume that the checks will finish, but the user won’t be able to see 
the results?

If the window is closed before the check is finished, the user cannot see
the result.

To solve this problem, I think temporarily saving a list of recent results
is a solution.

By storing a temporary list (for example, up to 10 entries), the user can see
the previous results, and I think it should also be possible for the user to
empty the list.
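
A minimal sketch of such a temporary result list (all names hypothetical):

    from collections import deque

    # Keep only the most recent results; maxlen=10 mirrors the "up to 10"
    # suggestion, and the oldest entry is dropped automatically.
    recent_results = deque(maxlen=10)

    def record_result(check_id, output):
        recent_results.append({'check': check_id, 'output': output})

    def clear_results():
        # Lets the user empty the list on demand.
        recent_results.clear()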

What do you think?

Thank you.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why can’t 
vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check window 
before the checks are over? I assume that the checks will finish, but the user 
won’t be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I sent 
last time.

The flow is as follows.

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

Last time I mentioned it as an api-handler, but it would be better to call
the check component directly from the Vitrage API without having to use it.

I hope this helps you understand.

Thank you

Best Regards,
Minwook.

From: MinWookKim [mailto:delightw...@ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. : )

This proposal is a proposal that we expect to be useful from a user perspective.

From a manager's point of view, we need an implementation that minimizes the 
overhead incurred by the proposal.

The answers to some of your questions are:


 I assume that these checks will not be implemented in Vitrage, and the 
results will not be stored in Vitrage, right? Vitrage role is to be a place 
where it is easy and intuitive for the user to execute external actions/checks.

Yes, that's right. We do not need to save it to Vitrage because we just need
to check the results.
However, it is possible to implement the function directly in the
Vitrage-dashboard, separately from Vitrage, like the add-action-list panel,
but it seems that it is not enough to implement all the 

Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-31 Thread Steven Dake (stdake)
On March 31, 2018 at 6:45:03 AM, Jeremy Stanley 
(fu...@yuggoth.org) wrote:
[...]
Given this, it sounds like the current Kolla mission statement of
"provide production-ready containers and deployment tools for
operating OpenStack clouds" could use some adjustment to drop the
production-ready containers aspect for further clarity. Do you
agree?
[...]

I appreciate your personal interest in attempting to clarify the Kolla mission 
statement.

The change in the Kolla mission statement you propose is unnecessary.

Regards

-steve


Jeremy Stanley


Re: [openstack-dev] [infra][qa] Pip 10 is on the way

2018-03-31 Thread Sean McGinnis
On Sat, Mar 31, 2018 at 03:00:27PM +0000, Jeremy Stanley wrote:
> According to a notice[1] posted to the pypa-announce and
> distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
> is expected to be released in two weeks (over the April 14/15
> weekend). We know it's at least going to start breaking[2] DevStack
> and we need to come up with a plan for addressing that, but we don't
> know how much more widespread the problem might end up being, so we
> encourage everyone to try it out now where they can.
> 
> [1] https://mail.python.org/pipermail/distutils-sig/2018-March/032104.html
> [2] https://github.com/pypa/pip/issues/4805
> -- 
> Jeremy Stanley

One upcoming change is the removal of the ability to have "import pip" in
code. That change snuck into 9.0.2 (and was worked around in 9.0.3, giving
incorrect users a little more time).

I think we only found an issue with this in a library used by neutron, but
please be aware that any programmatic use of pip as a library will need to be
fixed.
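
For code that currently does "import pip", the usual fix is to invoke pip in
a subprocess instead.  A minimal sketch:

    import subprocess
    import sys

    # Broken with pip >= 10 (internals moved under pip._internal):
    #   import pip
    #   pip.main(['install', 'requests'])

    # Supported alternative: run pip in its own process.
    subprocess.check_call(
        [sys.executable, '-m', 'pip', 'install', 'requests'])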



[openstack-dev] [infra][qa] Pip 10 is on the way

2018-03-31 Thread Jeremy Stanley
According to a notice[1] posted to the pypa-announce and
distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
is expected to be released in two weeks (over the April 14/15
weekend). We know it's at least going to start breaking[2] DevStack
and we need to come up with a plan for addressing that, but we don't
know how much more widespread the problem might end up being, so we
encourage everyone to try it out now where they can.

[1] https://mail.python.org/pipermail/distutils-sig/2018-March/032104.html
[2] https://github.com/pypa/pip/issues/4805
-- 
Jeremy Stanley




Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-03-31 Thread Kashyap Chamarthy
[Meta comment: corrected the email subject: "Solar" --> "Stein"]

On Fri, Mar 30, 2018 at 04:26:43PM +0200, Kashyap Chamarthy wrote:
> The last version bump was in the "Pike" release (commit: b980df0,
> 11-Feb-2017), and we didn't do any bump during "Queens".  So it's time
> to increment the versions (which will also make us get rid of some
> backward compatibility cruft), and pick future versions of libvirt and
> QEMU.
> 
> As it stands, during the "Pike" release the advertized NEXT_MIN versions  
>  
> were set to: libvirt 1.3.1 and QEMU 2.5.0 -- but they weren't actually
>  
> bumped for the "Queens" release.  So they will now be applied for the 
>  
> "Rocky" release.  (Hmm, but note that libvirt 1.3.1 was released more 
>  
> than 2 years ago[1].)  
> 
> While at it, we should also discuss about what will be the NEXT_MIN   
>  
> libvirt and QEMU versions for the "Solar" release.  To that end, I've 
>  
> spent going through different distributions and updated the   
>  
> DistroSupportMatrix Wiki[2].   
> 
> Taking the DistroSupportMatrix into picture, for the sake of discussion,
> how about the following NEXT_MIN versions for "Solar" release:
>  
> 
> (a) libvirt: 3.2.0 (released on 23-Feb-2017)   
> 
> This satisfies most distributions, but will affect Debian "Stretch",  
>  
> as they only have 3.0.0 in the stable branch -- I've checked their
> repositories[3][4].  Although the latest update for the stable
> release "Stretch (9.4)" was released only on 10-March-2018, I don't
> think they increment libvirt and QEMU versions in stable.  Is
> there another way for "Stretch (9.4)" users to get the relevant
> versions from elsewhere?   
> 
> (b) QEMU: 2.9.0 (released on 20-Apr-2017)  
> 
> This too satisfies most distributions, but will affect Oracle Linux
> -- which seems to ship QEMU 1.5.3 (released in August 2013) with
> their "7", according to the Wiki.  And it will also affect Debian
> "Stretch" -- as it only has 2.8.0.
> 
> Can folks chime in here?
> 
> [1] 
> https://www.redhat.com/archives/libvirt-announce/2016-January/msg2.html
> [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> [3] https://packages.qa.debian.org/libv/libvirt.html
> [4] https://packages.qa.debian.org/libv/libvirt.html
> 
> -- 
> /kashyap

-- 
/kashyap
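
For context, these minimums are enforced as version tuples in nova's libvirt
driver.  An illustrative sketch using the values from this thread (constant
names approximate nova's; the check is simplified):

    # Values from this thread; names approximate nova/virt/libvirt/driver.py.
    MIN_LIBVIRT_VERSION = (1, 3, 1)       # advertised in Pike, applied in Rocky
    MIN_QEMU_VERSION = (2, 5, 0)
    NEXT_MIN_LIBVIRT_VERSION = (3, 2, 0)  # proposed for "Stein"
    NEXT_MIN_QEMU_VERSION = (2, 9, 0)

    def satisfies(current, minimum):
        # Version gates reduce to lexicographic tuple comparison.
        return current >= minimum

    # Debian Stretch ships libvirt 3.0.0 / QEMU 2.8.0, so it passes the
    # current minimums but would fail the proposed Stein ones.
    assert satisfies((3, 0, 0), MIN_LIBVIRT_VERSION)
    assert not satisfies((3, 0, 0), NEXT_MIN_LIBVIRT_VERSION)
    assert not satisfies((2, 8, 0), NEXT_MIN_QEMU_VERSION)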



Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-31 Thread Jeremy Stanley
On 2018-03-31 03:13:01 +0000 (+0000), Steven Dake (stdake) wrote:
[...]
> When contributors joined the Kolla project, we had a clear mission
> of providing containers and deployment tools.  Our ultimate
> objective was to make deployment *EASY* and solve from my
> perspective as PTL at the time what was OpenStack's number one
> pain point.
[...]

So, if I understand what you're suggesting, Kolla is a deployment
project. It uses Ansible and builds container images, but those are
merely implementation details. Other projects have found the
container images useful outside of Kolla and so the Kolla team has
attempted to be helpful in supporting their direct use in unrelated
deployment tools, but has no desire to decouple the deployment
tooling and image building components any further than necessary.

Given this, it sounds like the current Kolla mission statement of
"provide production-ready containers and deployment tools for
operating OpenStack clouds" could use some adjustment to drop the
production-ready containers aspect for further clarity. Do you
agree?
-- 
Jeremy Stanley




Re: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release

2018-03-31 Thread Kashyap Chamarthy
On Sat, Mar 31, 2018 at 03:17:52PM +0200, Kashyap Chamarthy wrote:
> On Fri, Mar 30, 2018 at 09:49:17AM -0500, Sean McGinnis wrote:

[...]

> > > Taking the DistroSupportMatrix into picture, for the sake of discussion,
> > > how about the following NEXT_MIN versions for "Solar" release:
> > >  
> > > 
> > Correction - for the "Stein" release. :)
> 
> Darn, I should've triple-checked before I assumed it is to be "Solar".
> If "Stein" is confirmed; I'll re-send this email with the correct
> release name for clarity.

It actually is:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128899.html
-- All Hail our Newest Release Name - OpenStack Stein

(That email went into 'openstack-operators' maildir for me; my filtering
fault.)

I won't start another thread; I will just leave this existing thread
intact, as people will read it as: "whatever name the 'S' release ends
up with" (as 'fungi' put it on IRC).

[...]

-- 
/kashyap



Re: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release

2018-03-31 Thread Kashyap Chamarthy
On Fri, Mar 30, 2018 at 09:49:17AM -0500, Sean McGinnis wrote:
> > While at it, we should also discuss about what will be the NEXT_MIN 
> >
> > libvirt and QEMU versions for the "Solar" release.  To that end, I've   
> >
> > spent going through different distributions and updated the 
> >
> > DistroSupportMatrix Wiki[2].   
> > 
> > Taking the DistroSupportMatrix into picture, for the sake of discussion,
> > how about the following NEXT_MIN versions for "Solar" release:  
> >
> > 
> Correction - for the "Stein" release. :)

Darn, I should've triple-checked before I assumed it was to be "Solar".
If "Stein" is confirmed, I'll re-send this email with the correct
release name for clarity.

Thanks for correcting.

-- 
/kashyap
