Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-20 Thread 少合冯
2018-03-07 10:36 GMT+08:00 Alex Xu :

>
>
> 2018-03-07 10:21 GMT+08:00 Alex Xu :
>
>>
>>
>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K :
>>
>>>
>>>
>>>
>>>
>>> *From:* Matthew Booth [mailto:mbo...@redhat.com]
>>> *Sent:* Saturday, March 3, 2018 4:15 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple
>>> functions
>>>
>>>
>>>
>>> On 2 March 2018 at 14:31, Jay Pipes  wrote:
>>>
>>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>>>
>>> Hello Nova team,
>>>
>>>  During the Cyborg discussion at Rocky PTG, we proposed a flow for
>>> FPGAs wherein the request spec asks for a device type as a resource class,
>>> and optionally a function (such as encryption) in the extra specs. This
>>> does not seem to work well for the usage model that I’ll describe below.
>>>
>>> An FPGA device may implement more than one function. For example, it may
>>> implement both compression and encryption. Say a cluster has 10 devices of
>>> device type X, and each of them is programmed to offer 2 instances of
>>> function A and 4 instances of function B. More specifically, the device may
>>> implement 6 PCI functions, with 2 of them tied to function A, and the other
>>> 4 tied to function B. So, we could have 6 separate instances accessing
>>> functions on the same device.
>>>
>>>
>>>
>>> Does this imply that Cyborg can't reprogram the FPGA at all?
>>>
>>> *[Mooney, Sean K] Cyborg is intended to support fixed-function
>>> accelerators also, so it will not always be able to program the accelerator.
>>> In the case where an FPGA is preprogrammed with a multi-function bitstream
>>> that is statically provisioned, Cyborg will not be able to reprogram the
>>> slot if any of the functions from that slot are already allocated to an
>>> instance. In this case it will have to treat it like a fixed-function
>>> device and simply allocate an unused VF of the correct type, if available. *
>>>
>>>
>>>
>>>
>>>
>>> In the current flow, the device type X is modeled as a resource class,
>>> so Placement will count how many of them are in use. A flavor for ‘RC
>>> device-type-X + function A’ will consume one instance of the RC
>>> device-type-X.  But this is not right because this precludes other
>>> functions on the same device instance from getting used.
>>>
>>> One way to solve this is to declare functions A and B as resource
>>> classes themselves and have the flavor request the function RC. Placement
>>> will then correctly count the function instances. However, there is still a
>>> problem: if the requested function A is not available, Placement will
>>> return an empty list of RPs, but we need some way to reprogram some device
>>> to create an instance of function A.
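To make the counting problem concrete, here is a small illustrative sketch (plain Python, not actual Placement code; the CUSTOM_* resource class names are assumptions) showing why counting only the device-type resource class over-consumes, while per-function resource classes count correctly:

```python
# One device of type X, programmed with 2 x function A and 4 x function B.
from collections import Counter

inventory = Counter({
    "CUSTOM_FPGA_X": 1,          # device-level RC: only 1 unit total
    "CUSTOM_FPGA_X_FN_A": 2,     # function-level RCs: 6 units total
    "CUSTOM_FPGA_X_FN_B": 4,
})

def allocate(usage, rc):
    """Consume one unit of resource class `rc` if a unit is still free."""
    if usage[rc] < inventory[rc]:
        usage[rc] += 1
        return True
    return False

# Requesting against the device RC: the second request fails even though
# 5 more PCI functions are still free on the device.
usage = Counter()
assert allocate(usage, "CUSTOM_FPGA_X")
assert not allocate(usage, "CUSTOM_FPGA_X")

# Requesting against function RCs: all 6 instances fit.
usage = Counter()
granted = sum(allocate(usage, "CUSTOM_FPGA_X_FN_A") for _ in range(2))
granted += sum(allocate(usage, "CUSTOM_FPGA_X_FN_B") for _ in range(4))
assert granted == 6
```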
>>>
>>>
>>> Clearly, nova is not going to be reprogramming devices with an instance
>>> of a particular function.
>>>
>>> Cyborg might need to have a separate agent that listens to the nova
>>> notifications queue and upon seeing an event that indicates a failed build
>>> due to lack of resources, then Cyborg can try and reprogram a device and
>>> then try rebuilding the original request.
>>>
>>>
>>>
>>> It was my understanding from that discussion that we intend to insert
>>> Cyborg into the spawn workflow for device configuration in the same way
>>> that we currently insert resources provided by Cinder and Neutron. So while
>>> Nova won't be reprogramming a device, it will be calling out to Cyborg to
>>> reprogram a device, and waiting while that happens.
>>>
>>> My understanding is (and I concede some areas are a little hazy):
>>>
>>> * The flavor says device type X with function Y
>>>
>>> * Placement tells us everywhere with device type X
>>>
>>> * A weigher orders these by devices which already have an available
>>> function Y (where is this metadata stored?)
>>>
>>> * Nova schedules to host Z
>>>
>>> * Nova host Z asks cyborg for a local function Y and blocks
>>>
>>>   * Cyborg hopefully returns function Y which is already available
>>>
>>>   * If not, Cyborg reprograms a function Y, then returns it
>>>
>>> Can anybody correct me/fill in the gaps?
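Hedging heavily (none of these names are real Nova or Cyborg APIs, just a rough model of the flow described above with a stubbed Cyborg):

```python
def schedule(flavor, placement, cyborg):
    # Placement returns every host with device type X.
    hosts = placement.hosts_with(flavor["device_type"])
    # A weigher prefers hosts that already expose the wanted function.
    hosts.sort(key=lambda h: cyborg.available(h, flavor["function"]),
               reverse=True)
    host = hosts[0]
    # On the chosen host, ask Cyborg for the function and block;
    # Cyborg "reprograms" a slot if nothing is free.
    return host, cyborg.acquire(host, flavor["function"])

class StubCyborg:
    def __init__(self, free):              # {host: {function: count}}
        self.free = free
    def available(self, host, fn):
        return self.free.get(host, {}).get(fn, 0)
    def acquire(self, host, fn):
        if self.available(host, fn) == 0:
            self.free.setdefault(host, {})[fn] = 1   # reprogram one slot
        self.free[host][fn] -= 1
        return fn

class StubPlacement:
    def hosts_with(self, device_type):
        return ["hostA", "hostB"]

cyborg = StubCyborg({"hostA": {"Y": 0}, "hostB": {"Y": 2}})
host, fn = schedule({"device_type": "X", "function": "Y"},
                    StubPlacement(), cyborg)
assert host == "hostB" and fn == "Y"     # weigher picked the ready host
```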
>>>
>>> *[Mooney, Sean K] That correlates closely to my recollection also. As
>>> for the metadata, I think the weigher may need to call out to Cyborg to
>>> retrieve it, as it will not be available in the host state object.*
>>>
>> Is it the nova scheduler weigher, or do we want to support weighing in
>> Placement? A function is a trait, I think, so can we have preferred_traits?
>> I remember we talked about that parameter in the past, but we didn't have
>> a good use case at that time. This is a good use case.
>>
>
> If we call Cyborg from the nova scheduler weigher, that will also slow
> down the scheduling a lot.
>

I'm not sure how large the performance loss would be.
But one nova scheduler weigher call to the Cyborg API (to get all the
accelerators

Re: [openstack-dev] [cinder] Support share backup to different projects?

2018-03-20 Thread Jay S Bryant

Tommy,

I am still not sure that this is going to move the team to a different 
decision.


Now that you have more information, you can propose it as a topic in 
tomorrow's team meeting if you wish.


Jay


On 3/20/2018 8:54 PM, TommyLike Hu wrote:


Thanks Jay,
    The question is that AWS doesn't have the concept of a backup; their 
snapshot is an incremental backup internally and is finally stored 
in S3, which sounds more like a backup to us. Our snapshot cannot 
be used across AZs.


Jay S Bryant wrote on Wed, Mar 21, 2018, 4:13 AM:




On 3/19/2018 10:55 PM, TommyLike Hu wrote:

Now Cinder can transfer volumes (with or without snapshots) to
different projects, and this makes it possible to transfer data
across tenants via volume or image. Recently we had a conversation
with our customer from Germany; they mentioned they would be more
pleased if we could support transferring data across tenants via
backup, not image or volume. Below are some of their concerns:

1. There is a use case where they would like to deploy their
develop/test/product systems in the same region but within
different tenants, so they have the requirement to share/transfer
data across tenants.

2. Users are more willing to use backups to secure/store their
volume data, since the backup feature is more advanced in production
OpenStack versions (incremental backups, periodic backups, etc.).

3. Volume transfer is not a valid option, as it is limited to one AZ
and is a complicated process if we would like to share the data with
multiple projects (keeping a copy in all the tenants).

4. Most users would like to use images for bootable volumes only,
and sharing volume data via image means users have to maintain lots
of image copies whenever the volume backup changes; the whole system
also needs to differentiate bootable and non-bootable images. Most
important, we cannot restore volume data via an image now.

5. The easiest way to do this seems to be to support sharing a
backup with different projects: the owner project has full
authority, while shared projects can only view/read the backups.

6. AWS has a similar concept, shared snapshots. A snapshot can be
shared by modifying its create-volume permissions [1].

Looking forward to any like or dislike or suggestion on this idea,
according to my feature proposal experience :)


Thanks
TommyLike


[1]:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
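As a purely illustrative sketch of the semantics proposed in points 5 and 6 (hypothetical class and method names, not Cinder code): the owner project keeps full authority over a backup, while projects it is shared with get read-only visibility.

```python
class SharedBackup:
    """Toy model of a backup shareable across projects."""

    def __init__(self, owner):
        self.owner = owner
        self.readers = set()

    def share_with(self, requester, project):
        # Only the owner project may grant access.
        if requester != self.owner:
            raise PermissionError("only the owner may share")
        self.readers.add(project)

    def can_read(self, project):
        return project == self.owner or project in self.readers

    def can_delete(self, project):
        # Shared projects are read-only; only the owner has full authority.
        return project == self.owner

b = SharedBackup(owner="prod")
b.share_with("prod", "dev")
assert b.can_read("dev") and not b.can_delete("dev")
assert b.can_delete("prod")
```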


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Tommy,

As discussed at the PTG, this still sounds like improper usage of
Backup.  Happy to hear input from others, but I am having trouble
getting my head around it.

The idea of sharing a snapshot, which you mention AWS supports, sounds
like it could be a more sensible approach.  Why are you not
proposing that?

Jay








[openstack-dev] [infra][dib] Gate "out of disk" errors and diskimage-builder 2.12.0

2018-03-20 Thread Ian Wienand

Hi,

We had a small issue with dib's 2.12.0 release that means it creates
the root partition with the wrong partition type [1].  The result is
that a very old check in sfdisk fails, and growpart then can not
expand the disk -- which means you may have seen jobs that usually
work fine run out of disk space.

This slipped by because our functional testing doesn't test growpart;
an oversight we will correct in due course.

The bad images should have been removed, so a recheck should work.

We will prepare dib 2.12.1 with the fix.  As usual there are
complications, since the dib gate is broken due to unrelated TripleO
issues [2].  In the meantime, probably avoid 2.12.0 if you can.

Thanks,

-i

[1] https://review.openstack.org/554771
[2] https://review.openstack.org/554705



[openstack-dev] [congress] No meeting on 3/23

2018-03-20 Thread Eric K
IRC weekly meeting resumes on 3/30.


[openstack-dev] [cyborg]Team Weekly Meeting 2018.03.21

2018-03-20 Thread Zhipeng Huang
Hi Team,

Meeting today starting at UTC 1400 in #openstack-cyborg; initial agenda as
follows:

1. Sub-team lead progress update
2. rocky spec/patch review:
https://review.openstack.org/#/q/status:open+project:openstack/cyborg



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [Neutron][vpnaas]

2018-03-20 Thread hoan...@vn.fujitsu.com
Hi,

IIUC, your use case is to connect 4 subnets from different sites (2 subnets 
at each site). If so, did you try endpoint groups?
If not, please refer to the following docs for more detail about how to try 
them and to get more understanding [1][2]

[1] 
https://docs.openstack.org/neutron/latest/admin/vpnaas-scenario.html#using-vpnaas-with-endpoint-group-recommended
[2] 
https://docs.openstack.org/neutron-vpnaas/latest/contributor/multiple-local-subnets.html
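For reference, the endpoint-group flow from the docs above looks roughly like the following (command forms taken from the linked guides; all IDs, CIDRs, and group names are placeholders to adapt to your deployment):

```
# Group the two local subnets and the two peer-side CIDRs
openstack vpn endpoint group create --type subnet \
    --value <subnet1-id> --value <subnet2-id> local-eps
openstack vpn endpoint group create --type cidr \
    --value 10.2.0.0/24 --value 10.3.0.0/24 peer-eps

# Reference both groups from one site connection instead of
# creating a separate VPN connection per subnet
openstack vpn ipsec site connection create conn1 \
    --vpnservice <service-id> --ikepolicy <ike-id> --ipsecpolicy <ipsec-id> \
    --peer-address <peer-ip> --peer-id <peer-ip> --psk <secret> \
    --local-endpoint-group local-eps --peer-endpoint-group peer-eps
```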


BRs,
Cao Xuan Hoang,

From: vidyadhar reddy [mailto:vidyadharredd...@gmail.com]
Sent: Tuesday, March 20, 2018 4:31 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][vpnaas]

Hello,
I have a general question regarding the working of VPNaaS:
can we set up multiple VPN connections on a single router? My scenario: let's 
say we have two networks, net1 and net2, in two different sites; each network 
has two subnets. The two sites have one router each, with three interfaces: 
one for the public network and the remaining two for the two subnets. Can we 
set up two VPNaaS connections on the routers in each site to enable 
communication between the two subnets in each site?
I have tried this setup and it didn't work for me. I just wanted to know 
whether this is a design constraint or not. Is there any development going 
on for this, or has it already been solved?

BR,
Vidyadhar reddy peddireddy


Re: [openstack-dev] [neutron]Does neutron-server support the main backup redundancy?

2018-03-20 Thread Kevin Benton
You can run as many neutron server processes as you want in an
active/active setup.
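For example (a hypothetical haproxy fragment with placeholder names and addresses, not a recommendation for any specific deployment), several neutron-server processes can sit behind one VIP in an active/active arrangement:

```
# Load-balance the Neutron API across three controller nodes
listen neutron-api
    bind 192.0.2.10:9696
    balance roundrobin
    server ctl1 192.0.2.11:9696 check
    server ctl2 192.0.2.12:9696 check
    server ctl3 192.0.2.13:9696 check
```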

On Tue, Mar 20, 2018, 18:35 Frank Wang  wrote:

> Hi All,
>  As far as I know, neutron-server can only run as a single node. In order
> to improve the reliability of the system, does it support main/backup
> or active/active redundancy? Any comment would be appreciated.
>
> Thanks,
>
>
>


Re: [openstack-dev] [cinder] Support share backup to different projects?

2018-03-20 Thread TommyLike Hu
Thanks Jay,
The question is that AWS doesn't have the concept of a backup; their
snapshot is an incremental backup internally and will finally be stored in
S3, which sounds more like a backup to us. Our snapshot cannot be used
across AZs.

Jay S Bryant wrote on Wed, Mar 21, 2018, 4:13 AM:

>
>
> On 3/19/2018 10:55 PM, TommyLike Hu wrote:
>
> Now Cinder can transfer volumes (with or without snapshots) to different
> projects, and this makes it possible to transfer data across tenants via
> volume or image. Recently we had a conversation with our customer from
> Germany; they mentioned they would be more pleased if we could support
> transferring data across tenants via backup, not image or volume. Below
> are some of their concerns:
>
> 1. There is a use case where they would like to deploy their
> develop/test/product systems in the same region but within different
> tenants, so they have the requirement to share/transfer data across tenants.
>
> 2. Users are more willing to use backups to secure/store their volume data,
> since the backup feature is more advanced in production OpenStack versions
> (incremental backups, periodic backups, etc.).
>
> 3. Volume transfer is not a valid option, as it is limited to one AZ and is
> a complicated process if we would like to share the data with multiple
> projects (keeping a copy in all the tenants).
>
> 4. Most users would like to use images for bootable volumes only, and
> sharing volume data via image means users have to maintain lots of image
> copies whenever the volume backup changes; the whole system also needs to
> differentiate bootable and non-bootable images. Most important, we cannot
> restore volume data via an image now.
>
> 5. The easiest way to do this seems to be to support sharing a backup with
> different projects: the owner project has full authority, while shared
> projects can only view/read the backups.
>
> 6. AWS has a similar concept, shared snapshots. A snapshot can be shared by
> modifying its create-volume permissions [1].
>
> Looking forward to any like or dislike or suggestion on this idea,
> according to my feature proposal experience :)
>
>
> Thanks
> TommyLike
>
>
> [1]:
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
>
>
>
> Tommy,
>
> As discussed at the PTG, this still sounds like improper usage of Backup.
> Happy to hear input from others, but I am having trouble getting my head
> around it.
>
> The idea of sharing a snapshot, which you mention AWS supports, sounds like
> it could be a more sensible approach.  Why are you not proposing that?
>
> Jay
>
>


Re: [openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday

2018-03-20 Thread melanie witt

On Tue, 20 Mar 2018 19:12:58 -0500, Matt Riedemann wrote:

    *  XenAPI: support non-file-system-based SR types, e.g. LVM, iSCSI
      * Currently xenapi is only file-system-based and cannot yet support
the LVM and iSCSI SR types that XenServer supports
      * We agreed that a specless blueprint is fine for this:
https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement


This blueprint isn't approved yet. Is someone going to bring it up in
the nova meeting, or are we just going to approve it, since there was
agreement to do so at the PTG?


I'll bring it up at the next meeting. I've added it to the open 
discussion section of the agenda.


-melanie




[openstack-dev] [neutron]Does neutron-server support the main backup redundancy?

2018-03-20 Thread Frank Wang
Hi All,
 As far as I know, neutron-server can only run as a single node. In order to 
improve the reliability of the system, does it support main/backup or 
active/active redundancy? Any comment would be appreciated.

Thanks,


Re: [openstack-dev] [nova] Rocky spec review day

2018-03-20 Thread Matt Riedemann

On 3/20/2018 6:47 PM, melanie witt wrote:
I was thinking that 2-3 weeks ahead of spec freeze would be appropriate, 
so that would be March 27 (next week) or April 3 if we do it on a Tuesday.


It's spring break here on April 3 so I'll be listening to screaming 
kids, I mean on vacation. Not that my schedule matters, just FYI.


But regardless of that, I think the earlier the better to flush out 
what's already there, since we've already approved quite a few 
blueprints this cycle (32 so far).


--

Thanks,

Matt



Re: [openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday

2018-03-20 Thread Matt Riedemann

On 3/20/2018 5:57 PM, melanie witt wrote:
     * For rebuild, we're going to defer the instance.save() until 
conductor has passed scheduling and before it casts to compute in order 
to address the issue of rolling back instance values if something fails 
during rebuild scheduling


I got to thinking about why the API does the instance.save() before 
casting to conductor, and realized that if we changed that, the POST 
response for rebuild would be different, because the handler code looks 
up the updated instance from the DB to form the response body. So if we 
move the save() to conductor, the response body will change, and that's a 
behavior change, unless there is another way to handle this without 
duplicating a bunch of logic.



   *  XenAPI: support non-file-system-based SR types, e.g. LVM, iSCSI
     * Currently xenapi is only file-system-based and cannot yet support 
the LVM and iSCSI SR types that XenServer supports
     * We agreed that a specless blueprint is fine for this: 
https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement



This blueprint isn't approved yet. Is someone going to bring it up in 
the nova meeting, or are we just going to approve it, since there was 
agreement to do so at the PTG?



   * Block device mapping creation races during attach volume
     * We agreed to create a nova-manage command to do BDM clean up and 
then add a unique constraint in S
     * mriedem will restore the device name spec and someone else can 
pick it up


The spec is now restored:

https://review.openstack.org/#/c/452546/

But I don't know who was going to take it over (dansmith?).


   * Validate policy when creating a server group
     * We can currently create a server group that has no policies (empty 
policies). We can create a server with it, but all related scheduler 
filters return True, so it is useless

     * Spec: https://review.openstack.org/#/c/546484
     * We agreed this should be a simple thing to do; spec review is 
underway. We also said we should consider lumping some other trivial 
API cleanup into the same microversion, since we have a lot of TODOs for 
similar stuff like this in the API


I think https://review.openstack.org/#/c/546925/ will supersede ^ so we 
should probably hold off on Takashi's spec until we know for sure what 
we're doing about the hard-affinity policy limit stuff.


--

Thanks,

Matt



[openstack-dev] [nova] Rocky spec review day

2018-03-20 Thread melanie witt

Hi everybody,

The past several cycles, we've had a spec review day in the cycle where 
reviewers focus on specs and iterate quickly with spec authors for the 
day. Spec freeze is April 19, so I wanted to get some input from all of 
you about what day would work best for a spec review day.


I was thinking that 2-3 weeks ahead of spec freeze would be appropriate, 
so that would be March 27 (next week) or April 3 if we do it on a Tuesday.


Please let me know what you think and suggest other days that might work 
better.


Best,
-melanie



[openstack-dev] [nova] Review runways this cycle

2018-03-20 Thread melanie witt

Hello Stackers,

As mentioned in the earlier "Rocky PTG summary - miscellaneous topics 
from Friday" email, this cycle we're going to experiment with a 
"runways" system for focusing review on approved blueprints in 
time-boxes. The goal here is to use a bit more structure and process in 
order to focus review and complete merging of approved work more quickly 
and reliably.


We were thinking of starting the runways process after the spec review 
freeze (which is April 19) so that reviewers won't be split between spec 
reviews and reviews of work in runways.


The process and instructions are explained in detail on this etherpad, 
which will also serve as the place we queue and track blueprints for 
runways:


https://etherpad.openstack.org/p/nova-runways-rocky

Please bear with us as this is highly experimental and we will be giving 
it a go knowing it's imperfect and adjusting the process iteratively as 
we learn from it.


Do check out the etherpad and ask questions on this thread or on IRC and 
we'll do our best to answer them.


Cheers,
-melanie





[openstack-dev] [tc] [all] TC Report 18-12

2018-03-20 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-12.html

This week's TC Report goes off in the weeds a bit with the editorial
commentary from yours truly. I had trouble getting started, so had
to push myself through some thinking by writing stuff that at least
for the last few weeks I wouldn't normally be including in the
summaries. After getting through it, I realized that the reason I
was struggling is because I haven't been including these sorts of
things. Including them results in a longer and more meandering report
but it is more authentically my experience, which was my original
intention.

# Zuul Extraction and the Difficult Nature of Communication

Last [Tuesday
Morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T17:22:38)
we had some initial discussion about Zuul being extracted from
OpenStack governance as a precursor to becoming part of the CI/CD
strategic area being born elsewhere in the OpenStack Foundation.

Then on 
[Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:08:06)
we revisited the topic, especially as it related to how we
communicate change in the community and how we invite participation
in making decisions about change. In this case by "community" we're
talking about anything under the giant umbrella of "stuff associated
with the OpenStack Foundation".

Plenty of people expressed that, though they were not surprised by
the change themselves, that was because they are insiders, and they
could understand how some who are not might be surprised by what seemed
like a big change. This led to addressing the immediate shortcomings and
clarifying the history of the event.

There was also
[concern](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:27:22)
that some of the reluctance to talk openly about the change appeared
to stem from needing to preserve the potency of a Foundation marketing
release.

I [expressed some
frustration](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:36:50):
"...as usual, we're getting caught up in
details of a particular event (one that in the end we're all happy
to see happen), rather than the general problem we saw with it
(early transparency etc). Solving the immediate problem is easy, but
since we _keep doing it_, we've got a general issues to resolve."

We went round and round about the various ways in which we have tried
and failed to do good communication in the past, and while we make
some progress, we fail to establish a pattern. As Doug [pointed
out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:41:33),
no method can be 100% successful, but if we pick a method and stick to
it, people can learn that method.

We have a cycle where we not only sometimes communicate poorly but
we also communicate poorly about that poor communication. So when I
come round to another week of writing this report, and am reminded
that these issues persist and I am once again communicating about
them, it's frustrating. Communicating, a lot, is generally a good
thing, but if things don't change as a result, that can be a strain.
If I'm still writing these things in a year's time, and we haven't
managed to achieve at least a bit more grace, consistency, and
transparency in the ways that we share information within and
between groups (including, and maybe especially, the Foundation
executive wing) in the wider community, it will be a shame and I will
have a sad.

In a somewhat related and good sign, there is [great
thread](http://lists.openstack.org/pipermail/openstack-operators/2018-March/014994.html)
on the operators list that raises the potential of merging the Ops
Meeting and the PTG into some kind of "OpenStack Community Working
Gathering".

# Encouraging Upstream Contribution

On
[Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-16.log.html#t2018-03-16T14:29:21),
tbarron raised some interesting questions about how the summit talk
selection process might relate to the [four
opens](https://governance.openstack.org/tc/reference/opens.html).  The
talk eventually led to a positive plan to try to bring some potential
contributors upstream in advance of the summit, as well as to work to
create clearer guidelines for track chairs.

# Executive Power

I had a question at [this morning's office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-20.log.html#t2018-03-20T09:00:00),
related to some work in the API-SIG that hasn't had a lot of traction,
about how best to explain how executive power is gained and spent
in a community where we intentionally spread power around a lot. As
with communication above, this is a topic that comes up a fair
amount, and investigating the underlying patterns can be
instructive.

My initial reaction on the topic 

[openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday

2018-03-20 Thread melanie witt

Howdy all,

I've put together an etherpad [0] with summaries of the items from the 
Friday miscellaneous session from the PTG at the Croke Park Hotel "game 
room" across from the bar area. I didn't summarize all of the items, but 
attempted to do so for most of them, namely the ones that had 
discussion/decisions about them.


Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-misc-summary

*Friday Miscellaneous: Rocky PTG Summary

https://etherpad.openstack.org/p/nova-ptg-rocky L281

*Key topics

  * Team / review policy
  * Technical debt and cleanup
* Removing nova-network and legacy cells v1
* Community goal to remove usage of mox3 in unit tests
* Dropping support of running nova-api and the metadata API service 
under eventlet

* Cruft surrounding rebuild and evacuate
* Bumping the minimum required version of libvirt
* Nova's 'enabled_perf_events' feature will be broken with Linux 
Kernel 4.14+ (the feature has been removed from the kernel)

  * Miscellaneous topics from the PTG etherpad

*Agreements and decisions

  * On team / review policy, for the Rocky cycle we're going to 
experiment with a process for "runways" wherein we'll focus review 
bandwidth on selected blueprints in 2 week time-boxes

* Details here: https://etherpad.openstack.org/p/nova-runways-rocky
  * On technical debt and cleanup:
* We're going to remove nova-network this cycle and see how it 
goes. Then we'll look toward removing legacy cells v1.
* NOTE: If you're planning to work on the community-wide goal of 
removing mox3 usage, don't bother refactoring nova-network and legacy 
cells v1 unit tests. Those tests will be entirely removed soon-ish.
* We're going to dump a warning on service startup and add a 
release note for deprecation, and plan for removal of support for 
running nova-api and the metadata API service under eventlet in S

  * Patch: https://review.openstack.org/#/c/549510/
* For rebuild, we're going to defer the instance.save() until 
conductor has passed scheduling and before it casts to compute in order 
to address the issue of rolling back instance values if something fails 
during rebuild scheduling
  * For future work on rebuild tech debt, there was an idea to 
deprecate "evacuate" and add an option to rebuild like "--elsewhere" to 
collapse the two into using nearly the same code path. Evacuate is a 
rebuild and it would be nice to represent it as such. Someone would need 
to write up a spec for this.
* We're going to bump the minimum required libvirt version: 
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

  * kashyap is going to do this
* We're going to log a warning if enabled_perf_events is set in 
nova.conf and mark it as deprecated for removal

  * kashyap is going to do this
  * Abort Cold Migration
* This would add a new API and a significant amount of complexity, as 
it is prone to race conditions (for example, an abort request lands just 
after the disk migration has finished, requiring the original instance 
to be restored), etc.
* We would like to have greater interest from operators for the 
feature before going down that path
* takashin will email openstack-operat...@lists.openstack.org to 
ask if there is broader interest in the feature

  * Abort live migrations in queued status
* We agreed this is reasonable functionality to add, just need to 
work out the details on the spec
* Kevin_Zheng will update the spec: 
https://review.openstack.org/#/c/536722/

  * Adding request_id field to migrations object
* The goal here is to be able to look up the instance action for a 
failed migration to determine why it failed, and the request_id is 
needed to look up the instance action.
* We agreed to add the request_id to the instance action notifications 
instead, and gibi will do this: 
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
  * Returning Flavor Extra Specs in GET /flavors/detail and GET 
/flavors/{flavor_id}
* 
https://blueprints.launchpad.net/nova/+spec/add-extra-specs-to-flavor-list
* Doing this would create parity between the servers API (when 
showing the instance.flavor) and the flavors API
* We agreed to add a new microversion and implement it the same way 
as we have for instance.flavor using policy as the control on whether to 
show the extra specs
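A rough sketch of what the agreed behavior could look like — a
policy-gated view, the same way embedded instance.flavor works. The
function name and dict shapes are illustrative, not the actual Nova
code:

```python
# Illustrative sketch only: policy-gated extra_specs in a flavor view.
# Names (flavor_view, can_show_extra_specs) are invented, not Nova's.

def flavor_view(flavor, context, can_show_extra_specs):
    """Build the API representation of a flavor.

    extra_specs are only included when the caller passes the policy
    check, mirroring how the embedded instance.flavor is shown.
    """
    view = {
        'id': flavor['id'],
        'name': flavor['name'],
        'vcpus': flavor['vcpus'],
        'ram': flavor['ram'],
    }
    if can_show_extra_specs(context):
        view['extra_specs'] = dict(flavor.get('extra_specs', {}))
    return view


def policy(ctx):
    # Stand-in for a real policy check such as os-flavor-extra-specs
    return ctx['is_admin']


admin_ctx = {'is_admin': True}
user_ctx = {'is_admin': False}
flavor = {'id': '1', 'name': 'm1.tiny', 'vcpus': 1, 'ram': 512,
          'extra_specs': {'hw:cpu_policy': 'dedicated'}}

assert 'extra_specs' in flavor_view(flavor, admin_ctx, policy)
assert 'extra_specs' not in flavor_view(flavor, user_ctx, policy)
```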

  * Adding host and Error Code field to instance action event
* We agreed that it would be reasonable to add a new microversion 
to add the host (how it's shown to be based on a policy check) to the 
instance action event but the error code is a much more complex, 
cross-project, community-wide effort so we're not going to pursue that 
for now

* Spec for adding host: https://review.openstack.org/#/c/543277/
  * Allow specifying tolerance for (soft)(anti-)affinity groups
* This requirement is about adding an attribute to the group to 
limit the amount of how hard the 

[openstack-dev] [First Contact] [SIG] Meeting Today!

2018-03-20 Thread Kendall Nelson
Hello!

Another meeting tonight late/tomorrow depending on where in the world you
live :) 0800 UTC Wednesday.

Here is the agenda if you have anything to add [1]. Or if you want to add
your name to the ping list it is there as well!

See you all soon!

-Kendall (diablo_rojo)

[1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Support share backup to different projects?

2018-03-20 Thread Jay S Bryant



On 3/19/2018 10:55 PM, TommyLike Hu wrote:
Now Cinder can transfer volumes (with or without snapshots) to 
different projects, and this makes it possible to transfer data across 
tenants via a volume or an image. Recently we had a conversation with a 
customer from Germany; they mentioned they would be more pleased if we 
could support transferring data across tenants via backups rather than 
images or volumes. Below are some of their points:


1. There is a use case where they would like to deploy their 
development/test/production systems in the same region but in different 
tenants, so they have a requirement to share/transfer data across 
tenants.


2. Users are more willing to use backups to secure/store their volume 
data, since the backup feature is more advanced in production OpenStack 
deployments (incremental backups/periodic backups/etc.).


3. Volume transfer is not a valid option, as it is limited to an AZ, 
and it is a complicated process if we would like to share the data with 
multiple projects (keeping a copy in all the tenants).


4. Most users would like to use images for bootable volumes only, and 
sharing volume data via images means users have to maintain lots of 
image copies whenever the backed-up volume changes; the whole system 
also needs to differentiate bootable and non-bootable images. Most 
importantly, we cannot restore volume data via an image today.


5. The easiest way to address this seems to be supporting sharing 
backups with different projects: the owner project has full authority, 
while shared projects can only view/read the backups.


6. AWS has a similar concept, the shared snapshot. A snapshot can be 
shared by modifying its create-volume permissions [1].
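For reference, the AWS operation behind [1] is ModifySnapshotAttribute.
Here is a hedged boto3-style sketch that only builds the request
parameters; the snapshot and account IDs are placeholders and no AWS
call is made:

```python
# Hedged sketch of AWS snapshot sharing via ModifySnapshotAttribute.
# IDs below are placeholders; we only construct the request parameters
# rather than calling AWS.

def build_share_snapshot_request(snapshot_id, account_ids):
    """Parameters for ec2_client.modify_snapshot_attribute(**params)."""
    return {
        'SnapshotId': snapshot_id,
        'Attribute': 'createVolumePermission',
        'OperationType': 'add',
        'UserIds': list(account_ids),
    }

params = build_share_snapshot_request('snap-0123456789abcdef0',
                                      ['111122223333'])
# With real credentials this would be sent as:
#   boto3.client('ec2').modify_snapshot_attribute(**params)
assert params['Attribute'] == 'createVolumePermission'
```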


Looking forward to any like, dislike, or suggestion on this idea, 
based on my feature-proposal experience :)



Thanks
TommyLike


[1]: 
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Tommy,

As discussed at the PTG, this still sounds like improper usage of 
Backup.  Happy to hear input from others, but I am having trouble getting 
my head around it.


The idea of sharing a snapshot, which you mention AWS supports, sounds 
like it could be a more sensible approach.  Why are you not proposing that?


Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Job failures on stable/pike and stable/ocata

2018-03-20 Thread Matt Riedemann

On 3/20/2018 1:45 PM, Sean McGinnis wrote:

All known patches are merged now and the last step of reverting the non-voting
state of the one job is just about to finish in the gate queue.

Stable branches should now be OK to recheck any failed jobs from the last
couple of days. If you see anything else crop up related to this, just let me
know.


Thanks for wrangling this one.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] The Weekly Owl - 13th Edition

2018-03-20 Thread Emilien Macchi
On Tue, Mar 20, 2018 at 9:01 AM, Emilien Macchi  wrote:

>
> +--> Matt is John and ruck is John. Please let them know any new CI issue.
>

so I double checked and Matt isn't John but in fact he's the rover ;-)
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Job failures on stable/pike and stable/ocata

2018-03-20 Thread Sean McGinnis
On Mon, Mar 19, 2018 at 04:02:50PM -0500, Sean McGinnis wrote:
> [snip]
> 
> We have a couple of issues causing failures with stable/pike and stable/ocata.
> Actually, it also affects stable/queens as well due to grenade jobs needing to
> run stable/pike first.
> 
> [snip]
>
> I think we have a full working plan in place. The oslo.util patches would fail
> just the legacy-tempest-dsvm-neutron-src job, so that has been marked as
> non-voting for now. Next, the oslo.util fixes need to merge and a new stable
> release done for them. Then, requirements updates to both stable branches can
> pass that raise the upper-constraints for ryu to 4.18 which includes the
> changes we need.
> 
> Once all that is done, we can merge the last patch that reverts the change
> making legacy-tempest-dsvm-neutron-src voting again.
> 
> The set up patches (other than the upcoming release requests) can be found
> under the pip/5081 topic:
> 
> https://review.openstack.org/#/q/topic:pip/5081+(status:open+OR+status:merged)
> 
> As far as I can tell, once all that is done, the stable branches should be
> unblocked and we should be back in business. If anything else crops up, I'll
> post updates here.
> 

All known patches are merged now and the last step of reverting the non-voting
state of the one job is just about to finish in the gate queue.

Stable branches should now be OK to recheck any failed jobs from the last
couple of days. If you see anything else crop up related to this, just let me
know.

Sean
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-03-20 Thread Monty Taylor

On 03/17/2018 03:34 AM, Emilien Macchi wrote:
That way, we'll be able to have some early testing on python3-only 
environments (thanks containers!) without changing the host OS.


All hail our new python3-only overlords!!!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-03-20 Thread Alfredo Moralejo Alonso
On Sat, Mar 17, 2018 at 9:34 AM, Emilien Macchi  wrote:

> During the PTG we had some nice conversations about how TripleO can make
> progress on testing OpenStack deployments with Python 3.
> In CC, Haikel, Alfredo and Javier, please complete if I missed something.
>
>
> ## Goal
>
> As an OpenStack distribution, RDO would like to ensure that the OpenStack
> services (which aren't depending on Python 2) are packaged and can be
> containerized to be tested in TripleO CI.
>
>

> ## Challenges
>
> - Some services aren't fully Python 3, but we agreed this was not our
> problem but the project's problems. However, as a distribution, we'll make
> sure to ship what we can on Python 3.
> - CentOS 7 is not the Python 3 distro and there are high expectations from
> the next release but we aren't there yet.
> - Fedora is Python 3 friendly but we don't deploy TripleO on Fedora, and
> we don't want to do it (for now at least).
>
To be clear, python3 packages will only be provided for Fedora in RDO
Trunk repos and, unless it's explicitly changed in the future, RDO's policy
is not to support deployments on Fedora using either python2 or python3. The
main goal of this effort is to make the transition to python3 smoother in
future CentOS releases, using Fedora as a testbed for it.

>
> ## Proposal
>
> - A fedora stabilized repository will be created by RDO to provide a
stable and working set of fedora packages to run RDO OpenStack services
using python3.

- Continue to follow upstream projects who support Python3 only and ship
> rpms in RDO.
> - Investigate the build of Kolla containers on Fedora / Python 3 and push
> them to a registry (maybe in the same namespace with different name or
> maybe a new namespace).
> - Kick-off some TripleO CI experimental job that will use these containers
> to deploy TripleO (maybe on one basic scenario for now).
>
>
> ## Roadmap for Rocky
>
> For Rocky we agreed to follow the 3 steps part of the proposal (maybe
> more, please add what I've missed).
>
The services enabled for python3 during Rocky will depend on the progress
of the different tasks, and I guess we will adapt the order of the services
depending on the technical issues we find.

> That way, we'll be able to have some early testing on python3-only
> environments (thanks containers!) without changing the host OS.
>
>
Just for awareness, we may hit issues running services closely coupled to
kernel modules, such as openvswitch.

>
> Thanks for your feedback and comments, it's an open discussion.
> --
> Emilien Macchi
>

[1]
https://mail.rdoproject.org/thread.html/f122ccd93daf5e4ca26b7db0e90e977fb0fbb253ad7293f81b13a132@%3Cdev.lists.rdoproject.org%3E
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-03-20 Thread Javier Pena
- Original Message -

> During the PTG we had some nice conversations about how TripleO can make
> progress on testing OpenStack deployments with Python 3.
> In CC, Haikel, Alfredo and Javier, please complete if I missed something.

> ## Goal

> As an OpenStack distribution, RDO would like to ensure that the OpenStack
> services (which aren't depending on Python 2) are packaged and can be
> containerized to be tested in TripleO CI.

> ## Challenges

> - Some services aren't fully Python 3, but we agreed this was not our problem
> but the project's problems. However, as a distribution, we'll make sure to
> ship what we can on Python 3.
> - CentOS 7 is not the Python 3 distro and there are high expectations from
> the next release but we aren't there yet.
> - Fedora is Python 3 friendly but we don't deploy TripleO on Fedora, and we
> don't want to do it (for now at least).

> ## Proposal

> - Continue to follow upstream projects who support Python3 only and ship rpms
> in RDO.
> - Investigate the build of Kolla containers on Fedora / Python 3 and push
> them to a registry (maybe in the same namespace with different name or maybe
> a new namespace).
> - Kick-off some TripleO CI experimental job that will use these containers to
> deploy TripleO (maybe on one basic scenario for now).

One point we should add here: to test Python 3 we need some base operating 
system to work on. For now, our plan is to create a set of stabilized Fedora 28 
repositories and use them only for CI jobs. See [1] for details on this plan. 

Regards, 
Javier 

[1] - 
https://etherpad.openstack.org/p/stabilized-fedora-repositories-for-openstack 

> ## Roadmap for Rocky

> For Rocky we agreed to follow the 3 steps part of the proposal (maybe more,
> please add what I've missed).
> That way, we'll be able to have some early testing on python3-only
> environments (thanks containers!) without changing the host OS.

> Thanks for your feedback and comments, it's an open discussion.
> --
> Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-ansible cli proposal

2018-03-20 Thread Christian Berendt
In my opinion, a separate repository would be most suitable for this. That 
way we do not mix the Ansible role with a frontend for Ansible itself, and we 
remain independent.

Importing the project into the OpenStack namespace is probably easier this way.

Christian.

> On 20. Mar 2018, at 18:02, Borne Mace  wrote:
> 
> Greetings all,
> 
> One of the discussions we had at the recent PTG was in regards to the
> blueprint to add support for a kolla-ansible cli [0].  I would like to
> propose that to satisfy this blueprint the thus far Oracle developed
> kollacli be completely upstreamed and made a community guided project.
> 
> The source for the project is available already [1] and it is known to
> work against the Queens codebase.  My suggestion would be that either a
> new repository be created for it or it be included in the
> kolla-ansible repository.  Either way my hope is that it be under
> kolla project control, as far as PTL guidance and core contributors.
> 
> The kollacli is documented here [2] for your review, and along with
> any discussion that folks want to have on the mailing list I will make
> sure to be around for the next couple of wednesday kolla meetings so
> that it can be discussed there as well.
> 
> Thanks much for taking the time to read this,
> 
> -- Borne Mace
> 
> [0]: https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli
> [1]: https://oss.oracle.com/git/gitweb.cgi?p=openstack-kollacli.git;a=summary
> [2]: https://docs.oracle.com/cd/E90981_01/E90982/html/kollacli.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 12

2018-03-20 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w12.


Bugs


One new bug from last week:

[Undecided] https://bugs.launchpad.net/nova/+bug/1756360 Serializer 
strips Exception kwargs
The bug refers to an oslo.serialization change as the reason for the 
changed behavior, but I failed to reproduce the expected behavior with 
an oslo.serialization version older than that one. Also there is a fix 
proposed that I have to look at: https://review.openstack.org/#/c/554607/



Versioned notification transformation
-
There are 3 patches that have positive feedback (but no +2 as I'm the 
author of those) and need core attention

https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open


Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
The implementation still needs work 
https://review.openstack.org/#/c/526251/



Add the user id and project id of the user who initiated the instance
action to the notification
-
The bp has been approved 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications 
but implementation hasn't been proposed yet.


Add request_id to the InstanceAction versioned notifications

https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
Kevin has a WIP patch up https://review.openstack.org/#/c/553288 . I 
promised to go through it soon.



Sending full traceback in versioned notifications
-
The specless bp has been approved 
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
was discussed and due to possible complications with looking up the 
server group when a server is deleted we would like to see some WIP 
implementation patch proposed before the bp is approved.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No progress.

Weekly meeting
--
The next meeting will be held on the 27th of March on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180327T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] kolla-ansible cli proposal

2018-03-20 Thread Borne Mace

Greetings all,

One of the discussions we had at the recent PTG was in regards to the
blueprint to add support for a kolla-ansible cli [0].  I would like to
propose that to satisfy this blueprint the thus far Oracle developed
kollacli be completely upstreamed and made a community guided project.

The source for the project is available already [1] and it is known to
work against the Queens codebase.  My suggestion would be that either a
new repository be created for it or it be included in the
kolla-ansible repository.  Either way my hope is that it be under
kolla project control, as far as PTL guidance and core contributors.

The kollacli is documented here [2] for your review, and along with
any discussion that folks want to have on the mailing list I will make
sure to be around for the next couple of wednesday kolla meetings so
that it can be discussed there as well.

Thanks much for taking the time to read this,

-- Borne Mace

[0]: https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli
[1]: 
https://oss.oracle.com/git/gitweb.cgi?p=openstack-kollacli.git;a=summary

[2]: https://docs.oracle.com/cd/E90981_01/E90982/html/kollacli.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][oslo] Move MultiConfigParser to networking-cisco

2018-03-20 Thread Ben Nemec

Hi,

I'm hoping anyone involved with networking-cisco is subscribed to the 
neutron tag.  If there's a better one to use please feel free to add it.


The purpose of this email is to discuss plans for removing 
MultiConfigParser from oslo.config.  It has been deprecated for a while 
and some upcoming work in the project has prompted us to want to remove 
it.  Currently the only project still using it is networking-cisco, and 
in the interest of simplicity we are proposing that MultiConfigParser 
just be moved to networking-cisco.  I've pushed a change to do that in 
https://review.openstack.org/554617


One concern is that I'm not sure if this functionality is tested in ci. 
I'm hoping someone from networking-cisco can comment on what needs to 
happen with that.


Anyway, I just wanted to send something out that explains what is going 
on with these changes.  Please respond with any comments or questions.


Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 13th Edition

2018-03-20 Thread Emilien Macchi
Note: this is the thirteenth edition of a weekly update of what happens in
TripleO.
The goal is to provide a short reading (less than 5 minutes) to learn where
we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128234.html

+-+
| General announcements |
+-+

+--> Bug backlog for Rocky is *huge*, please read Alex's email:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128557.html
+--> TripleO UI squad is about to experiment with Storyboard, and bugs might
be migrated from Launchpad during the following days (more info soon).

+--+
| Continuous Integration |
+--+

+--> Matt is John and ruck is John. Please let them know any new CI issue.
+--> Master promotion is 14 days, Queens is 14 days, Pike is 2 days and
Ocata is 0 days.
+--> Focus is on devmode replacement and promotion blockers.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and
https://goo.gl/D4WuBP

+-+
| Upgrades |
+-+

+--> Good progress on FFU and P2Q workflows, reviews are needed.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> No updates this week, efforts are on containerized undercloud.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> ceph-ansible support still in progress
+--> Working on validation to check for SoftwareConfig outputs
+--> Still looking at process to create a new git repo per role for
standalone ansible roles
+--> More:
https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> Team is working on config-download integration for ceph and
multi-cluster support.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Last week's meeting was about Threat analysis, Limit TripleO users,
Public TLS, and Secret identification for TripleO
+--> Tomorrow's meeting is about Mistral secret storage.
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact |
++

The eyes of an owl are not true “eyeballs.” Their tube-shaped eyes are
completely immobile, providing binocular vision which fully focuses on
their prey and boosts depth perception.
Source: http://www.audubon.org/news/11-fun-facts-about-owls

Stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Re: Mistral docker job

2018-03-20 Thread Vitalii Solodilov
Can we push a Mistral image to some registry? I think it is more convenient
than a zip for users. And we can add the image to the cache_from section:
https://docs.docker.com/compose/compose-file/#cache_from

About mysql, sorry, I don't have experience with it. I added the latest
version of mysql; as far as I know it's an RC. Maybe we change to 5.7?

20.03.2018, 17:28, "Kovi, Andras 1. (Nokia - HU/Budapest)":

Yeah, sorry. In the current code docker-compose is set up to build the
mistral image. It is not going to use anything that's uploaded, and I don't
know how one can instruct docker-compose to pass the --cache-from parameter
to the docker build if the uploaded image was imported. If someone decides
to try the docker build, they should already be prepared to build stuff
locally. Building the image does not take a long time (given v8eval is not
installed) and they can use the current code at any time. This way we can
prevent ourselves from comments regarding versions or anything. I just
don't find it particularly useful. If there is real demand, let's keep it.

TL;DR: On the other hand, I've just stumbled upon an issue that mysql does
not bind to the IPv4 address, only IPv6. This used to work and I don't know
what changed.

mysql_1 | 2018-03-20T14:00:05.068149Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
mysql_1 | 2018-03-20T14:00:05.068213Z 0 [Note] IPv6 is available.
mysql_1 | 2018-03-20T14:00:05.068233Z 0 [Note]   - '::' resolves to '::';
mysql_1 | 2018-03-20T14:00:05.068296Z 0 [Note] Server socket created on IP: '::'.

root@2e71797b8470 (this is mysql):/# ss -lnt
State  Recv-Q Send-Q    Local Address:Port  Peer Address:Port
LISTEN 0      128       127.0.0.11:40374    *:*
LISTEN 0      128       :::3306             :::*

Mistral is not able to connect as it resolves the V4 address.

Cheers,
A

From: Brad P. Crochet
Sent: Tuesday, 20 March 2018 12:34
To: Kovi, Andras 1. (Nokia - HU/Budapest)
Cc: Vitalii Solodilov; Akhmerov, Renat (Nokia - RU); dou...@redhat.com
Subject: Re: Mistral docker job

I'm curious why you keep wanting to just delete the job rather than fixing
it? It's not a gate job, and does not affect landing patches, but I do
believe there are users that find it useful.

On Tue, Mar 20, 2018 at 2:12 AM Kovi, Andras 1. (Nokia - HU/Budapest) wrote:

Hi Vitalii, thanks for the docker update. It's a really good improvement
from the previous version. But now the mistral docker job is failing:
http://zuul.openstack.org/builds.html?job_name=mistral-docker-buildimage
I would be for deleting this job completely, but could you please try to
fix it first in some way?

Thanks,
Andras

--
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385

-- 
Best regards, Vitalii Solodilov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Bug status

2018-03-20 Thread Alex Schultz
Hey everyone,

In today's IRC meeting, I brought up[0] that we've been having an
increase in the number of open bugs of the last few weeks. We're
currently at about 635 open bugs.  It would be beneficial for everyone
to take a look at the bugs that they are currently assigned to and
ensure they are up to date.

Additionally, there was chat about possibly introducing some process
around the Triaging of bugs such that we should be assigning squad
tags to all the bugs so that there's some potential ownership.  I'm
not sure what that would look like, so if others thinks this might be
a good idea, feel free to comment.

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-03-20-14.01.log.html#l-69

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Project for profiles and defaults for libvirt domains

2018-03-20 Thread Martin Kletzander

Hi everyone!

First of all, sorry for such wide distribution, but apparently that's the
best way to make sure we cooperate nicely.  So please be considerate, as
this is a cross-post between a huge number of mailing lists.

After some discussions with developers from different projects that work
with libvirt one cannot but notice some common patterns and workarounds.
So I set off to see how we can make all our lives better and our coding
more effective (and maybe more fun as well).  If all goes well we will
create a project that will accommodate most of the defaulting, policies,
workarounds and other common algorithms around libvirt domain
definitions.  And since early design gets you half way, I would like to
know your feedback on several key points as well as on the general idea.
Also correct me brutally in case I'm wrong.

In order to not get confused in the following descriptions, I will refer
to this project idea using the name `virtuned`, but there is really no
name for it yet (although an abbreviation for "Virtualization
Abstraction Definition and Hypervisor Delegation" would suit well,
IMHO).

Here are some common problems and use cases that virtuned could solve
(or help with).  Don't take it as something that's impossible to solve
on your own, but rather something that could be de-duplicated from
multiple projects or "done right" instead of various hack-ish solutions.

1) Default devices/values

Libvirt itself must default to whatever values there were before any
particular element was introduced due to the fact that it strives to
keep the guest ABI stable.  That means, for example, that it can't just
add -vmcoreinfo option (for KASLR support) or magically add the pvpanic
device to all QEMU machines, even though it would be useful, as that
would change the guest ABI.

For default values this is even more obvious.  Let's say someone figures
out some "pretty good" default values for various HyperV enlightenment
feature tunables.  Libvirt can't magically change them, but each one of
the projects building on top of it doesn't want to keep that list
updated and take care of setting them in every new XML.  Some projects
don't even expose those to the end user as a knob, while others might.

One more thing virtuned could do is automatically figure out the best
values based on libosinfo-provided data.
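To make the defaulting idea concrete, here is a minimal Python sketch of the behaviour described above: fill in tunables only where the caller left them unset, so the ABI-relevant choices stay with whoever wrote the XML. The element names follow libvirt's domain schema, but the chosen default values are purely hypothetical (they are exactly what nobody has settled on yet).

```python
import xml.etree.ElementTree as ET

# Hypothetical "pretty good" HyperV enlightenment defaults.
HYPERV_DEFAULTS = {"relaxed": "on", "vapic": "on"}

def apply_hyperv_defaults(domain_xml: str) -> str:
    """Fill in HyperV feature defaults without overriding caller choices."""
    root = ET.fromstring(domain_xml)
    features = root.find("features")
    if features is None:
        features = ET.SubElement(root, "features")
    hyperv = features.find("hyperv")
    if hyperv is None:
        hyperv = ET.SubElement(features, "hyperv")
    for name, state in HYPERV_DEFAULTS.items():
        if hyperv.find(name) is None:  # only default what the caller left unset
            ET.SubElement(hyperv, name, state=state)
    return ET.tostring(root, encoding="unicode")

# A caller-provided <relaxed/> survives; the missing <vapic/> gets defaulted.
xml = "<domain type='kvm'><features><hyperv><relaxed state='off'/></hyperv></features></domain>"
print(apply_hyperv_defaults(xml))
```

The point is that the library, not each consuming project, owns the list of defaults, and applying them is idempotent and non-destructive.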

2) Policies

A lot of the time there are parts of the domain definition that need to
be added, but nobody really cares about them.  Sometimes it's enough to
have a few templates; other times you might want to have a policy
per scenario and combine them in various ways, for example with the
data provided by point 1).

For example, if you want PCI-Express you need the q35 machine type, but
you don't really want to care about the machine type.  Or you want to
use SPICE, but you don't want to care about adding QXL.

What if some of these policies could be specified once (using some DSL
for example), and used by virtuned to merge them in a unified and
predictable way?
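What such merging could look like, sketched in Python with plain dictionaries standing in for the DSL (the policy contents are invented examples, not a proposed schema):

```python
from collections.abc import Mapping

def merge_policies(*policies):
    """Deep-merge policy dicts left to right; later policies win on conflict."""
    result = {}
    for policy in policies:
        for key, value in policy.items():
            if isinstance(value, Mapping) and isinstance(result.get(key), Mapping):
                result[key] = merge_policies(result[key], value)
            else:
                result[key] = value
    return result

# Invented per-scenario policies: wanting PCI-Express implies q35,
# wanting SPICE implies a QXL video device.
pcie_policy = {"os": {"machine": "q35"}}
spice_policy = {"graphics": {"type": "spice"}, "video": {"model": "qxl"}}

print(merge_policies(pcie_policy, spice_policy))
```

Because the merge order is explicit, the result stays predictable no matter how many per-scenario policies are stacked.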

3) Abstracting the XML

This is probably only usable for stateless apps, but it might happen
that some apps don't really want to care about the XML at all.  They
just want an abstract view of the domain, possibly add/remove a device
and that's it.  We could do that as well.  I can't really tell how much
of a demand there is for it, though.

4) Identifying devices properly

In contrast to the previous point, stateful apps might have a problem
identifying devices after hotplug.  For example, let's say you don't
care about the addresses and leave that up to libvirt.  You hotplug a
device into the domain and dump the new XML of it.  Depending on what
type of device it was, you might need to identify it based on different
values.  It could be the serial number for disks, the MAC address for
interfaces etc.  For some devices it might not even be possible and you
need to remember the addresses of all the previous devices and then
parse them just to identify that one device and then throw them away.
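As an illustration of identifying by such values, a small sketch that picks a hotplugged NIC out of a freshly dumped domain XML by its MAC address (the XML fragment below is made up):

```python
import xml.etree.ElementTree as ET

def find_interface(domain_xml: str, mac: str):
    """Return the <interface> element whose MAC matches, or None."""
    for iface in ET.fromstring(domain_xml).findall("devices/interface"):
        mac_el = iface.find("mac")
        if mac_el is not None and mac_el.get("address", "").lower() == mac.lower():
            return iface
    return None

dumped = """<domain type='kvm'><name>vm1</name><devices>
  <interface type='network'>
    <mac address='52:54:00:aa:bb:cc'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </interface>
</devices></domain>"""

iface = find_interface(dumped, "52:54:00:AA:BB:CC")
print(iface.find("address").get("bus"))  # -> 0x01
```

Every app that tracks hotplugged devices ends up writing some variant of this per device type, which is exactly the duplication a shared library could absorb.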

With a new enough libvirt you could use user aliases for that, but it
turns out they're not that easy to use properly anyway.  Also the
aliases won't help users identify the device inside the guest.


We really should've gone with a new attribute for the user alias
instead of reusing an existing one, given how many problems that is
causing.


5) Generating the right XML snippet for device hot-(un)plug

This is kind of related to some previous points.

When hot-plugging a device and creating an XML snippet for it, you want
to keep the defaults from point 1) and the policies from point 2) in
mind, or perhaps something related to the already existing domain that
you can describe systematically.  You also want to add something for
identification (see the previous point).
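A sketch of such a snippet builder: it stamps a user alias onto the device so the identification problem from the previous point goes away (libvirt requires the "ua-" prefix for user-defined aliases). The disk parameters here are invented.

```python
import xml.etree.ElementTree as ET

def disk_hotplug_xml(source_path: str, target_dev: str, alias: str) -> str:
    """Build an attach-device XML snippet carrying a user alias."""
    disk = ET.Element("disk", type="file", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="qcow2")
    ET.SubElement(disk, "source", file=source_path)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    # libvirt only accepts user-defined aliases that start with "ua-"
    ET.SubElement(disk, "alias", name="ua-" + alias)
    return ET.tostring(disk, encoding="unicode")

print(disk_hotplug_xml("/var/lib/libvirt/images/data.qcow2", "vdb", "mydata"))
```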

How easy hot-unplug is depends on how much information about the
device your application saves.  The less you save about the device (or
show to the user in a GUI, if applicable), the harder it might be to
generate an XML that libvirt will accept.  Again, 

Re: [openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core

2018-03-20 Thread Pierre Riteau
> On 20 Mar 2018, at 07:44, Masahito MUROI  wrote:
> 
> Hi Blazar folks,
> 
> I'd like to nominate Bertrand Souville to blazar core team. He has been 
> involved in the project since the Ocata release. He has worked on NFV 
> usecase, gap analysis and feedback in OPNFV and ETSI NFV as well as in Blazar 
> itself.  Additionally, he has reviewed not only Blazar repository but Blazar 
> related repository with nice long-term perspective.
> 
> I believe he would make the project much nicer.
> 
> best regards,
> Masahito

+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] Rocky PTG summary - nova/ironic

2018-03-20 Thread Jim Rollenhagen
Thanks for the writeup, Melanie :)

On Mon, Mar 19, 2018 at 8:31 PM, melanie witt  wrote:

>
>   * For the issue of nova-compute crashing on startup, we could add a
> try-except around the call site at startup and ignore a "NotReadyYet" or
> similar exception from the Ironic driver
>

This is here: https://review.openstack.org/#/c/545479/

Just doing a bit more testing and should have a new version up shortly.


>   * On Ironic API version negotiation, the ironicclient already has some
> version negotiation built-in, so there are some options. 1) update Ironic
> driver to handle return/error codes from ironicclient version-negotiated
> calls, 2) add per-call microversion support to ironicclient and use it in
> the Ironic driver, 3) convert all Ironic driver calls to use raw REST
> * Option 1) would be the most expedient, but it's up to the Ironic
> team how they will want to proceed. Option 3 is the desired ideal solution
> but will take a rewrite of the related Ironic driver unit tests as they
> currently all mock ironicclient
>

We discussed this further in IRC yesterday, and Julia is going to explore
option 2 for now.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron

2018-03-20 Thread Balázs Gibizer



On Fri, Mar 16, 2018 at 12:04 AM, Matt Riedemann  
wrote:

On 3/15/2018 3:30 PM, melanie witt wrote:
 * We don't need to block bandwidth-based scheduling support for 
doing port creation in conductor (it's not trivial), however, if 
nova creates a port on a network with a QoS policy, nova is going 
to have to munge the allocations and update placement (from 
nova-compute) ... so maybe we should block this on moving port 
creation to conductor after all


This is not the current direction in the spec. The spec is *large* 
and detailed, and this is one of the things being discussed in there. 
For the latest on all of it, gonna need to get caught up on the spec. 
But it won't be updated for a while because Brother Gib is on vacation.



In the current state of the spec I try to keep this case out of scope 
[1]. Having a QoS policy requires a special port or network, and nova 
server create with only a network_id is expected to work just for 
simple network and port setups. If the user wants a special port (like 
SRIOV) she has to pre-create that port in neutron anyhow.


Cheers,
gibi

[1] 
https://review.openstack.org/#/c/502306/18/specs/rocky/approved/bandwidth-resource-provider.rst@126



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Vancouver Forum session brainstorming

2018-03-20 Thread Thierry Carrez
Hi, governance / cross-community topics lovers,

Like other groups, the TC has started to brainstorm potential topics for
discussion at the Forum in Vancouver. The idea is to coordinate, merge
duplicate sessions, and find missing sessions before the submission site
formally opens. Please add your suggestions at:

https://etherpad.openstack.org/p/YVR-forum-TC-sessions

As a reminder, the Forum is the venue where it is the easiest to get
wide feedback from the OpenStack community as a whole. Ideal session
topics are those that would benefit a lot from that wide feedback.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 回复: [ceilometer] [gnocchi] keystone verification failed.

2018-03-20 Thread Julien Danjou
On Tue, Mar 20 2018, __ mango. wrote:

> I have configured the following
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=admin
> export OS_USERNAME=admin
> exp​ort OS_PASSWORD=admin
> export OS_AUTH_URL=http://controller:35357/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
>
> /etc/gnocchi/gnocchi.conf
> [DEFAULT]
> [api]
> auth_mode = keystone

You said in your mail that you were using basic as auth_mode and your
request here:

>>  # gnocchi status --debug
>> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
>> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
>> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
>> python-requests/2.18.1 CPython/2.7.12"
>> Starting new HTTP connection (1): localhost
>> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
>> RESP: [401] Content-Type: application/json Content-Length: 114 
>> WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: 
>> Keep-Alive 
>> RESP BODY: {"error": {"message": "The request you have made requires 
>> authentication.", "code": 401, "title": "Unauthorized"}}
>> The request you have made requires authentication. (HTTP 401)

Indicates that the client is using basic auth mode. As Gordon already
replied:

  
https://gnocchi.xyz/gnocchiclient/shell.html#openstack-keystone-authentication 
  you're missing OS_AUTH_TYPE
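For reference, a working keystone environment for gnocchiclient would look roughly like the following. The key line is the OS_AUTH_TYPE the reporter's environment lacks; the other values are copied from the original mail, so adjust to taste.

```shell
export OS_AUTH_TYPE=password   # without this, gnocchiclient falls back to basic auth
export OS_AUTH_URL=http://controller:35357/v3
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
```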

Sigh.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 回复: [ceilometer] [gnocchi] keystone verification failed.

2018-03-20 Thread __ mango.
hi,
I have configured the following
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
exp​ort OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

/etc/gnocchi/gnocchi.conf
[DEFAULT]
[api]
auth_mode = keystone
[archive_policy]
[cors]
[healthcheck]
[incoming]
[indexer]
url = mysql+pymysql://gnocchi:gnocchi@controller/gnocchi
[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = default
user_domain_name = default
project_name = service
username = gnocchi
password = gnocchi
interface = internalURL
region_name = RegionOne
[metricd]
[oslo_middleware]
[oslo_policy]
[statsd]
[storage]
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file

PS:
Following the standard documentation
(https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html#install-gnocchi)
I cannot install gnocchi; what should I do?
I have used the "admin" authentication, and the other components work 
normally except for gnocchi.




-- 原始邮件 --
发件人: "Julien Danjou";
发送时间: 2018年3月20日(星期二) 下午4:54
收件人: "__ mango."<935540...@qq.com>;
抄送: "openstack-dev"; 
主题: Re: [openstack-dev] [ceilometer] [gnocchi] keystone verification failed.



On Tue, Mar 20 2018, __ mango. wrote:

> hi,
>  I have a question about the validation of gnocchi keystone.
>  I run the following command, but it is not successful.(api.auth_mode :basic, 
> basic mode can be 
>  # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
> python-requests/2.18.1 CPython/2.7.12"
> Starting new HTTP connection (1): localhost
> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
> RESP: [401] Content-Type: application/json Content-Length: 114 
> WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: 
> Keep-Alive 
> RESP BODY: {"error": {"message": "The request you have made requires 
> authentication.", "code": 401, "title": "Unauthorized"}}
> The request you have made requires authentication. (HTTP 401)

You need to be authed as "admin" to get the status.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [all][api] POST /api-sig/news

2018-03-20 Thread Gilles Dubreuil



On 20/03/18 08:26, Michael McCune wrote:



On Fri, Mar 16, 2018 at 4:55 AM, Chris Dent > wrote:




So to summarize and clarify, we are talking about SDKs being able
to build their interfaces to OpenStack APIs in an automated way
but statically, from an API schema generated by every project.
Such an API schema is already built in memory during API
reference documentation generation and could be saved in JSON
format (for instance) (see [5]).


What do you see as the current roadblocks preventing this work from
continuing to make progress?



Gilles, i'm very curious about how we can help as well.

i am keenly interested in the api-schema work that is happening and i 
am coming up to speed with the work that Graham has done, and which 
previously existed, on os-api-ref. although i don't have a *ton* of 
spare free time, i would like to help as much as i can.


Hi Michael,

Thank you very much for jumping in. Your interest shows the demand for 
such a feature, which is what we need most at the moment. The more 
people, the better the momentum and the likelihood of getting more 
help. Let's blow the horn!


As you probably already know, the real work is Graham's PR [1], where the 
magic is going to happen and where you can help.
Graham, who has been involved and working with the Sphinx library, offered 
to 'dump' the API schema which is already in memory (what I call the 
de-facto API schema) and which is needed to generate the API Reference 
guides. So instead of asking developers of each project to change their 
habits and write an API schema up front, it seemed easier to just use 
the current documentation (API Ref) workflow and generate the API 
schema, which can be stored in every project's Git.
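The "generate and store in Git" step could be as small as the following sketch. The schema dict shape here is invented; the real one would be whatever the Sphinx/os-api-ref build holds in memory.

```python
import json

# Invented stand-in for the schema held in memory during the docs build.
api_schema = {
    "paths": {
        "/servers": {
            "get": {"parameters": [{"name": "limit", "in": "query"}]},
        },
    },
}

# Serialize deterministically so the committed file only changes
# when the API itself does.
with open("api-schema.json", "w") as f:
    json.dump(api_schema, f, indent=2, sort_keys=True)
```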


[1] https://review.openstack.org/#/c/528801

Cheers,
Gilles




thanks for bringing this up again,

peace o/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][vpnaas]

2018-03-20 Thread vidyadhar reddy
Hello,

I have a general question regarding the working of vpnaas.

Can we set up multiple VPN connections on a single router? My scenario:
say we have two networks, net1 and net2, in two different sites. Each
network has two subnets, and each site has one router with three
interfaces: one for the public network and the remaining two for the
two subnets. Can we set up two vpnaas connections on the routers in
each site to enable communication between the corresponding subnets?

I have tried this setup and it didn't work for me. I just wanted to know
whether this is a design constraint, and whether there is any ongoing
development on this issue or it has already been solved.


BR,
Vidyadhar reddy peddireddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ceilometer] [gnocchi] keystone verification failed.

2018-03-20 Thread Julien Danjou
On Tue, Mar 20 2018, __ mango. wrote:

> hi,
>  I have a question about the validation of gnocchi keystone.
>  I run the following command, but it is not successful.(api.auth_mode :basic, 
> basic mode can be 
>  # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
> python-requests/2.18.1 CPython/2.7.12"
> Starting new HTTP connection (1): localhost
> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
> RESP: [401] Content-Type: application/json Content-Length: 114 
> WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: 
> Keep-Alive 
> RESP BODY: {"error": {"message": "The request you have made requires 
> authentication.", "code": 401, "title": "Unauthorized"}}
> The request you have made requires authentication. (HTTP 401)

You need to be authed as "admin" to get the status.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-20 Thread Akihiro Motoki
Hi Kaz and Ivan,

Yeah, it is worth discussing officially in the horizon team meeting or a
mailing list thread to get a consensus.
Hopefully you can add this topic to the horizon meeting agenda.

After sending the previous mail, I noticed another option. I see there are
several options now.
(1) Keep xstatic-core and horizon-core same.
(2) Add specific members to xstatic-core
(3) Add specific horizon-plugin core to xstatic-core
(4) Split core membership into per-repo basis (perhaps too complicated!!)

My current vote is (2), as xstatic-core needs to understand what xstatic
is and how it is maintained.

Thanks,
Akihiro


2018-03-20 17:17 GMT+09:00 Kaz Shinohara :

> Hi Akihiro,
>
>
> Thanks for your comment.
> The background of my request to add us to xstatic-core comes from
> Ivan's comment in last PTG's etherpad for heat-dashboard discussion.
>
> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
> Line135, "we can share ownership if needed - e0ne"
>
> Just in case, could you guys confirm unified opinion on this matter as
> Horizon team ?
>
> Frankly speaking I'm feeling the benefit to make us xstatic-core
> because it's easier & smoother to manage what we are taking for
> heat-dashboard.
> On the other hand, I can understand what Akihiro you are saying, the
> newly added repos belong to Horizon project & being managed by not
> Horizon core is not consistent.
> Also having exception might make unexpected confusion in near future.
>
> Eventually we will follow your opinion, let me hear Horizon team's
> conclusion.
>
> Regards,
> Kaz
>
>
> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
> > Hi Kaz,
> >
> > These repositories are under horizon project. It looks better to keep the
> > current core team.
> > It potentially brings some confusion if we treat some horizon plugin team
> > specially.
> > Reviewing xstatic repos would be a small burden, so I think it would work
> > without problem even if only horizon-core can approve xstatic reviews.
> >
> >
> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
> >>
> >> Hi Ivan, Horizon folks,
> >>
> >>
> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed.
> >>
> >> In project-config for them, I've set same acl-config as the existing
> >> xstatic repos.
> >> It means only "xstatic-core" can manage the newly created repos on
> gerrit.
> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as
> >> what horizon-core is doing ?
> >>
> >> xstatic-core
> >> https://review.openstack.org/#/admin/groups/385,members
> >>
> >> heat-dashboard-core
> >> https://review.openstack.org/#/admin/groups/1844,members
> >>
> >> Of course, we will surely touch only what we made, just would like to
> >> manage them smoothly by ourselves.
> >> In case we need to touch the other ones, will ask Horizon team for help.
> >>
> >> Thanks in advance.
> >>
> >> Regards,
> >> Kaz
> >>
> >>
> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
> >> > Hi Horizon Team,
> >> >
> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option,
> >> >  and submitted a patch for it.
> >> > Could you please help to review the patch.
> >> >
> >> > https://bugs.launchpad.net/horizon/+bug/1755339
> >> > https://review.openstack.org/#/c/552259/
> >> >
> >> > Thank you very much.
> >> >
> >> > Best Regards,
> >> > Xinni
> >> >
> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny 
> >> > wrote:
> >> >>
> >> >> Hi Kaz,
> >> >>
> >> >> Thanks for cleaning this up. I put +1 on both of these patches
> >> >>
> >> >> Regards,
> >> >> Ivan Kolodyazhny,
> >> >> http://blog.e0ne.info/
> >> >>
> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara  >
> >> >> wrote:
> >> >>>
> >> >>> Hi Ivan & Horizon folks,
> >> >>>
> >> >>>
> >> >>> Now we are submitting a couple of patches to have the new xstatic
> >> >>> modules.
> >> >>> Let me request you to have review the following patches.
> >> >>> We need Horizon PTL's +1 to move these forward.
> >> >>>
> >> >>> project-config
> >> >>> https://review.openstack.org/#/c/551978/
> >> >>>
> >> >>> governance
> >> >>> https://review.openstack.org/#/c/551980/
> >> >>>
> >> >>> Thanks in advance:)
> >> >>>
> >> >>> Regards,
> >> >>> Kaz
> >> >>>
> >> >>>
> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski
> >> >>> :
> >> >>> > Yes, please do that. We can then discuss in the review about
> >> >>> > technical
> >> >>> > details.
> >> >>> >
> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge  >
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> Hi, Akihiro
> >> >>> >>
> >> >>> >> Thanks for the quick reply.
> >> >>> >>
> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be
> >> >>> >> modified.
> >> >>> >> It is much better to enhance horizon plugin settings,
> >> >>> >>  and I think maybe there could be one option like
> >> >>> >> ADD_XSTATIC_MODULES.

Re: [openstack-dev] [all][api] POST /api-sig/news

2018-03-20 Thread Gilles Dubreuil



On 16/03/18 19:55, Chris Dent wrote:


Meta: When responding to lists, please do not cc individuals, just
repond to the list. Thanks, response within.



+1


On Fri, 16 Mar 2018, Gilles Dubreuil wrote:

In order to continue and progress on the API Schema guideline [1] as 
mentioned in [2] to make APIs more machine-discoverable and also 
discussed during [3].


Unfortunately, until a new or a second meeting time slot has 
been allocated, inconveniently for everyone, this has to be done by email.


I'm sorry that the meeting time is excluding you and others, but our
efforts to have either a second meeting or to change the time have
met with limited response (except from you).

In any case, the meeting are designed to be checkpoints where we
resolve stuck questions and checkpoint where we are on things. It is
better that most of the work be done in emails and on reviews as
that's the most inclusive, and is less dependent on time-related
variables.


I agree that in general most of our work can be done "off-line"; still, 
there are times where interaction is preferable, especially in early 
phases of conception, in order to provide appropriate momentum.




So moving the discussion about schemas here is the right thing and
the fact that it hasn't happened (until now) is the reason for what
appears to be a rather lukewarm reception from the people writing
the API-SIG newsletter: if there's no traffic on either the gerrit
review or here in email then there's no evidence of demand. You're
asserting here that there is; that's great.


Yes, and some of those believers are going to either jump on this thread 
or add comments to the related reviews in order to confirm this.
Of course one cannot expect them to be active participants, as I'm 
delegated to be the interface for this feature.




Of course new features have to be decided (voted) by the community 
but how does that work when there are not enough people voting in?
It seems unfair to decide not to move forward and ignore the request 
because the others people interested are not participating at this 
level.


In a world of limited resources we can't impose work on people. The
SIG is designed to be a place where people can come to make progress
on API-related issues. If people don't show up, progress can't be
made. Showing up doesn't have to mean show up at an IRC meeting. In
fact I very much hope that it never means that. Instead it means
writing things (like your email message) and seeking out
collaborators to push your idea(s) forward.


This comforts me about more automation to help ;)


It's very important to consider the fact that "I" am representing more 
than just myself: an OpenStack integration team, whose members are 
supporting me, and our work impacts other teams involved in their 
open source products consuming OpenStack. I'm sorry if I haven't made 
this clearer from the beginning; I guess I'm still learning the 
participation process. So from now on, I'm going to use "us" instead.


Can some of those "us" show up on the mailing list, the gerrit
reviews, and prototype work that Graham has done?


Yes absolutely, as I just mentioned above.



Also from discussions with other developers from AT&T (OpenStack 
summit in Sydney) and SAP (Misty project) who are already using 
automation to consume APIs, this is really needed.


Them too.


For the first ones, I've tried without success (Twitter); unfortunately 
I don't have their email addresses, so let me ask the OpenStack 
organizers if they can pass it along...

I'll poke the second ones.



I've also mentioned the now well-known fact that no SDK has full-time 
resources to maintain it (which was the initial trigger for us); more 
automation is the only sustainable way to continue the journey.


Finally, how can we dare say no to more automation? Unless, of course, 
only artisan work done by real hipsters is allowed ;)


Nobody is saying no to automation (as far as I'm aware). Some people
(e.g., me, but not just me) are saying "unless there's an active
community to do this work and actively publish about it and the
related use cases that drive it it's impossible to make it a
priority". Some other people (also me, but not just me) are also
saying "schematizing API client generation is not my favorite thing"
but that's just a personal opinion and essentially meaningless
because yet other people are saying "I love API schema!".

What's missing, though, is continuous enagement on producing
children of that love.


Well I believe, maybe because I kind of belong to the second group, that 
the whole API definition is upside-down.
If we had had an API schema from day one we would have more children of 
love and many, many more grandchildren of OpenStack users.





Furthermore, API-Schema will be problematic for services that use 
microversions. If you have some insight or opinions on this, please 
add your comments to that review.


I understand microversion standardization (OpenAPI) has not happened 
yet or if it ever does but that 

Re: [openstack-dev] [kolla][vote] core nomination for caoyuan

2018-03-20 Thread Surya Singh
+1


>
>>
>> *From:* Jeffrey Zhang [mailto:zhang.lei@gmail.com]
>> *Sent:* Monday, March 12, 2018 9:07 AM
>> *To:* OpenStack Development Mailing List > .org>
>> *Subject:* [openstack-dev] [kolla][vote] core nomination for caoyuan
>>
>>
>>
>> ​​Kolla core reviewer team,
>>
>>
>>
>> It is my pleasure to nominate caoyuan for kolla core team.
>>
>>
>>
>> caoyuan's output is fantastic over the last cycle. And he is the most
>>
>> active non-core contributor on Kolla project for last 180 days[1]. He
>>
>> focuses on configuration optimize and improve the pre-checks feature.
>>
>>
>>
>> Consider this nomination a +1 vote from me.
>>
>>
>>
>> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1
>>
>> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous
>>
>> response is reached or a veto vote occurs.
>>
>>
>>
>> [1] http://stackalytics.com/report/contribution/kolla-group/180
>>
>> --
>>
>> Regards,
>>
>> Jeffrey Zhang
>>
>> Blog: http://xcodest.me
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-20 Thread Kaz Shinohara
Hi Akihiro,


Thanks for your comment.
The background of my request to add us to xstatic-core comes from
Ivan's comment in last PTG's etherpad for heat-dashboard discussion.

https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
Line135, "we can share ownership if needed - e0ne"

Just in case, could you guys confirm unified opinion on this matter as
Horizon team ?

Frankly speaking, I see the benefit of making us xstatic-core
because it's easier and smoother to manage what we are responsible for
in heat-dashboard.
On the other hand, I can understand what you are saying, Akihiro: the
newly added repos belong to the Horizon project, and having them managed
by non-Horizon cores is not consistent.
Also, having an exception might cause unexpected confusion in the near future.

Eventually we will follow your opinion; let me hear the Horizon team's conclusion.

Regards,
Kaz


2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
> Hi Kaz,
>
> These repositories are under horizon project. It looks better to keep the
> current core team.
> It potentially brings some confusion if we treat some horizon plugin team
> specially.
> Reviewing xstatic repos would be a small burden, so I think it would work
> without problem even if only horizon-core can approve xstatic reviews.
>
>
> 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>>
>> Hi Ivan, Horizon folks,
>>
>>
>> Now totally 8 xstatic-** repos for heat-dashboard have been landed.
>>
>> In project-config for them, I've set same acl-config as the existing
>> xstatic repos.
>> It means only "xstatic-core" can manage the newly created repos on gerrit.
>> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as
>> what horizon-core is doing ?
>>
>> xstatic-core
>> https://review.openstack.org/#/admin/groups/385,members
>>
>> heat-dashboard-core
>> https://review.openstack.org/#/admin/groups/1844,members
>>
>> Of course, we will surely touch only what we made, just would like to
>> manage them smoothly by ourselves.
>> In case we need to touch the other ones, will ask Horizon team for help.
>>
>> Thanks in advance.
>>
>> Regards,
>> Kaz
>>
>>
>> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
>> > Hi Horizon Team,
>> >
>> > I reported a bug about the lack of an ``ADD_XSTATIC_MODULES`` plugin option,
>> > and submitted a patch for it.
>> > Could you please help review the patch?
>> >
>> > https://bugs.launchpad.net/horizon/+bug/1755339
>> > https://review.openstack.org/#/c/552259/
>> >
>> > Thank you very much.
>> >
>> > Best Regards,
>> > Xinni
>> >
>> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny 
>> > wrote:
>> >>
>> >> Hi Kaz,
>> >>
>> >> Thanks for cleaning this up. I put +1 on both of these patches
>> >>
>> >> Regards,
>> >> Ivan Kolodyazhny,
>> >> http://blog.e0ne.info/
>> >>
>> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara 
>> >> wrote:
>> >>>
>> >>> Hi Ivan & Horizon folks,
>> >>>
>> >>>
>> >>> Now we are submitting a couple of patches to add the new xstatic
>> >>> modules.
>> >>> Let me ask you to review the following patches.
>> >>> We need the Horizon PTL's +1 to move these forward.
>> >>>
>> >>> project-config
>> >>> https://review.openstack.org/#/c/551978/
>> >>>
>> >>> governance
>> >>> https://review.openstack.org/#/c/551980/
>> >>>
>> >>> Thanks in advance:)
>> >>>
>> >>> Regards,
>> >>> Kaz
>> >>>
>> >>>
>> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski
>> >>> :
>> >>> > Yes, please do that. We can then discuss in the review about
>> >>> > technical
>> >>> > details.
>> >>> >
>> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge 
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi, Akihiro
>> >>> >>
>> >>> >> Thanks for the quick reply.
>> >>> >>
>> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be
>> >>> >> modified.
>> >>> >> It is much better to enhance the horizon plugin settings, and I
>> >>> >> think there could be an option like ADD_XSTATIC_MODULES.
>> >>> >> This option would add the plugin's xstatic files to STATICFILES_DIRS.
>> >>> >> I am considering filing a bug report to describe it first, and
>> >>> >> submitting a patch later.
>> >>> >> Is that ok with the Horizon team?
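[Editor's note: for illustration only. The following is a minimal sketch of how an ADD_XSTATIC_MODULES-style option could be expanded into Django STATICFILES_DIRS entries. The function name, the 'horizon/lib/' prefix, and the stub package are hypothetical, not the actual Horizon API; the only assumption taken from xstatic itself is that each package exposes NAME and BASE_DIR at module level.]

```python
import types

def xstatic_staticfiles_dirs(modules):
    """Expand a list of loaded xstatic packages into (prefix, path)
    entries suitable for Django's STATICFILES_DIRS, mirroring what
    Horizon already does for its BASE_XSTATIC_MODULES list."""
    return [('horizon/lib/' + mod.NAME, mod.BASE_DIR) for mod in modules]

# Stub standing in for a real xstatic.pkg.* module (real xstatic
# packages expose NAME and BASE_DIR as module-level attributes).
fake_pkg = types.SimpleNamespace(
    NAME='angular_material',
    BASE_DIR='/usr/lib/python3/xstatic/pkg/angular_material/data',
)

print(xstatic_staticfiles_dirs([fake_pkg]))
# [('horizon/lib/angular_material',
#   '/usr/lib/python3/xstatic/pkg/angular_material/data')]
```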
>> >>> >>
>> >>> >> Best Regards.
>> >>> >> Xinni
>> >>> >>
>> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki 
>> >>> >> wrote:
>> >>> >>>
>> >>> >>> Hi Xinni,
>> >>> >>>
>> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge :
>> >>> >>> > Hello Horizon Team,
>> >>> >>> >
>> >>> >>> > I would like to hear about your opinions about how to add new
>> >>> >>> > xstatic
>> >>> >>> > modules to horizon settings.
>> >>> >>> >
>> >>> >>> > As for the Heat-dashboard project's embedded 3rd-party files
>> >>> >>> > issue, thanks for your advice at the Dublin PTG; we are now
>> >>> >>> > removing them and referencing them as new
>> >>> >>> > 

[openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core

2018-03-20 Thread Masahito MUROI

Hi Blazar folks,

I'd like to nominate Bertrand Souville to the Blazar core team. He has been 
involved in the project since the Ocata release. He has worked on the NFV 
use case, gap analysis, and feedback in OPNFV and ETSI NFV as well as in 
Blazar itself.  Additionally, he has reviewed not only the Blazar repository 
but also Blazar-related repositories with a good long-term perspective.


I believe he would make the project much stronger.

best regards,
Masahito


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-Dev] [Neutron] [DragonFlow] Automatic Neighbour Discovery responder for IPv6

2018-03-20 Thread N Vivekanandan
Hi DragonFlow Team,

We noticed that you are adding support for an automatic responder for neighbor 
solicitation via OpenFlow rules here:
https://review.openstack.org/#/c/412208/

Can you please let us know which OVS release you are using to test this 
feature?

We are pursuing an automatic NS responder in the OpenDaylight controller 
implementation, and we noticed that there are no NXM extensions to manage the 
'R' bit and 'S' bit correctly.

From the RFC: https://tools.ietf.org/html/rfc4861


  R  Router flag.  When set, the R-bit indicates that
     the sender is a router.  The R-bit is used by
     Neighbor Unreachability Detection to detect a
     router that changes to a host.

  S  Solicited flag.  When set, the S-bit indicates that
     the advertisement was sent in response to a
     Neighbor Solicitation from the Destination address.
     The S-bit is used as a reachability confirmation
     for Neighbor Unreachability Detection.  It MUST NOT
     be set in multicast advertisements or in
     unsolicited unicast advertisements.

We noticed that dragonflow programs this rule for automatic NS response 
generation:
icmp6,ipv6_dst=1::1,icmp_type=135 
actions=load:0x88->NXM_NX_ICMPV6_TYPE[],move:NXM_NX_IPV6_SRC[]->NXM_NX_IPV6_DST[],mod_dl_src:00:11:22:33:44:55,load:0->NXM_NX_ND_SLL[],IN_PORT
(the line above is from the spec:
https://docs.openstack.org/dragonflow/latest/specs/ipv6.html)

However, from this flow rule we could not see that the R and S bits of the 
NS response are being managed.

Can you please clarify: do you not intend to use the 'R' and 'S' bits at all 
in the dragonflow implementation?
Or do you intend to use them, but could not get NXM extensions for them in 
OVS, and so wanted to start without managing those bits (as per the RFC)?

Thanks in advance for your help.

--
Thanks,

Vivek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev