Re: [openstack-dev] [Performance][Meeting time] Possible meeting time change: feedback appreciated

2016-08-29 Thread Dina Belova
OK, folks, a new time has been chosen: 15:30 UTC (30 minutes earlier than usual;
it looks like most people are OK with it).

Starting tomorrow (Tuesday, Aug 30th) we'll be meeting at this time :)

Cheers,
Dina

On Mon, Aug 22, 2016 at 12:09 PM, Dina Belova  wrote:

> Ok, let's keep the current (16:00 UTC) time slot for tomorrow's meeting until
> we find a better option.
>
> Cheers,
> Dina
>
> On Mon, Aug 22, 2016 at 9:05 AM, Augie Mena III  wrote:
>
>> Hi Dina/Adrien,
>>
>> Not a great slot here in Austin (that would be noon).  Would prefer a
>> 30-minute slot at 15:30 UTC or 18:00 UTC, but I realize it's hard to find
>> one to accommodate everyone.
>>
>> Thanks.
>>
>> Regards,
>> Augie
>>
>>
>>
>>
>>
>> From: Dina Belova
>> To: lebre.adr...@free.fr, Joshua Harlow,
>> Augie Mena III/Austin/IBM@IBMUS
>> Cc: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>,
>> openstack-operators <openstack-operat...@lists.openstack.org>
>> Date: 08/22/2016 10:51 AM
>> Subject: Re: [Performance][Meeting time] Possible meeting time
>> change: feedback appreciated
>> --
>>
>>
>>
>> Adrien,
>>
>> this looks good to me. Let's see what Joshua and Augie think about
>> it.
>>
>> Cheers,
>> Dina
>>
>> On Mon, Aug 22, 2016 at 7:44 AM, lebre.adr...@free.fr wrote:
>> Hi Dina,
>>
>> 17:30 UTC means 19:30 in France (summer time).
>> Considering that the meeting lasts 30-45 min on average, it would be
>> great if we could find a trade-off a bit earlier. What about 17:00 UTC?
>>
>> Thanks,
>> Adrien
>>
>> - Original Message -
>> > From: "Dina Belova" <dbel...@mirantis.com>
>> > To: "OpenStack Development Mailing List" <openstack-dev@lists.openstack.org>,
>> > "openstack-operators" <openstack-operat...@lists.openstack.org>
>> > Sent: Thursday, 18 August 2016 21:35:48
>> > Subject: [Performance][Meeting time] Possible meeting time change:
>> > feedback appreciated
>> >
>> >
>> > Hey, OpenStackers!
>> >
>> >
>> > recently I've received lots of comments about the current time our Performance
>> > Team meetings are held at. Right now we're having them at 16:00 UTC on
>> > Tuesdays (9:00 PST) in the #openstack-performance IRC channel, and this
>> > time slot is not that comfortable for some of the US folks due
>> > to internal daily meetings.
>> >
>> >
>> > The question is: can we move our weekly meeting to 17:30 UTC (10:30
>> > PST)? It's a bit late for folks in Moscow (20:30), so I'd like to
>> > collect more feedback.
>> >
>> >
>> > Please leave your comments.
>> >
>> >
>> > Cheers,
>> > Dina
>> >
>> >
>> > --
>> > Dina Belova
>> > Senior Software Engineer
>> > Mirantis, Inc.
>> > 525 Almanor Avenue, 4th Floor
>> > Sunnyvale, CA 94085
>> > Phone: 650-772-8418
>> > Email: dbel...@mirantis.com
>> > www.mirantis.com
>> >
>>
>>
>>
>> --
>> Dina Belova
>> Senior Software Engineer
>> Mirantis, Inc.
>> 525 Almanor Avenue, 4th Floor
>> Sunnyvale, CA 94085
>>
>> Phone: 650-772-8418
>> Email: dbel...@mirantis.com
>> www.mirantis.com
>> 
>>
>>
>>
>
>
> --
> Dina Belova
> Senior Software Engineer
> Mirantis, Inc.
> 525 Almanor Avenue, 4th Floor
> Sunnyvale, CA 94085
>
> Phone: 650-772-8418
> Email: dbel...@mirantis.com
> www.mirantis.com
> 
>



-- 
Dina Belova
Senior Software Engineer
Mirantis, Inc.
525 Almanor Avenue, 4th Floor
Sunnyvale, CA 94085

Phone: 650-772-8418
Email: dbel...@mirantis.com
www.mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE request for Manila CephFS Native backend integration

2016-08-29 Thread Erno Kuvaja
On Fri, Aug 19, 2016 at 9:53 AM, Erno Kuvaja  wrote:
> Hi all,
>
> I'm still working on getting all the pieces together for the Manila CephFS
> driver integration. Realizing that we have about a week of busy
> gating left until FF and the changes are not reviewed yet, I'd like to
> ask the community to consider the feature for a Feature Freeze Exception.
>
> I'm confident that I will get all the bits together over the next week or
> so, but I'm far from confident that we will have them merged in time.
> I would still like to see this feature make it into Newton.
>
> Best,
> Erno (jokke) Kuvaja

The last commit for this feature is in review [0], pending the
decision on how and whether we split these backends in THT.

[0] https://review.openstack.org/#/c/358525/

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack/Designate] [DNAAS] Help required in getting the sample pools.yaml for infoblox

2016-08-29 Thread S, Selvakumar
Hi All,
Currently I am trying to configure Infoblox as the backend DNS server in
Designate. I looked at the upstream documentation and it is quite
outdated.
Could you please give me a sample pools.yaml for the Infoblox server
configuration so that I can update the pool and continue testing?
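
For reference, a rough sketch of the kind of pools.yaml target I have in mind
is below; the option keys under "options" (wapi_url, username, password,
ns_group) are only my guesses and are exactly what I would like confirmed
against the Infoblox backend driver:

- name: default
  description: Default pool backed by Infoblox
  attributes: {}
  ns_records:
    - hostname: ns1.example.org.
      priority: 1
  nameservers:
    - host: 192.0.2.10
      port: 53
  targets:
    - type: infoblox
      description: Infoblox grid
      masters:
        - host: 192.0.2.1
          port: 5354
      options:
        wapi_url: https://infoblox.example.org/wapi/v2.0/
        username: admin
        password: secret
        ns_group: default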

Thanks in advance
Thanks
Selvakumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread joehuang
Hello, Jay,

> The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases 

Do you mean Mobile Edge Computing by Mobile "Edge cloud"? If so, it is a cloud.
The introduction slides [1] can help you learn the use cases quickly, and there
is lots of material on the ETSI website [2].

[1] 
http://www.etsi.org/images/files/technologies/MEC_Introduction_slides__SDN_World_Congress_15-10-14.pdf
[2] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing

And when we talk about a massively distributed cloud, vCPE is only one of the
scenarios (the one currently under debate), but we can't forget that there are
other scenarios like vCDN, vEPC, vIMS, MEC, IoT, etc. Architecture-level
discussion is still necessary to see whether the current design and new
proposals can fulfill the demands. If there are lots of proposals, it's good to
compare the pros and cons, and to see in which scenarios a proposal works and
in which it doesn't work very well.

(Hope this reply lands in the thread :) )

Best Regards
Chaoyi Huang(joehuang)

From: Jay Pipes [jaypi...@gmail.com]
Sent: 29 August 2016 18:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:
> The challenge in OpenStack is how to enable the innovation built on top of 
> OpenStack.

No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the
innovation of a jet engine.

> So telco use cases is not only the innovation built on top of OpenStack. 
> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
> OpenStack itself. If OpenStack don't address those basic requirements,

That's the thing, Bin, those are *not* "basic" requirements. The Telco
vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
for fundamental architectural and design changes to the foundational
components of OpenStack. Instead of Nova being designed to manage a
bunch of hardware in a relatively close location (i.e. a datacenter or
multiple datacenters), vCPE is asking for Nova to transform itself into
a micro-agent that can be run on an Apple Watch and do things in
resource-constrained environments that it was never built to do.

And, honestly, I have no idea what Gluon is trying to do. Ian sent me
some information a while ago on it. I read it. I still have no idea what
Gluon is trying to accomplish other than essentially bypassing Neutron
entirely. That's not "innovation". That's subterfuge.

> the innovation will never happen on top of OpenStack.

Sure it will. AT&T and BT and other Telcos just need to write their own
software that runs their proprietary vCPE software distribution
mechanism, that's all. The OpenStack community shouldn't be relied upon
to create software that isn't applicable to general cloud computing and
cloud management platforms.

> An example is - self-driving car is built on top of many technologies, such 
> as sensor/camera, AI, maps, middleware etc. All innovations in each 
> technology (sensor/camera, AI, map, etc.) bring together the innovation of 
> self-driving car.

Yes, indeed, but the people who created the self-driving car software
didn't ask the people who created the cameras to write the software for
them that does the self-driving.

> WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on 
> top of OpenStack.

You are defining "innovation" in an odd way, IMHO. "Innovation" for the
vCPE use case sounds a whole lot like "rearchitect your entire software
stack so that we don't have to write much code that runs on set-top boxes."

Just being honest,
-jay

> Thanks
> Bin
> -Original Message-
> From: Edward Leafe [mailto:e...@leafe.com]
> Sent: Saturday, August 27, 2016 10:49 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> On Aug 27, 2016, at 12:18 PM, HU, BIN  wrote:
>
>>> From telco perspective, those are the areas that allow innovation, and 
>>> provide telco customers with new types of services.
>>
>> We need innovation, starting from not limiting ourselves from bringing new 
>> idea and new use cases, and bringing those impossibility to reality.
>
> There is innovation in OpenStack, and there is innovation in things built on 
> top of OpenStack. We are simply trying to keep the two layers from getting 
> confused.
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread joehuang
Hello, Jay,

Sorry, I don't know why my mail agent (Microsoft Outlook Web App) did not carry
the thread message-id information in the reply. I'll check and avoid creating
a new thread when replying to an existing thread.

Best Regards
Chaoyi Huang ( joehuang)


From: Jay Pipes [jaypi...@gmail.com]
Sent: 29 August 2016 18:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/28/2016 09:02 PM, joehuang wrote:
> Hello, Bin,
>
> Understand your expectation. In the Tricircle big-tent application,
> https://review.openstack.org/#/c/338796/, a proposal was also made to add a
> plugin mechanism in the Nova/Cinder API layer, just like Neutron supports a
> plugin mechanism in its API layer; that boosts innovation by allowing
> different backend implementations to be supported, from ODL to OVN and Open
> Contrail.
>
> Mobile edge computing, NFV networking, distributed edge cloud, etc. are some
> new scenarios for OpenStack. I suggest having at least two successive
> dedicated design summit sessions to discuss that f2f; the topics to be
> discussed could be:
>
> 1, Use cases
> 2, Requirements  in detail
> 3, Gaps in OpenStack
> 4, Proposal to be discussed
>
> Architecture-level proposal discussion
> 1, Proposals
> 2, Pros and cons comparison
> 3, Challenges
> 4, Next steps
>
>
> Looking forward to your thoughts.

We could also have a design summit session on how to use a mail user
agent that doesn't create a new mailing list thread when you're responding
to an existing thread. We could also include a topic about top-posting.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-29 Thread Yujun Zhang
Patch work is in progress [1], but a local test fails [2].

It seems to be caused by the mock_sync.

I'm still looking into it. Any help would be appreciated.

[1] https://review.openstack.org/#/c/362525
[2] http://pastebin.com/iepqxUAP


On Mon, Aug 29, 2016 at 4:59 PM Yujun Zhang 
wrote:

> Thanks, Alexey. Point 1 and 3 are pretty clear.
>
> As for point 2, if I understand it correctly, you are suggesting to modify
> the static_physical.yaml as follows:
>
> entities:
>  - type: switch
>    name: switch-1
>    id: switch-1 # should be same as name
>    state: available
>    relationships:
>      - type: nova.host
>        name: host-1
>        id: host-1 # should be same as name
>        is_source: true # entity is `source` in this relationship
>        relation_type: attached
>      - type: switch
>        name: switch-2
>        id: switch-2 # should be same as name
>        is_source: false # entity is `target` in this relationship
>        relation_type: backup
>
> But I wonder why the static physical configuration file uses a different
> format from the vitrage template definitions [1].
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst
>
> On Sun, Aug 28, 2016 at 4:14 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
>
>> Hi Yujun,
>>
>>
>>
>> In order for the static_physical to work for different datasources
>> without the restrictions, you need to make the following changes:
>>
>> Go to the static_physical transformer:
>>
>> 1.   Remove the methods: _register_relations_direction,
>> _find_relation_direction_source.
>>
>> 2.   Add to the static_physical.yaml for each definition also a
>> field for direction which will indicate the source and the destination
>> between the datasources.
>>
>> 3.   In method: _create_neighbor, remove the usage of method
>> _find_relation_direction_source, and use the new definition from the yaml
>> file here to decide the edge direction.
>>
>>
>>
>> Is it ok?
>>
>>
>>
>> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
>> *Sent:* Friday, August 26, 2016 4:22 AM
>>
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
>> static_datasources
>>
>>
>>
>> Lost in the code... It seems the datasource just constructs the entities
>> and sends them over the event bus to the entity graph processor. I need to dig
>> further to find out the exact point where the "backup" relationship is filtered.
>>
>>
>>
>> I think we should somehow keep the validation of the relationship type. It
>> is so easy to make a typo when creating the template manually (I did this
>> quite often...).
>>
>>
>>
>> My idea is to delegate the validation to the datasource instead of
>> enumerating all the constants in the evaluator. I think this will give
>> better extensibility. Any comments?
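>>
>> To illustrate what I mean (a hypothetical sketch only, not actual Vitrage
>> code; the class and function names are made up), each datasource could
>> declare the relationship types it supports, and a generic validator would
>> consult that declaration instead of a global enum in the evaluator:
>>
>> class StaticPhysicalDatasource(object):
>>     NAME = 'static_physical'
>>     SUPPORTED_RELATIONSHIP_TYPES = {'attached', 'backup'}
>>
>> DATASOURCE_REGISTRY = {
>>     StaticPhysicalDatasource.NAME: StaticPhysicalDatasource,
>> }
>>
>> def validate_relationship(datasource_name, relationship_type):
>>     """Return True if the datasource accepts this relationship type."""
>>     datasource = DATASOURCE_REGISTRY.get(datasource_name)
>>     if datasource is None:
>>         raise ValueError('unknown datasource: %s' % datasource_name)
>>     return relationship_type in datasource.SUPPORTED_RELATIONSHIP_TYPES
>>
>> # validate_relationship('static_physical', 'backup') -> True, while a typo
>> # like 'bakcup' -> False, so it can be reported to the template author.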
>>
>>
>>
>> On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) <
>> alexey.w...@nokia.com> wrote:
>>
>> Hi Yujun,
>>
>>
>>
>> You can find the names of the labels in the constants.py file.
>>
>>
>>
>> In addition, the restriction on the physical_static datasource is done in
>> its driver.py.
>>
>>
>>
>> Alexey
>>
>>
>>
>> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
>> *Sent:* Thursday, August 25, 2016 4:50 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
>> static_datasources
>>
>>
>>
>> Hi, Ifat,
>>
>>
>>
>> I searched for edge_labels in the project. It seems it is validated only
>> in `vitrage/evaluator/template_validation/template_syntax_validator.py`.
>> Where is such a restriction applied in static_datasources?
>>
>>
>>
>> --
>>
>> Yujun
>>
>>
>>
>> On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) <
>> ifat.a...@nokia.com> wrote:
>>
>> Hi Yujun,
>>
>>
>>
>> Indeed, we have some restrictions on the relationship types that can be
>> used in the static datasources. I think we should remove these
>> restrictions, and allow any kind of relationship type.
>>
>>
>>
>> Best regards,
>>
>> Ifat.
>>
>>
>>
>> *From: *Yujun Zhang
>> *Date: *Monday, 22 August 2016 at 08:37
>>
>> I'm following the sample configuration in the docs [1] to verify how static
>> datasources work.
>>
>>
>>
>> It seems the `backup` relationship is not displayed in the entity graph view,
>> nor is it included in `topology show`.
>>
>>
>>
>> There is an enumeration for edge labels [2]. Should relationships in
>> static datasources be limited to it?
>>
>>
>>
>> [1]
>> https://github.com/openstack/vitrage/blob/master/doc/source/static-physical-config.rst
>>
>> [2]
>> https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L49
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] [watcher] Mascot final choice

2016-08-29 Thread Susanne Balle
When will we know about the mascot as well as what the design looks like?

Susanne

On Thu, Jul 28, 2016 at 6:46 PM, Joe Cropper  wrote:

> +2 to Jellyfish!
>
> > On Jul 28, 2016, at 4:08 PM, Antoine Cabot 
> wrote:
> >
> > Hi Watcher team,
> >
> > Last week during the mid-cycle, we came up with a list of possible
> mascots for Watcher. The only one which is in conflict with other projects
> is the bee.
> > So we have this final list :
> > 1. Jellyfish
> > 2. Eagle
> > 3. Hammerhead shark
> >
> > I'm going to confirm jellyfish as the Watcher mascot by EOW unless
> any contributor is against this choice. Please let me know.
> >
> > Antoine
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Joshua Harlow




 From a brief look, it seems like vCPE is more along the lines of the
customer having a "thin" device on their premise and their (now
virtual) network functions, eg. firewall, live in the providers data
center over a private link created by that thin device. So having a
hypervisor on a customer premise is probably not what most telecoms
would consider vCPE [1].

But in my (limited) example, I'm not talking about managing that thin
device, I am thinking of a hypervisor or two instead in a customer
premise, or remote location, that is controlled by some (magic?)
remote nova, and yeah, would have access to glance, etc, to deploy
instances, basically as a way of avoiding running an OpenStack control
plane there. But not so much in the way of managing upgrades of the
software on that virtual machine on that hypervisor or anything, just
acting as IaaS.


So what/who is the cloud user in this case? It almost seems like there
isn't much of a user (in the sense of a customer, like say myself)
involved in this equation. Instead there is really just the provider's user
that is issuing these commands and not much else? Is the user the
provider themselves (so they can take advantage of the under-utilized
resources on the customer premise)?




Thanks,
Curtis.

[1]: Pg. 48 - 
http://innovation.verizon.com/content/dam/vic/PDF/Verizon_SDN-NFV_Reference_Architecture.pdf



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-29 Thread Matt Riedemann

On 8/29/2016 3:46 PM, Sean Dague wrote:

On 08/29/2016 03:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.

Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be
writing inventory and allocation data via the placement API. Get the
data pumping into the placement API in Newton so we can start using it
in Ocata."

1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and
will be optional for Newton. There are details to be sorted about
if/when to retry connecting to the placement service with or without
requiring a restart of nova-compute, but I don't think those are too hairy.

Jay is working on changes that go on top of that series to push the
inventory and allocation data from the resource tracker to the placement
service.

Chris Dent pointed out that there is remaining work to do with the
allocation objects in the placement API, but those can be worked in
parallel to the RT work Jay is doing.


If the devstack patch is up in the morning, I can help get 358797 into a
merge state with local testing of both the placement API working and it
not working. Seeing the feedback so far I think I can do that chunk, which would
free Jay to work on the follow-on patches, which don't seem to be posted yet.


2. Chris is going to cleanup the devstack change that adds the placement
service:

https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least
not by default, so Chris has to take that into account. In Newton, by
default, the Nova API DB will be used for the placement service. You can
optionally configure a separate placement database with the API DB
schema, but we're not going to test with that as the default in devstack
in Newton since that's most likely not what deployers would be doing in
Newton as the placement service is still optional.

3. I'm going to work on a job that runs in the experimental queue and
enables the placement service. So by default in Newton devstack the
placement service will not be configured or running. With the
experimental queue job we can test the Nova changes with and without the
placement service to make sure we didn't completely screw something up.


The early testing for the placement-api job is in this devstack-gate change:

https://review.openstack.org/#/c/362441/

Which is failing as expected in the devstack setup because the placement 
DB is no longer a thing we're doing, and we know the devstack change 
needs to be cleaned up for that.




--

If I've left something out please add it here.







--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Curtis
On Mon, Aug 29, 2016 at 2:15 PM, Joshua Harlow  wrote:
> Curtis wrote:
>>
>> On Mon, Aug 29, 2016 at 1:27 PM, gordon chung  wrote:
>>>
>>> just to clarify, what 'innovation' do you believe is required to enable
>>> you
>>> to build on top of OpenStack. what are the feature gaps you are
>>> proposing?
>>> let's avoid defining "the cloud" since that will give you 1000 different
>>> answers if you ask 1000 different people.*
>>
>>
>> One idea I hear fairly often is having a couple of hypervisors in say
>> a single store or some other customer premise, but not wanting to also
>> run an OpenStack control plane there. If we are talking about a
>> hypervisor level, not some other unknown but smaller IoTs...uh thing,
>> does that make more sense from a OpenStack + vCPE context? Or do some
>> think that is out of scope for OpenStack's mission as well?
>>
>> Thanks,
>> Curtis.
>>
>
> 
>
> So is that like making a customer premise have the equivalent of dumb
> terminals (maybe we can call them 'semi-smart' terminals) where those things
> basically can be remote controlled (aka the VMs on them can be upgraded or
> downgraded or deleted or ...) by the corporate (or other) entity that is
> controlling those terminals?
>
> If that's the case, then I don't exactly call that a cloud (in my classical
> sense), but more of a software delivery (and remote-control) strategy (and
> using nova to do this for u?).
>
> But then I don't know all the 3 or 4 letter acronyms so who knows, I might
> be incorrect with the above assumption :-P

From a brief look, it seems like vCPE is more along the lines of the
customer having a "thin" device on their premise and their (now
virtual) network functions, eg. firewall, live in the providers data
center over a private link created by that thin device. So having a
hypervisor on a customer premise is probably not what most telecoms
would consider vCPE [1].

But in my (limited) example, I'm not talking about managing that thin
device, I am thinking of a hypervisor or two instead in a customer
premise, or remote location, that is controlled by some (magic?)
remote nova, and yeah, would have access to glance, etc, to deploy
instances, basically as a way of avoiding running an OpenStack control
plane there. But not so much in the way of managing upgrades of the
software on that virtual machine on that hypervisor or anything, just
acting as IaaS.

Thanks,
Curtis.

[1]: Pg. 48 - 
http://innovation.verizon.com/content/dam/vic/PDF/Verizon_SDN-NFV_Reference_Architecture.pdf


>
> -Josh
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Blog: serverascode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][osic] OSIC cluster ready with CentOS

2016-08-29 Thread Vikram Hosakote (vhosakot)
Hi Kollagues,

Scenario # 11 (fio_many_osd) ended successfully.  Daviey and I installed
CentOS on all the 130 target nodes in the OSIC cluster today.  We also:

1.  Deleted all the Ubuntu images and volumes on the deploy node
(729494-comp-s3500-002).
2.  Cleaned up /var/lib/docker on the deploy node.
3.  Started building kolla CentOS source images.
4.  Updated the cobbler profile and cobbler kickstart for CentOS.

The same kolla-ansible inventory file used for Ubuntu testing can be
used for CentOS as well.  Once the images are built, we can run tests on
CentOS before handing the OSIC cluster back tomorrow.
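
For anyone reproducing step 3 above, the image build was started with
something along the lines of the command below (standard kolla-build options;
the exact registry/namespace flags used on the deploy node may differ):

  # build all Kolla images from source on a CentOS base
  kolla-build --base centos --type source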

Regards,
Vikram Hosakote
IRC:  vhosakot
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Barcelona summit space requirements and session planning etherpad

2016-08-29 Thread Nikhil Komawar
Just a friendly reminder that I will be sending the final summit
planning request for slots for Glance first thing tomorrow. So, please
cast your vote if you haven't already. Thanks!


From the looks of it, the current winner looks to be FB: 2, WR: 2, CM: 1
(Friday afternoon) -- but this could change.

https://etherpad.openstack.org/p/ocata-glance-summit-planning


On 8/25/16 11:58 AM, Nikhil Komawar wrote:
> Hi,
>
>
> Just wanted to point out to those who haven't been to Glance meetings in
> the past couple of weeks that we have to submit space requirements for the
> Barcelona design summit early next week. I've listed the constraints
> posed in front of us in the planning etherpad [1]. Please see the top
> portion of this etherpad under "Layout Proposal" to either propose or
> vote on the layout proposal options to help us collaboratively determine
> the space needs for Glance. Currently there are 2 proposals and if you
> don't have any other in mind, please cast your vote on the given.
>
>
> I need the votes by EOD on Monday 29th Aug and will be sending our final
> space requirement request first thing on Tuesday 30th.
>
>
> On another note, if you want to start proposing sessions for the summit
> feel free to scroll to the bottom of the etherpad for the template and
> the slots for the topics.
>
>
> Let me know if you've any questions.
>
>
> [1] https://etherpad.openstack.org/p/ocata-glance-summit-planning
>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] stable/newton branching schedule

2016-08-29 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-08-29 16:49:20 -0400:
> Excerpts from Doug Hellmann's message of 2016-08-26 10:29:19 -0400:
> > I plan to create the stable/newton branches for non-client libraries on
> > Monday based on the most recently tagged versions according to
> > the deliverable files in openstack/releases. If you *know* you are going
> > to need a bug fix release and want me to hold off, please speak up
> > before Monday morning US Eastern time.
> 
> I've started creating these branches for teams where the PTL confirmed
> they were ready. Please look for the .gitreview [1] and reno updates
> [2] on the new branches and take over the patches if they need to
> be fixed up in order to land properly.
> 
> Doug
> 
> [1] https://review.openstack.org/#/q/topic:create-newton
> [2] https://review.openstack.org/#/q/topic:reno-newton
> 

My process document was missing a step, so most of those patches won't
land until we update the devstack-gate feature list [3]. I'll announce
when it's safe to recheck.

Sorry for the confusion,
Doug

[3] https://review.openstack.org/362435

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] stable/newton branching schedule

2016-08-29 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-08-26 10:29:19 -0400:
> I plan to create the stable/newton branches for non-client libraries on
> Monday based on the most recently tagged versions according to
> the deliverable files in openstack/releases. If you *know* you are going
> to need a bug fix release and want me to hold off, please speak up
> before Monday morning US Eastern time.

I've started creating these branches for teams where the PTL confirmed
they were ready. Please look for the .gitreview [1] and reno updates
[2] on the new branches and take over the patches if they need to
be fixed up in order to land properly.
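
For reference, the .gitreview update on each new branch just adds the
defaultbranch setting; a sketch (the project name here is only illustrative):

  [gerrit]
  host=review.openstack.org
  port=29418
  project=openstack/example-library.git
  defaultbranch=stable/newton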

Doug

[1] https://review.openstack.org/#/q/topic:create-newton
[2] https://review.openstack.org/#/q/topic:reno-newton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-29 Thread Sean Dague
On 08/29/2016 03:40 PM, Matt Riedemann wrote:
> I've been out for a week and not very involved in the resource providers
> work, but after talking about the various changes up in the air at the
> moment a bunch of us thought it would be helpful to lay out next steps
> for the work we want to get done this week.
> 
> Keep in mind feature freeze is more or less Thursday 9/1.
> 
> Also keep in mind the goal from the midcycle:
> 
> "Jay's personal goal for Newton is for the resource tracker to be
> writing inventory and allocation data via the placement API. Get the
> data pumping into the placement API in Newton so we can start using it
> in Ocata."
> 
> 1. The ResourceTracker work starts here:
> 
> https://review.openstack.org/#/c/358797/
> 
> That relies on the placement service being in the service catalog and
> will be optional for Newton. There are details to be sorted about
> if/when to retry connecting to the placement service with or without
> requiring a restart of nova-compute, but I don't think those are too hairy.
> 
> Jay is working on changes that go on top of that series to push the
> inventory and allocation data from the resource tracker to the placement
> service.
> 
> Chris Dent pointed out that there is remaining work to do with the
> allocation objects in the placement API, but those can be worked in
> parallel to the RT work Jay is doing.

If the devstack patch is up in the morning, I can help get 358797 into a
merge state with local testing of both the placement API working and it
not working. Seeing the feedback so far I think I can do that chunk, which would
free Jay to work on the follow-on patches, which don't seem to be posted yet.

> 2. Chris is going to cleanup the devstack change that adds the placement
> service:
> 
> https://review.openstack.org/#/c/342362/
> 
> The main issue is there isn't a separate placement database, at least
> not by default, so Chris has to take that into account. In Newton, by
> default, the Nova API DB will be used for the placement service. You can
> optionally configure a separate placement database with the API DB
> schema, but we're not going to test with that as the default in devstack
> in Newton since that's most likely not what deployers would be doing in
> Newton as the placement service is still optional.
> 
> 3. I'm going to work on a job that runs in the experimental queue and
> enables the placement service. So by default in Newton devstack the
> placement service will not be configured or running. With the
> experimental queue job we can test the Nova changes with and without the
> placement service to make sure we didn't completely screw something up.
> 
> -- 
> 
> If I've left something out please add it here.
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Gluon; was Coordination between actions/WGs

2016-08-29 Thread Ian Wells
On 29 August 2016 at 03:48, Jay Pipes  wrote:

> On 08/27/2016 11:16 AM, HU, BIN wrote:
>
>> So telco use cases is not only the innovation built on top of OpenStack.
>> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
>> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in
>> OpenStack itself. If OpenStack don't address those basic requirements,
>>
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco
> vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking for
> fundamental architectural and design changes to the foundational components
> of OpenStack. Instead of Nova being designed to manage a bunch of hardware
> in a relatively close location (i.e. a datacenter or multiple datacenters),
> vCPE is asking for Nova to transform itself into a micro-agent that can be
> run on an Apple Watch and do things in resource-constrained environments
> that it was never built to do.
>

This conversation started above, but in a war of analogies became this:


> And, honestly, I have no idea what Gluon is trying to do. Ian sent me some
> information a while ago on it. I read it. I still have no idea what Gluon
> is trying to accomplish other than essentially bypassing Neutron entirely.
> That's not "innovation". That's subterfuge.


Gluon, as written, does allow you to bypass Neutron, but as I'm sure you
understand, I did have more useful features on my mind than 'subterfuge'.
Let me lay this out in the clear again, since I've had this conversation
with you and others and I'm not getting my point across.  And in keeping
with the discussion I'll start with an analogy.

When we started out, Nova was the compute component for OpenStack.  I'd
like to remind you of the problems we had with the docker driver, because
docker containers are like, but also unlike, virtual machines.  They're
compute containers, they support networking, but their storage requirements
are weird; they tend to use technologies unlike conventional disk images,
they're not exactly capable of using block devices without help, and so
on.  You can change Nova to support that, or you can say 'these are
sufficiently different that we should have a separate API'.  I see we have
Zun now, so someone's trying that approach.  They're 'bypassing' Nova.

Here, we're talking about different compute components that have
similarities and differences to virtual machines.  If they're similar
enough, then building into Nova is logical - it will require some change to
the APIs (let's forget the internal code for a moment) but not drastic
ones; ones that are backward compatible.  If they're different enough you
sit something new alongside Nova.

Neutron is the networking component for OpenStack the same way that Nova is
compute.  It brings together the sorts of thing you would want to run cloud
applications on a public cloud, and as it happens these concepts also work
reasonably nicely for many other cloud use cases.  But - today - it is not
'all networking everywhere', it's 'networking with a specific focus on L2
domains' - because this solves the majority of its users' problems.  (We
can quibble about whether a 'network' in Neutron must be L2, because it's
not exactly documented as such, but I would like to point out the plugin
that most people use today to implement networks is called 'ML2' and the
only way to attach a port to anything is to attach it to a network with
location-independent subnets.  Suffice it to say that the consumer of the
API can treat it like an L2 network.)

There comes a question, then.  If it is to be the only networking project
in OpenStack, for it to be 'all networking everywhere', then we need to
address the problem that its current API does not suit every form of
networking in existence.  We need to do this without affecting every single
person who uses OpenStack as it is and doesn't want or need every new bit
of functionality.  For that we have extensions within Neutron, but they're
still constrained to operate within Neutron's existing API structure.  The
most complex ones tend to work on the principle of 'networks work as you
expect until the extension steps in, then they become a bit weird and
special'.  This isn't the way to write a system with widely understood and
easy-to-use APIs.  Really you're just tolerating the history of Neutron
because you don't have a choice.  It also makes for something which turns a bit
monolithic and complex in practice (e.g. forwarding elements being
programmed by multiple bits of independent code).

Some of the APIs we were experimenting with were things that already
existed as Neutron extensions, such as MPLS/BGP overlays.  Some that we'd
like to try in the future include things like point-to-point connectivity,
or comprehensively routed networks.  But as much as anything the point is
that we know that networking changes over time and people have new ideas of
how to use what exists, so we're trying to make 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Joshua Harlow

Curtis wrote:

On Mon, Aug 29, 2016 at 1:27 PM, gordon chung  wrote:

just to clarify, what 'innovation' do you believe is required to enable you
to build on top of OpenStack. what are the feature gaps you are proposing?
let's avoid defining "the cloud" since that will give you 1000 different
answers if you ask 1000 different people.*


One idea I hear fairly often is having a couple of hypervisors in say
a single store or some other customer premise, but not wanting to also
run an OpenStack control plane there. If we are talking about a
hypervisor level, not some other unknown but smaller IoTs...uh thing,
does that make more sense from a OpenStack + vCPE context? Or do some
think that is out of scope for OpenStack's mission as well?

Thanks,
Curtis.





So is that like making a customer premise have the equivalent of dumb 
terminals (maybe we can call them 'semi-smart' terminals) where those 
things basically can be remote controlled (aka the VMs on them can be 
upgraded or downgraded or deleted or ...) by the corporate (or other) 
entity that is controlling those terminals?


If that's the case, then I don't exactly call that a cloud (in my 
classical sense), but more of a software delivery (and remote-control) 
strategy (and using nova to do this for u?).


But then I don't know all the 3 or 4 letter acronyms so who knows, I 
might be incorrect with the above assumption :-P


-Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Curtis
On Mon, Aug 29, 2016 at 1:27 PM, gordon chung  wrote:
> just to clarify, what 'innovation' do you believe is required to enable you
> to build on top of OpenStack. what are the feature gaps you are proposing?
> let's avoid defining "the cloud" since that will give you 1000 different
> answers if you ask 1000 different people.*

One idea I hear fairly often is having a couple of hypervisors in say
a single store or some other customer premise, but not wanting to also
run an OpenStack control plane there. If we are talking about a
hypervisor level, not some other unknown but smaller IoTs...uh thing,
does that make more sense from a OpenStack + vCPE context? Or do some
think that is out of scope for OpenStack's mission as well?

Thanks,
Curtis.

>
> * actually you'll get 100 answers and the rest will say: "i don't know."
>
>
> On 29/08/16 12:23 PM, HU, BIN wrote:
>
> Please see inline [BH526R].
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 29, 2016 3:48 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> On 08/27/2016 11:16 AM, HU, BIN wrote:
>
> The challenge in OpenStack is how to enable the innovation built on top of
> OpenStack.
>
> No, that's not the challenge for OpenStack.
>
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
>
> [BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine.
> While Jet A-1 and Jet B fuel are widely used for jet engine today,
> innovation of a new generation of jet engine may require an innovation of
> new type of aviation fuel.
>
> So telco use cases is not only the innovation built on top of
> OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE
> Cloud, Mobile Cloud, Mobile Edge Cloud, brings the needed requirement
> for innovation in OpenStack itself. If OpenStack don't address those
> basic requirements,
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE
> and Mobile "Edge cloud" (hint: not a cloud) use cases are asking for
> fundamental architectural and design changes to the foundational components
> of OpenStack. Instead of Nova being designed to manage a bunch of hardware
> in a relatively close location (i.e. a datacenter or multiple datacenters),
> vCPE is asking for Nova to transform itself into a micro-agent that can be
> run on an Apple Watch and do things in resource-constrained environments
> that it was never built to do.
>
> [BH526R] So we have 2 choices here - either to explicitly exclude telco
> requirement from OpenStack, and clearly indicate that telco needs to work on
> its own "telco stack"; or to allow telco to innovate within OpenStack
> through perhaps a new type of "telco nova" and/or "telco Neutron". Which way
> do you suggest?
>
> And, honestly, I have no idea what Gluon is trying to do. Ian sent me some
> information a while ago on it. I read it. I still have no idea what Gluon is
> trying to accomplish other than essentially bypassing Neutron entirely.
> That's not "innovation". That's subterfuge.
>
> [BH526R] Thank you for recognizing you don't know Gluon. Certainly the
> perception of "bypassing Neutron entirely" is incorrect. You are very
> welcome to join our project and meeting so that you can understand more of
> what Gluon is. We are also happy to set up specific meetings with you to
> discuss it too. Just let me know which way prefer. We are looking for you to
> participate in Gluon project and meeting.
>
> [BH526R] On the other hand, I also try to understand why "bypassing Neutron
> entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon
> here, but) if there is an innovation that can replace Neutron entirely,
> everyone should be happy. Just like automobile bypassed carriage wagon
> entirely.
>
> the innovation will never happen on top of OpenStack.
>
> Sure it will. AT&T and BT and other Telcos just need to write their own
> software that runs their proprietary vCPE software distribution mechanism,
> that's all. The OpenStack community shouldn't be relied upon to create
> software that isn't applicable to general cloud computing and cloud
> management platforms.
>
> [BH526R] If I understand correctly, this suggestion excludes telco from
> OpenStack entirely. That's fine.
>
> An example is - self-driving car is built on top of many technologies, such
> as sensor/camera, AI, maps, middleware etc. All innovations in each
> technology (sensor/camera, AI, map, etc.) bring together the innovation of
> self-driving car.
>
> Yes, indeed, but the people who created the self-driving car software didn't
> ask the people who created the cameras to write the software for them that
> does the self-driving.
>
> [BH526R] It's actually the other way around. Furthermore, camera/sensor
> industry does see the need, and VC's funding has been dramatically 

[openstack-dev] [nova] Next steps for resource providers work

2016-08-29 Thread Matt Riedemann
I've been out for a week and not very involved in the resource providers 
work, but after talking about the various changes up in the air at the 
moment a bunch of us thought it would be helpful to lay out next steps 
for the work we want to get done this week.


Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be 
writing inventory and allocation data via the placement API. Get the 
data pumping into the placement API in Newton so we can start using it 
in Ocata."


1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and 
will be optional for Newton. There are details to be sorted about 
if/when to retry connecting to the placement service with or without 
requiring a restart of nova-compute, but I don't think those are too hairy.


Jay is working on changes that go on top of that series to push the 
inventory and allocation data from the resource tracker to the placement 
service.


Chris Dent pointed out that there is remaining work to do with the 
allocation objects in the placement API, but those can be worked in 
parallel to the RT work Jay is doing.


2. Chris is going to clean up the devstack change that adds the placement
service:


https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least 
not by default, so Chris has to take that into account. In Newton, by 
default, the Nova API DB will be used for the placement service. You can 
optionally configure a separate placement database with the API DB 
schema, but we're not going to test with that as the default in devstack 
in Newton since that's most likely not what deployers would be doing in 
Newton as the placement service is still optional.


3. I'm going to work on a job that runs in the experimental queue and 
enables the placement service. So by default in Newton devstack the 
placement service will not be configured or running. With the 
experimental queue job we can test the Nova changes with and without the 
placement service to make sure we didn't completely screw something up.
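
As a rough illustration of what that job will toggle, enabling the optional
service in a devstack local.conf would look something like the snippet below
(the service name placement-api is an assumption based on the devstack change
under review and may change before it merges):

  [[local|localrc]]
  # opt in to the placement service, which stays disabled by default in Newton
  enable_service placement-api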


--

If I've left something out please add it here.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-29 Thread Sean McGinnis
On Mon, Aug 29, 2016 at 10:50:27AM -0400, Doug Hellmann wrote:
> We have several projects using the cycle-with-intermediary release
> model for which we have not had any releases yet this cycle. Please
> consider a release this week, and be aware that we need a release
> by the final deadline in order to consider these projects as part
> of Newton (see [1] for details).
> 
> Thanks,
> Doug
> 
> 
> python-barbicanclient
> python-brick-cinderclient-ext

Since this is an extension to python-cinderclient, we treat this the
same and have the same release deadline for client libraries. I plan on
requesting a final Newton release for this along with
python-cinderclient this week. It just doesn't have enough activity to
have needed a release any earlier.


> cloudkitty
> cloudkitty-dashboard
> python-cloudkittyclient
> bifrost
> magnum
> magnum-ui
> monasca-transform
> python-searchlightclient
> senlin-dashboard
> python-solumclient
> solum
> solum-dashboard
> solum-infra-guestagent
> python-swiftclient
> tripleo-quickstart
> tripleo-ui
> watcher-dashboard
> networking-hyperv
> 
> [1] 
> http://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html#requirements
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Sean McGinnis
On Mon, Aug 29, 2016 at 09:29:57AM -0400, Andrew Laski wrote:
> 
> 
> 
> On Mon, Aug 29, 2016, at 09:06 AM, Jordan Pittier wrote:
> >
> >
> > On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng
> >  wrote:
> >> Hi, all
> >>
> >> Currently we have customer demands for adding a parameter
> >> "volume_type" to --block-device to support specifying the
> >> storage backend when booting an instance. And I found a newly drafted
> >> blueprint aiming to address the same feature:
> >> https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type
> >> ;
> >>
> >> As I know, this is kind of a "proxy" feature for Cinder and we don't
> >> like that in general, but since the boot-from-volume functionality is
> >> already there, maybe it is OK to support another parameter?
> >>
> >> So, my question is: what are your opinions about this in general?
> >> Do you like it or it will not be able to got approved at all?
> >>
> >> Thanks,
> >>
> >> Kevin Zheng
> >
> > Hi,
> > I think it's not a great idea. Not only for the reason you mention,
> > but also because the "nova boot" command is already way too complicated
> > with way too many options. IMO we should only add support for new
> > features, not "features" we can have by other means, just for
> > convenience.
> 
> I completely agree with this. However I have some memory of us
> saying(in Austin?) that adding volume_type would be acceptable since
> it's a clear oversight in the list of parameters for specifying a block
> device. So while I greatly dislike Nova creating volumes and would
> rather users pass in pre-created volume ids I would support adding this
> parameter. I do not support continuing to add parameters if Cinder adds
> parameters though.
> 

FWIW, I get asked the question on the Cinder side of how to specify
which volume type to use when booting from a Cinder volume on a fairly
regular basis.

I agree with the approach of not adding more proxy functionality in
Nova, but since this is an existing feature that is missing expected
functionality, I would like to see this get in.
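
(For context, the two-step workaround I end up pointing people to looks
roughly like the commands below; the volume type, image ID and size are
illustrative, and the volume_type key in the commented-out form is the
proposed, not-yet-existing parameter:)

  # 1) create a bootable volume with the desired Cinder volume type
  cinder create --volume-type fast-ssd --image-id <image-uuid> --name boot-vol 20

  # 2) boot the instance from the pre-created volume
  nova boot --flavor m1.small \
    --block-device id=<volume-uuid>,source=volume,dest=volume,bootindex=0 my-instance

  # proposed single-step form under discussion (hypothetical):
  # nova boot --flavor m1.small \
  #   --block-device source=image,id=<image-uuid>,dest=volume,size=20,volume_type=fast-ssd,bootindex=0 my-instance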

Just my $0.02.

Sean

> 
> >
> >
> > -
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Requesting FFE for improved Swift deployments

2016-08-29 Thread Christian Schwede
Hello,

kindly asking for an FFE for a required setting to improve Swift-based
TripleO deployments:

https://review.openstack.org/#/c/358643/

This is required to land the last patch in a series of TripleO-doc patches:

https://review.openstack.org/#/c/293311/
https://review.openstack.org/#/c/360353/
https://review.openstack.org/#/c/361032/

The current idea is to automate the described manual actions for Ocata.
There was some discussion on the ML as well:

http://lists.openstack.org/pipermail/openstack-dev/2016-August/102053.html

If one is interested in testing this with tripleo-quickstart, here is a
patch to automatically add extra blockdevices to the overcloud VMs:

https://review.openstack.org/#/c/359630/

Thanks a lot!

-- Christian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread gordon chung
just to clarify, what 'innovation' do you believe is required to enable you to 
build on top of OpenStack? what are the feature gaps you are proposing? let's 
avoid defining "the cloud" since that will give you 1000 different answers if 
you ask 1000 different people.*

* actually you'll get 100 answers and the rest will say: "i don't know."

On 29/08/16 12:23 PM, HU, BIN wrote:


Please see inline [BH526R].

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, August 29, 2016 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:


The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.



No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the innovation 
of a jet engine.

[BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine. 
While Jet A-1 and Jet B fuel are widely used for jet engine today, innovation 
of a new generation of jet engine may require an innovation of new type of 
aviation fuel.



So telco use cases are not only the innovation built on top of
OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE
Cloud, Mobile Cloud, Mobile Edge Cloud, bring the needed requirements
for innovation in OpenStack itself. If OpenStack doesn't address those
basic requirements,



That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE and 
Mobile "Edge cloud" (hint: not a cloud) use cases are asking for fundamental 
architectural and design changes to the foundational components of OpenStack. 
Instead of Nova being designed to manage a bunch of hardware in a relatively 
close location (i.e. a datacenter or multiple datacenters), vCPE is asking for 
Nova to transform itself into a micro-agent that can be run on an Apple Watch 
and do things in resource-constrained environments that it was never built to 
do.

[BH526R] So we have 2 choices here - either to explicitly exclude telco 
requirement from OpenStack, and clearly indicate that telco needs to work on 
its own "telco stack"; or to allow telco to innovate within OpenStack through 
perhaps a new type of "telco nova" and/or "telco Neutron". Which way do you 
suggest?

And, honestly, I have no idea what Gluon is trying to do. Ian sent me some 
information a while ago on it. I read it. I still have no idea what Gluon is 
trying to accomplish other than essentially bypassing Neutron entirely. That's 
not "innovation". That's subterfuge.

[BH526R] Thank you for recognizing you don't know Gluon. Certainly the 
perception of "bypassing Neutron entirely" is incorrect. You are very welcome 
to join our project and meeting so that you can understand more of what Gluon 
is. We are also happy to set up specific meetings with you to discuss it too. 
Just let me know which way you prefer. We would welcome your participation in 
the Gluon project and meetings.

[BH526R] On the other hand, I also try to understand why "bypassing Neutron 
entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon 
here, but) if there is an innovation that can replace Neutron entirely, 
everyone should be happy. Just like automobile bypassed carriage wagon entirely.



the innovation will never happen on top of OpenStack.



Sure it will. AT&T and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution mechanism, 
that's all. The OpenStack community shouldn't be relied upon to create software 
that isn't applicable to general cloud computing and cloud management platforms.

[BH526R] If I understand correctly, this suggestion excludes telco from 
OpenStack entirely. That's fine.



An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.



Yes, indeed, but the people who created the self-driving car software didn't 
ask the people who created the cameras to write the software for them that does 
the self-driving.

[BH526R] It's actually the other way around. Furthermore, the camera/sensor 
industry does see the need, and VC funding has increased dramatically for 
investment in the camera/sensor, map, and AI areas. Startups in those areas are 
among the fastest growing. Those investments and innovations accelerate the 
maturity of self-driving cars.



WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.



You are defining "innovation" in an odd way, IMHO. "Innovation" for the vCPE 
use case sounds a whole lot like "rearchitect your entire software stack so 
that we don't have to write much code that runs on set-top boxes."

[BH526R] Certainly it is 

Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Loo, Ruby
Hi,

Thanks for everyone's comments. Dean's were very useful, I'm going to file 
those away for future commands.

In this case, I think we'll go with 'passthru' since I do want to make life 
easier for the operators :)

--ruby

On 2016-08-29, 12:00 PM, "Jay Faulkner" > wrote:


On Aug 29, 2016, at 8:19 AM, Dean Troyer 
> wrote:

On Mon, Aug 29, 2016 at 9:41 AM, Loo, Ruby 
> wrote:
I did this because 'passthrough' is more English than 'passthru' and I thought 
that was the 'way to go' in osc. But some folks wanted it to be 'passthru' 
because in ironic, we've been calling them 'passthru' since day 2.

Our default rule is to use proper spellings and not abbreviations[0].  The 
exceptions we have made are due to either a) significant existing practice in 
the industry (outside OpenStack, mostly in the network area so far); and b) 
when the user experience is clearly improved.

To be clear: thru is a valid English word, in just about every dictionary I've 
checked. In fact, some evidence shows it predates "through" as a word. I agree 
with other folks who have posted on the mailing list that keeping "passthru" is 
going to be more clear to operators of ironic than changing it to "passthrough" 
in this single context.

Thanks,
Jay Faulkner
OSIC


You might notice that calling out prior OpenStack usage is absent from that 
list.  One of the tenets of OSC from the start is to look first at user 
experience and identifying a _single_ set of terminology.  An existing practice 
can fall under (b) when it is compelling overall, and is an easier case to make 
when there is no competing OSC usage, or other OSC usage matches.

Unfortunately, I wasn't able to make everyone happy because someone else thinks 
that we shouldn't be providing two different openstack commands that provide 
the same functionality. (They're fine with either one, just not both.)

I agree with not aliasing commands out of the box.  We'll do that for 
deprecations, and are looking at a generalize alias method for other reasons, 
but on initial implementation I would prefer to not do this.

What do the rest of the folks think? Some guidance from the OpenStackClient 
folks would be greatly appreciated.

I would suggest you pick the one that lines up with usage outside OpenStack, in 
the sorts of ways that our users would be familiar with[1].  In this case, a 
grep of help output of even 'passthr' will find the match.

Hopefully this all makes enough sense that we can add it as a guideline to the 
OSC docs.  Feedback welcome.

Thanks
dt

[0] Where 'proper' is usually North American English, for whatever definition 
of that we have. This is totally due to me not thinking far enough ahead 4 
years ago...

[1] Cases like "all other clouds use this term" or "it is the common way to 
refer to this resource in the networking field" have been used in the past.

--
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new] ironic-staging-drivers 0.3.0 release

2016-08-29 Thread no-reply
We are glowing to announce the release of:

ironic-staging-drivers 0.3.0: A project used to hold out-of-tree
ironic drivers

With source available at:

http://git.openstack.org/cgit/openstack/ironic-staging-drivers

Please report issues through launchpad:

http://bugs.launchpad.net/ironic-staging-drivers

For more details, please see below.

Changes in ironic-staging-drivers 0.2.0..0.3.0
--

f3698f5 Fix iboot mock
3ca96ed Mock the 'libvirt' import on tests
0eef2ea Remove kwargs from the vendor passthru continue_deploy tests
6e8fc9f Add iBoot driver
477fd76 Remove pass_deploy_info() from AMT driver
7a08bec Install amt driver requirements
afee94c Add devstack plugin


Diffstat (except docs and test files)
-

devstack/enabled-drivers.txt   |  11 +
devstack/plugin.sh |  64 
driver-requirements.txt|   6 -
ironic_staging_drivers/amt/other-requirements.sh   |   3 +
ironic_staging_drivers/amt/vendor.py   |  13 +-
ironic_staging_drivers/iboot/__init__.py   |  81 
ironic_staging_drivers/iboot/power.py  | 287 ++
ironic_staging_drivers/libvirt/power.py|   3 +-
.../libvirt/python-requirements.txt|   1 +
requirements.txt   |   1 +
setup.cfg  |   3 +
test-requirements.txt  |   3 -
21 files changed, 1065 insertions(+), 68 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 93398a8..c810371 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13,0 +14 @@ jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
+oslo.service>=1.10.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index dfc5d2b..5974613 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -19,3 +18,0 @@ mock>=1.2 # BSD
-
-# libvirt driver requires libvirt-python
-libvirt-python>=1.2.5 # LGPLv2+



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 30th at 19:00 UTC

2016-08-29 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday August 30th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-23-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-23-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-23-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][documentation] openstack-doc-tools 1.0.1 release

2016-08-29 Thread no-reply
We are delighted to announce the release of:

openstack-doc-tools 1.0.1: Tools for OpenStack Documentation

With source available at:

http://git.openstack.org/cgit/openstack/openstack-doc-tools

Please report issues through launchpad:

http://bugs.launchpad.net/openstack-manuals

For more details, please see below.

Changes in openstack-doc-tools 1.0.0..1.0.1
---

a9b1818 Fix install-guide draft translated publishing
8e756ad [cli-reference] add gnocchi subcommands


Diffstat (except docs and test files)
-

bin/doc-tools-check-languages   | 11 ++-
os_doc_tools/resources/clients.yaml | 37 -
2 files changed, 42 insertions(+), 6 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam report

2016-08-29 Thread Loo, Ruby
Hi,

Here is this week's subteam report for Ironic. As usual, this is pulled 
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff between 22 Aug 2016 and 29 Aug 2016)
- Ironic: 234 bugs (+11) + 214 wishlist items (+3). 33 new (+7), 170 in 
progress (+6), 0 critical (-1), 34 high and 17 incomplete (+1)
- Inspector: 10 bugs (+1) + 20 wishlist items. 1 new (+1), 9 in progress, 0 
critical, 1 high and 2 incomplete
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 0 high

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
* trello: 
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- Patch to allow constraints to be used for ramdisk: 
https://review.openstack.org/358855

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- Spec needs reviews: https://review.openstack.org/#/c/294995/
- Volume connector information revisions to be rebased this week 
(https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526231)

OpenStackClient plugin for ironic (dtantsur, rloo)
==
* trello: https://trello.com/c/ckqtq3kG/16-openstackclient-plugin-for-ironic
- patches for review: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526479
- 'passthru' vs 'passthrough' for eg 'openstack baremetal node passthru list'. 
Going to only use 'passthru' : 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102449.html
- providing 'openstack baremetal node list --chassis' & 'openstack baremetal 
port list --node'. Should we provide 'ironic node list --chassis' and 'ironic 
port list --node' ? : 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102461.html
- some discrepancies/bugs wrt existing osc commands, will push up some patches 
(hopefully in next 2 days) [rloo]

Notifications (mariojv)
===
* trello: https://trello.com/c/MD8HNcwJ/17-notifications
- 8/29/2016 update
- Notification base classes and docs patch merged
- Power state notification patch needs reviews

Keystone policy support (JayF, devananda)
=
* trello: https://trello.com/c/P5q3Es2z/15-keystone-policy-support
- patch to apply policy to /heartbeat - https://review.openstack.org/353696 
(more related to agent API promotion)
- https://review.openstack.org/#/c/350177/ (Add test to ensure policy is always 
authorized)
- otherwise done!

Active node creation (TheJulia)
===
* trello: https://trello.com/c/BwA91osf/22-active-node-creation
- New revision of the tempest tests should be expected later today.

Serial console (yossy, hshiina, yuikotakadamori)

* trello: https://trello.com/c/nm3I8djr/20-serial-console
- documentation: merged https://review.openstack.org/#/c/293872/
- nova patch: needs review https://review.openstack.org/#/c/328157/  (-2ed 
again). Will have to wait til Ocata.

Enhanced root device hints (lucasagomes)

* trello: https://trello.com/c/f9DTEvDB/21-enhanced-root-device-hints
- Two patches for ironic-lib ready to review (they address the white space 
problem from last week): https://review.openstack.org/#/c/348953/1 and 
https://review.openstack.org/#/c/358000/

Inspector (dtansur)
===
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- no updates, releases are coming soon
- states patch needs deciding whether to land before Feature freeze 
https://review.openstack.org/#/c/348943/ (milan)

Bifrost (TheJulia)
==
- Will be cutting a release this week.

.

Until next week (or the week after, if next week's meeting doesn't happen),
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
~


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] deprecated commands (was Re: Upcoming OpenStackClient 3.0 Release (Monday))

2016-08-29 Thread Dean Troyer
On Mon, Aug 29, 2016 at 1:17 PM, Ruby Loo  wrote:

> For the deprecated commands -- will they be supported forever, or is there
> some deprecation policy that OSC follows? In ironic, we have three OSC
> commands that we've deprecated, but we don't know when/if we can ever
> delete them.
>

Deprecated things are maintained in OSC for a minimum of one year (two
major releases for the current schedules) following the first release of
deprecation. So the commands mentioned here will be removed no sooner than
late August 2017.  In practice we tend to be conservative on these and do
not always do them immediately.

Steve has said in the past that we hold for two major releases[0], however
OSC does not use the service release cycle and that can be confused with
two major OSC releases, which have no timeframe associated with them.

dt

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094841.html

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][OpenStackClient] deprecated commands (was Re: Upcoming OpenStackClient 3.0 Release (Monday))

2016-08-29 Thread Ruby Loo
Hi,

On Sun, Aug 21, 2016 at 9:08 PM, Dean Troyer  wrote:

> We're excited to pre-announce the release of OSC 3.0.0. The release is
> expected to be approved by the release team during their working hours on
> Monday 22 Aug 2016.
>
> This is a **huge** release, and we shuffled things around so we've bumped
> our major version. We tried our darndest to not break anything, and where
> applicable we deprecated things instead. We have a small number of known,
> documented breaking changes, which is why we did a major version bump,
> please keep this in mind when upgrading.
> ...
>
The known breaking changes are:
>   - The `ip floating` commands have been renamed to `floating ip` -- check
> the help output for details.  The old commands are still present but
> deprecated and no longer appear in help output.
>
>
For the deprecated commands -- will they be supported forever, or is there
some deprecation policy that OSC follows? In ironic, we have three OSC
commands that we've deprecated, but we don't know when/if we can ever
delete them.

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #91

2016-08-29 Thread Emilien Macchi
Hi,

If you have any topic to add for this week, please use the etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160830

See you tomorrow,

On Tue, Aug 23, 2016 at 1:08 PM, Iury Gregory  wrote:
> No topic/discussion in our agenda, we cancelled the meeting, see you next
> week!
>
>
>
> 2016-08-22 16:19 GMT-03:00 Iury Gregory :
>>
>> Hi Puppeteers!
>>
>> We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4
>>
>> Here's a first agenda:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160823
>>
>> Feel free to add topics, and any outstanding bug and patch.
>>
>> See you tomorrow!
>> Thanks,
>
>
>
>
> --
>
> ~
> Att[]'s
> Iury Gregory Melo Ferreira
> Master student in Computer Science at UFCG
> E-mail:  iurygreg...@gmail.com
> ~
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Requesting FFE for DPDK and SR-IOV Automation

2016-08-29 Thread Saravanan KR
Hello,

We are working on DPDK and SR-IOV Automation in TripleO. We are at the
last leg of the set of patches pending to be merged:


DPDK (waiting for ovb ha and noha CI):
https://review.openstack.org/#/c/361238/ (THT) with +2 and +1s
https://review.openstack.org/#/c/327705/ (THT) lost workflow due to conflict

SR-IOV (waiting for review):
https://review.openstack.org/#/c/361350/ (puppet-tripleo)
https://review.openstack.org/#/c/361430/ (THT)


Both changes are low impact and only applicable if the respective
feature is enabled. If these changes don't go through today
(considering the long CI queue), we will require an FFE (for n3). Please let us
know if you need more details.

Regards,
Saravanan KR

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos]

2016-08-29 Thread Ihar Hrachyshka

Ofer Ben-Yacov  wrote:


Hi.

I was asked to look for a solution / develop a traffic control component.
The idea is that a provider / tenant will be able to enforce rate limits, do
traffic shaping and prioritize traffic from several VMs (as a policy) - not
like the QoS in Neutron now, which can only do this on a single VM port.


My initial thought is to do it using SFC to route the traffic to a  
Traffic Control service.


You mean classifier?



Do any of you know if there is someone currently developing a project
like that that I could join?


If no such work is being done, would any of you like to join in developing
such a project?


Do you have suggestions other than SFC for approaching this?


There is an RFE to add traffic control primitives to Neutron:

https://bugs.launchpad.net/neutron/+bug/1476527

That said, I am yet to see any patches to implement that. You are welcome  
to join the folks who stepped in to implement it. I think your call for SFC  
is generally correct since SFC and TC communities seem to overlap because  
SFC needs to classify traffic.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread HU, BIN

Please see inline [BH526R].

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, August 29, 2016 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/27/2016 11:16 AM, HU, BIN wrote:
> The challenge in OpenStack is how to enable the innovation built on top of 
> OpenStack.

No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the innovation 
of a jet engine.

[BH526R] True. 87 gas or diesel certainly cannot be used in any jet engine. 
While Jet A-1 and Jet B fuel are widely used for jet engine today, innovation 
of a new generation of jet engine may require an innovation of new type of 
aviation fuel.

> So telco use cases are not only the innovation built on top of 
> OpenStack. Instead, telco use cases, e.g. Gluon (NFV networking), vCPE 
> Cloud, Mobile Cloud, Mobile Edge Cloud, bring the needed requirements 
> for innovation in OpenStack itself. If OpenStack doesn't address those 
> basic requirements,

That's the thing, Bin, those are *not* "basic" requirements. The Telco vCPE and 
Mobile "Edge cloud" (hint: not a cloud) use cases are asking for fundamental 
architectural and design changes to the foundational components of OpenStack. 
Instead of Nova being designed to manage a bunch of hardware in a relatively 
close location (i.e. a datacenter or multiple datacenters), vCPE is asking for 
Nova to transform itself into a micro-agent that can be run on an Apple Watch 
and do things in resource-constrained environments that it was never built to 
do.

[BH526R] So we have 2 choices here - either to explicitly exclude telco 
requirement from OpenStack, and clearly indicate that telco needs to work on 
its own "telco stack"; or to allow telco to innovate within OpenStack through 
perhaps a new type of "telco nova" and/or "telco Neutron". Which way do you 
suggest?

And, honestly, I have no idea what Gluon is trying to do. Ian sent me some 
information a while ago on it. I read it. I still have no idea what Gluon is 
trying to accomplish other than essentially bypassing Neutron entirely. That's 
not "innovation". That's subterfuge.

[BH526R] Thank you for recognizing you don't know Gluon. Certainly the 
perception of "bypassing Neutron entirely" is incorrect. You are very welcome 
to join our project and meeting so that you can understand more of what Gluon 
is. We are also happy to set up specific meetings with you to discuss it too. 
Just let me know which way prefer. We are looking for you to participate in 
Gluon project and meeting.

[BH526R] On the other hand, I also try to understand why "bypassing Neutron 
entirely" is not an innovation. Neutron is not perfect. (I don't mean Gluon 
here, but) if there is an innovation that can replace Neutron entirely, 
everyone should be happy. Just like automobile bypassed carriage wagon entirely.

> the innovation will never happen on top of OpenStack.

Sure it will. AT&T and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution mechanism, 
that's all. The OpenStack community shouldn't be relied upon to create software 
that isn't applicable to general cloud computing and cloud management platforms.

[BH526R] If I understand correctly, this suggestion excludes telco from 
OpenStack entirely. That's fine.

> An example is - self-driving car is built on top of many technologies, such 
> as sensor/camera, AI, maps, middleware etc. All innovations in each 
> technology (sensor/camera, AI, map, etc.) bring together the innovation of 
> self-driving car.

Yes, indeed, but the people who created the self-driving car software didn't 
ask the people who created the cameras to write the software for them that does 
the self-driving.

[BH526R] It's actually the other way around. Furthermore, the camera/sensor 
industry does see the need, and VC funding has increased dramatically for 
investment in the camera/sensor, map, and AI areas. Startups in those areas are 
among the fastest growing. Those investments and innovations accelerate the 
maturity of self-driving cars.

> WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on 
> top of OpenStack.

You are defining "innovation" in an odd way, IMHO. "Innovation" for the vCPE 
use case sounds a whole lot like "rearchitect your entire software stack so 
that we don't have to write much code that runs on set-top boxes."

[BH526R] Certainly it is a misunderstanding. "Rearchitecting" may be needed. 
However, if the "telco Nova" and "telco Neutron" concept and components can be 
allowed so that we telcos can innovate within OpenStack, we will write the code 
and do the rest of the work. (But the prior suggestion excludes us telcos 
entirely, if I understand correctly.)

Just being honest,
-jay

> Thanks
> Bin
> -Original Message-
> From: Edward Leafe [mailto:e...@leafe.com]
> 

Re: [openstack-dev] [Neutron][Nova] Neutron mid-cycle summary report

2016-08-29 Thread Martin Hickey
Not a bother. Great to have the Neutrinos in Cork! :-)




From:   "Armando M." 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Martin Hickey/Ireland/IBM@IBMIE
Date:   27/08/2016 01:14
Subject:[Neutron][Nova] Neutron mid-cycle summary report



Hi Neutrinos,

For those of you who couldn't join in person, please find a few notes below
to capture some of the highlights of the event.

I would like to thank everyone who helped me put this report together,
and everyone who helped make this mid-cycle a fruitful one.

I would also like to thank IBM, and the individual organizers who made
everything go smoothly. In particular Martin, who put up with our moody
requests: thanks Martin!!

Feel free to reach out/add if something is unclear, incorrect or
incomplete.

Cheers,
Armando

~~~

We touched on these topics (as initially proposed on
https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems)

  Keystone v3 and project-id adoption:
dasm and amotoki have been working on making the Neutron server
process project-id correctly [1]. Looking at the spec [2], we
are halfway through completing the DB migration, becoming
Keystone v3 compliant, and updating the client bindings
[3].
  [1] https://review.openstack.org/#/c/357977/
  [2] https://review.openstack.org/#/c/257362/
  [3] https://review.openstack.org/#/q/topic:bp/keystone-v3
  Neutron-lib:
HenryG, dougwig and kevinbenton worked out a plan to get
the common_db_mixin into neutron-lib. Because of the risk of
regression, this is being deferred until Ocata opens up.
However, simpler changes like the model_base move to lib were
agreed on and merged.
A plan to provide test support was discussed. The current
strategy involves providing test base classes in lib (this
reverses the stance conveyed in Austin). The usual steps
involved are making the currently private classes public,
ensuring the lib's copies are up-to-date with core
neutron, and deprecating the ones located in Neutron.
rtheis and armax worked on having networking-ovn test
periodically against neutron-lib [1,2,3].
  [1] https://review.openstack.org/#/c/357086/
  [2] https://review.openstack.org/#/c/359143/
  [3] https://review.openstack.org/#/c/357079/
A tool (tools/migration_report.sh) helps project teams determine
the level of dependency they have with Neutron. It should be
improved to report the exact offending imports.
Right now neutron-lib 0.4.0 is released and available in
global-requirements/upper-constraints.
  Objects and hitless upgrades:
Ihar gave the team an overview and status update [1]
There was a fruitful discussion that hopefully set the way
forward for Ocata. The discussed plan was to start Ocata with
the expectation that no new contract scripts are landing in
Ocata, and to revisit the requirement later if for some reason
we see any issue with applying the requirement in practice.
Some work was done to deliver necessary objects for
push-notifications. Patches up for review. Some review cycles
were spent to work on landing patches moving model definitions
under neutron/db/models
  [1]
  
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101838.html
  OSC transition:
rtheis gave an update to the team on the state of the
transition. Core resource commands are all available through
OSC; QoS, Metering and *-aaS are still not converted.
There is some confusion on how to tackle openstacksdk support.
We discussed the future goal of python binding of Networking
API. OSC uses OpenStack SDK for network commands and Neutron
OSC plugin uses python bindings from python-neutronclient. A
question is which project developers who add new features should
implement against: the OpenStack SDK, python-neutronclient, or both?
There was no conclusion at the mid-cycle. It is not specific to
neutron. A similar situation can happen for nova, cinder and
other projects, and we need to raise it to the community.
Ocata is going to be the first release where the neutronclient
CLI is officially deprecated. It may take us more than the
usual two cycles to remove it altogether, but that's a signal
to developers and users to seriously develop against OSC, and
report 

[openstack-dev] [ironic] should we provide 'ironic node-list --chassis' and 'ironic port-list --node' commands?

2016-08-29 Thread Loo, Ruby
Hi,

While working on the openstackclient plugin commands for ironic, I was thinking 
about the equivalents for 'ironic chassis-node-list' (nodes that are part of a 
specified chassis) and 'ironic node-port-list' (ports that are part of a 
specified node). It didn't make sense to me to have an 'openstack baremetal 
chassis node list', since a 'chassis' and a 'node' are two different objects in 
osc lingo and we already have 'openstack baremetal chassis xx' and 'openstack 
baremetal node yy' commands. Furthermore, our REST API supports 'GET 
/v1/nodes?chassis=c1' and 'GET /v1/ports?node=n1'.

So I proposed 'openstack baremetal node list --chassis' and 'openstack 
baremetal port list --node' [1]. To implement this, I need to enhance our 
corresponding python APIs. The question I have is whether we want to only 
enhance the python API, or also provide 'ironic node-list --chassis' and 
'ironic port-list --node' commands. The latter is being proposed [2] and coded 
at [3]. Doing this would mean two different ironic CLIs to do the same thing, 
but also provide a more obvious 1:1 correspondence between ironic & osc 
commands, and between ironic CLI and python API.
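
(Illustrative usage, with c1 and n1 as placeholder UUIDs -- the first two
commands are the proposed osc forms from [1], the last two are what [2][3]
would add to the ironic CLI:

    openstack baremetal node list --chassis c1
    openstack baremetal port list --node n1

    ironic node-list --chassis c1
    ironic port-list --node n1
)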

Thoughts?

It'd be great if we could decide in the next day or so, in order to get the 
osc-related commands into the client this week for the Newton release.

--ruby

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html#openstack-baremetal-node
[2] https://launchpad.net/bugs/1616242
[3] https://review.openstack.org/#/c/359520/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Jay Faulkner

On Aug 29, 2016, at 8:19 AM, Dean Troyer 
> wrote:

On Mon, Aug 29, 2016 at 9:41 AM, Loo, Ruby 
> wrote:
I did this because 'passthrough' is more English than 'passthru' and I thought 
that was the 'way to go' in osc. But some folks wanted it to be 'passthru' 
because in ironic, we've been calling them 'passthru' since day 2.

Our default rule is to use proper spellings and not abbreviations[0].  The 
exceptions we have made are due to either a) significant existing practice in 
the industry (outside OpenStack, mostly in the network area so far); and b) 
when the user experience is clearly improved.

To be clear: thru is a valid English word, in just about every dictionary I've 
checked. In fact, some evidence shows it predates "through" as a word. I agree 
with other folks who have posted on the mailing list that keeping "passthru" is 
going to be more clear to operators of ironic than changing it to "passthrough" 
in this single context.

Thanks,
Jay Faulkner
OSIC


You might notice that calling out prior OpenStack usage is absent from that 
list.  One of the tenets of OSC from the start is to look first at user 
experience and identifying a _single_ set of terminology.  An existing practice 
can fall under (b) when it is compelling overall, and is an easier case to make 
when there is no competing OSC usage, or other OSC usage matches.

Unfortunately, I wasn't able to make everyone happy because someone else thinks 
that we shouldn't be providing two different openstack commands that provide 
the same functionality. (They're fine with either one, just not both.)

I agree with not aliasing commands out of the box.  We'll do that for 
deprecations, and are looking at a generalize alias method for other reasons, 
but on initial implementation I would prefer to not do this.

What do the rest of the folks think? Some guidance from the OpenStackClient 
folks would be greatly appreciated.

I would suggest you pick the one that lines up with usage outside OpenStack, in 
the sorts of ways that our users would be familiar with[1].  In this case, a 
grep of help output of even 'passthr' will find the match.

Hopefully this all makes enough sense that we can add it as a guideline to the 
OSC docs.  Feedback welcome.

Thanks
dt

[0] Where 'proper' is usually North American English, for whatever definition 
of that we have. This is totally due to me not thinking far enough ahead 4 
years ago...

[1] Cases like "all other clouds use this term" or "it is the common way to 
refer to this resource in the networking field" have been used in the past.

--
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-29 Thread Doug Hellmann
Excerpts from Tripp, Travis S's message of 2016-08-29 15:29:25 +:
> Thanks, Doug.
> 
> We are planning to tag Searchlight and Searchlight-UI later this week.

The item I was expecting to see a release for already is searchlight's
client library. The deadline for that update is also this week.

Doug

> 
> -Travis
> 
> 
> On 8/29/16, 8:50 AM, "Doug Hellmann"  wrote:
> 
> We have several projects using the cycle-with-intermediary release
> model for which we have not had any releases yet this cycle. Please
> consider a release this week, and be aware that we need a release
> by the final deadline in order to consider these projects as part
> of Newton (see [1] for details).
> 
> Thanks,
> Doug
> 
> 
> python-barbicanclient
> python-brick-cinderclient-ext
> cloudkitty
> cloudkitty-dashboard
> python-cloudkittyclient
> bifrost
> magnum
> magnum-ui
> monasca-transform
> python-searchlightclient
> senlin-dashboard
> python-solumclient
> solum
> solum-dashboard
> solum-infra-guestagent
> python-swiftclient
> tripleo-quickstart
> tripleo-ui
> watcher-dashboard
> networking-hyperv
> 
> [1] 
> http://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html#requirements
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-29 Thread Hongbin Lu
Same for Magnum. We can tag a release of magnum and magnum-ui later this 
week, and we might do another release (possibly the final release) before the 
final RC deadline (Sep 26-30).

Best regards,
Hongbin

> -Original Message-
> From: Tripp, Travis S [mailto:travis.tr...@hpe.com]
> Sent: August-29-16 11:29 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev]
> [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][search
> light][senlin][solum][swift][tripleo][watcher][winstackers] tags needed
> to be considered part of Newton
> 
> Thanks, Doug.
> 
> We are planning to tag Searchlight and Searchlight-UI later this week.
> 
> -Travis
> 
> 
> On 8/29/16, 8:50 AM, "Doug Hellmann"  wrote:
> 
> We have several projects using the cycle-with-intermediary release
> model for which we have not had any releases yet this cycle. Please
> consider a release this week, and be aware that we need a release
> by the final deadline in order to consider these projects as part
> of Newton (see [1] for details).
> 
> Thanks,
> Doug
> 
> 
> python-barbicanclient
> python-brick-cinderclient-ext
> cloudkitty
> cloudkitty-dashboard
> python-cloudkittyclient
> bifrost
> magnum
> magnum-ui
> monasca-transform
> python-searchlightclient
> senlin-dashboard
> python-solumclient
> solum
> solum-dashboard
> solum-infra-guestagent
> python-swiftclient
> tripleo-quickstart
> tripleo-ui
> watcher-dashboard
> networking-hyperv
> 
> [1] http://governance.openstack.org/reference/tags/release_cycle-
> with-intermediary.html#requirements
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-29 Thread Tripp, Travis S
Thanks, Doug.

We are planning to tag Searchlight and Searchlight-UI later this week.

-Travis


On 8/29/16, 8:50 AM, "Doug Hellmann"  wrote:

We have several projects using the cycle-with-intermediary release
model for which we have not had any releases yet this cycle. Please
consider a release this week, and be aware that we need a release
by the final deadline in order to consider these projects as part
of Newton (see [1] for details).

Thanks,
Doug


python-barbicanclient
python-brick-cinderclient-ext
cloudkitty
cloudkitty-dashboard
python-cloudkittyclient
bifrost
magnum
magnum-ui
monasca-transform
python-searchlightclient
senlin-dashboard
python-solumclient
solum
solum-dashboard
solum-infra-guestagent
python-swiftclient
tripleo-quickstart
tripleo-ui
watcher-dashboard
networking-hyperv

[1] 
http://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html#requirements

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Dean Troyer
On Mon, Aug 29, 2016 at 9:41 AM, Loo, Ruby  wrote:

> I did this because 'passthrough' is more English than 'passthru' and I
> thought that was the 'way to go' in osc. But some folks wanted it to be
> 'passthru' because in ironic, we've been calling them 'passthru' since day
> 2.


Our default rule is to use proper spellings and not abbreviations[0].  The
exceptions we have made are due to either a) significant existing practice
in the industry (outside OpenStack, mostly in the network area so far); and
b) when the user experience is clearly improved.

You might notice that calling out prior OpenStack usage is absent from that
list.  One of the tenets of OSC from the start is to look first at user
experience and identifying a _single_ set of terminology.  An existing
practice can fall under (b) when it is compelling overall, and is an easier
case to make when there is no competing OSC usage, or other OSC usage
matches.

Unfortunately, I wasn't able to make everyone happy because someone else
> thinks that we shouldn't be providing two different openstack commands that
> provide the same functionality. (They're fine with either one, just not
> both.)
>

I agree with not aliasing commands out of the box.  We'll do that for
deprecations, and are looking at a generalize alias method for other
reasons, but on initial implementation I would prefer to not do this.


> What do the rest of the folks think? Some guidance from the
> OpenStackClient folks would be greatly appreciated.
>

I would suggest you pick the one that lines up with usage outside
OpenStack, in the sorts of ways that our users would be familiar with[1].
In this case, a grep of help output of even 'passthr' will find the match.

Hopefully this all makes enough sense that we can add it as a guideline to
the OSC docs.  Feedback welcome.

Thanks
dt

[0] Where 'proper' is usually North American English, for whatever
definition of that we have. This is totally due to me not thinking far
enough ahead 4 years ago...

[1] Cases like "all other clouds use this term" or "it is the common way to
refer to this resource in the networking field" have been used in the past.

-- 
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-29 Thread Zane Bitter

On 24/08/16 20:37, Jay Pipes wrote:

On 08/24/2016 04:26 AM, Peter Willis wrote:

Colleagues,

I'd like to confirm that scalability and multi-site operations are key
to BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we
will have compute highly distributed around the network (from thousands
to millions of sites). BT would therefore support a Massively
Distributed WG and/or work on scalability and multi-site operations in
the Architecture WG.


Love all the TLAs.


I think you mean ETLAs ;)

It seems to be an unfortunate occupational hazard of working in 
networking that over time one loses the ability to communicate with 
people using, you know, words. (I used to work in networking, but the 
good news is I'm still optimistic the damage is reversible ;)



I've asked this before to numerous Telco product managers and engineers,
but I've yet to get a solid answer from any of them, so I'll repeat the
question here...

How is vCPE a *cloud* use case?

From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
customers) and home networking systems (broadband connectivity to a
local central office or point of presence, etc) be able run on virtual
machines to make deployment and management of new applications easier.
Since all those home routers and set-top boxen are essentially just
Linux boxes, the infrastructure seems to be there to make this a
cost-savings reality for Telcos. [1]


So I just heard of this today and looked it up. And unsurprisingly the 
explanations were mostly unclear and sometimes conflicting. (If you want 
to marvel at a rare instance of perfection in the genre of complete 
gibberish, check out http://www.telco.com/index.php?page=vcpe) However, 
I didn't come away with the same understanding as you.


My understanding is that they're taking stuff which used to run on edge 
devices (i.e. your home router or set-top box) and instead running them 
in the cloud:


http://www.nec.com/en/global/solutions/tcs/vcpe/index.html
http://searchsdn.techtarget.com/definition/vCPE-virtual-customer-premise-equipment

Basically as last-mile networks get faster, the bottleneck is no longer 
edge network bandwidth but the flexibility of the edge devices. So the 
idea, as I understand it, is to not run _more_ on them but to run _less_ 
and make use of the network bandwidth available to move a bunch of 
services into the cloud where they can be more flexible.


(Honestly, this sounds like the most cloud-y use case since those 
thermostats where you can't turn on the air conditioning without asking 
Google what they think about it first.)


Where I'm guessing this differs from other cloud use cases is that you 
want the newly-virtualised services running as close as possible to the 
edge. The user is essentially making the provider part of their layer 2 
network, so there's a number of drawbacks to having all of the 
virtualised services running in a single centralised cloud:


- It'd add a ton of latency at a point where applications aren't expecting it.
- It'd start pushing some of your local traffic over the core network, 
where bandwidth is still very much scarce.
- It's really hard to keep a large number of layer 2 networks segregated 
from each other all the way through the core network (Ethernet gives you 
only 4094 VLAN IDs to play with).


So I'd imagine that what they want to do is run a small cluster of Nova 
compute servers in e.g. your local telephone exchange, plus keep very 
tight control over how the workloads running on them are connected to 
actual physical networks. Then think about how many telephone exchanges 
there are in, say, Britain and it's obvious why they are interested in 
ensuring OpenStack can cope with massively distributed architectures.


Hopefully somebody who had heard of this stuff before today will jump in 
and correct all of the incorrect assumptions I have made. Remember: use 
your words! :P


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Tim Bell

> On 29 Aug 2016, at 16:41, Loo, Ruby  wrote:
> 
> Hi,
> 
> In ironic, we have these ironic CLI commands:
> - ironic node-vendor-passthru (calls the specified passthru method)
> - ironic node-get-vendor-passthru-methods (lists the available passthru 
> methods)
> 
> For their corresponding openstackclient plugin commands, we (I, I guess) have 
> proposed [1]:
> - openstack baremetal node passthrough call
> - openstack baremetal node passthrough list
> 
> I did this because 'passthrough' is more English than 'passthru' and I 
> thought that was the 'way to go' in osc. But some folks wanted it to be 
> 'passthru' because in ironic, we've been calling them 'passthru' since day 2. 
> To make everyone happy, I also proposed (as aliases):
> 
> - openstack baremetal node passthru call
> - openstack baremetal node passthru list
> 

flavor has set the precedent of using americanised spellings (sorry, 
americanized). The native English are used to IT terms with american spellings.

So, I’d suggest only doing passthru and dropping the English alias would be OK.

Tim

> Unfortunately, I wasn't able to make everyone happy because someone else 
> thinks that we shouldn't be providing two different openstack commands that 
> provide the same functionality. (They're fine with either one, just not both.)
> 
> What do the rest of the folks think? Some guidance from the OpenStackClient 
> folks would be greatly appreciated.
> 
> --ruby
> 
> 
> [1] 
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Lucas Alvares Gomes
On Mon, Aug 29, 2016 at 3:41 PM, Loo, Ruby  wrote:
> Hi,
>
> In ironic, we have these ironic CLI commands:
> - ironic node-vendor-passthru (calls the specified passthru method)
> - ironic node-get-vendor-passthru-methods (lists the available passthru 
> methods)
>
> For their corresponding openstackclient plugin commands, we (I, I guess) have 
> proposed [1]:
> - openstack baremetal node passthrough call
> - openstack baremetal node passthrough list
>
> I did this because 'passthrough' is more English than 'passthru' and I 
> thought that was the 'way to go' in osc. But some folks wanted it to be 
> 'passthru' because in ironic, we've been calling them 'passthru' since day 2. 
> To make everyone happy, I also proposed (as aliases):
>
> - openstack baremetal node passthru call
> - openstack baremetal node passthru list
>
> Unfortunately, I wasn't able to make everyone happy because someone else 
> thinks that we shouldn't be providing two different openstack commands that 
> provide the same functionality. (They're fine with either one, just not both.)
>

I also don't like having two commands for the same functionality, it
sounds wrong to me. I think we should deprecate one of the two.

I don't have a strong opinion of which one we should deprecate but I
slightly prefer "passthru" because that's how we call it in the API.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][barbican][cinder][cloudkitty][ironic][magnum][monasca][searchlight][senlin][solum][swift][tripleo][watcher][winstackers] tags needed to be considered part of Newton

2016-08-29 Thread Doug Hellmann
We have several projects using the cycle-with-intermediary release
model for which we have not had any releases yet this cycle. Please
consider a release this week, and be aware that we need a release
by the final deadline in order to consider these projects as part
of Newton (see [1] for details).

Thanks,
Doug


python-barbicanclient
python-brick-cinderclient-ext
cloudkitty
cloudkitty-dashboard
python-cloudkittyclient
bifrost
magnum
magnum-ui
monasca-transform
python-searchlightclient
senlin-dashboard
python-solumclient
solum
solum-dashboard
solum-infra-guestagent
python-swiftclient
tripleo-quickstart
tripleo-ui
watcher-dashboard
networking-hyperv

[1] 
http://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html#requirements

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-08-29 Thread Matt Riedemann

On 8/29/2016 6:27 AM, tie...@vn.fujitsu.com wrote:

Hi Matt, Dan, Andrew,

@Matt: Hope you had a nice vacation.

For the feature Nova serial console support for Ironic [1][2], there are some 
good updates from the Ironic side. All our Ironic-side work [3][4][5] has been 
done; currently only the Nova patch still needs review.

Last week, I contacted Andrew and Dan about reviewing it, but both of them 
said they hadn't noticed that you removed the -2 from the patch. So, Matt, can 
you notify the Nova core team about that so Andrew and Dan can review it again?

[1] https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-support
[2] https://review.openstack.org/#/c/328157/  (Nova patch, in review)
[3] https://review.openstack.org/#/c/319505/  (Ironic spec, merged)
[4] https://review.openstack.org/#/c/328168/  (Ironic patch, merged)
[5] https://review.openstack.org/#/c/293873/  (Ironic patch, merged)

Thanks and Regards
TienDC

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
Sent: Thursday, July 07, 2016 3:16 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

On 7/5/2016 2:14 AM, tie...@vn.fujitsu.com wrote:

Hi folks,

I want to give more information about our nova patch for bp 
ironic-serial-console-support. The whole feature needs work to be done in Nova 
and Ironic. The nova bp [1] has been approved, and the Ironic spec [2] has been 
merged.

This nova patch [3] is simple; we got some reviews from Nova and Ironic core 
reviewers. The patches it depends on in Ironic are [4][5]; [4] will get merged 
soon and [5] is still in review.

Hope Nova core team considers adding this case to the exception list.

[1]
https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-supp
ort  (Nova bp, approved by dansmith) [2]
https://review.openstack.org/#/c/319505/  (Ironic spec, merged)

[3] https://review.openstack.org/#/c/328157/  (Nova patch, in review)
[4] https://review.openstack.org/#/c/328168/  (Ironic patch 1st, got
two +2, will get merged soon) [5]
https://review.openstack.org/#/c/293873/  (Ironic patch 2nd, in
review)

Thanks and Regards
Dao Cong Tien




When I looked last week, the nova change was dependent on multiple ironic 
patches which weren't merged yet, so it wasn't ready to go for the non-priority 
feature freeze. The ironic changes weren't all merged yet either when we were going 
over FFE candidates. So this is going to have to wait for Ocata.



Sorry, the -2 removal was a mistake. It happened when I removed myself 
as a reviewer from that change: I wasn't actively reviewing it, but it 
kept changing, so I wanted to turn off the email noise I was getting 
from it.


I've re-applied the -2 on 328157 since we're long past the non-priority 
feature freeze, the normal feature freeze is this week (9/1), and we need 
to focus on closing out priority work.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Jay Pipes

On 08/29/2016 05:53 AM, Andrew Laski wrote:

Personally I believe the cat is out of the bag on bdms overriding
flavors and we should just commit to that path and make it work well.
And for deployers who rely on flavors being the source of truth maybe we
provide them a policy check or some other mechanism to control
acceptable bdm values so that they can rely on flavor packing.


Yes, I agree with everything you wrote above.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Loo, Ruby
Hi,

In ironic, we have these ironic CLI commands:
- ironic node-vendor-passthru (calls the specified passthru method)
- ironic node-get-vendor-passthru-methods (lists the available passthru methods)

For their corresponding openstackclient plugin commands, we (I, I guess) have 
proposed [1]:
- openstack baremetal node passthrough call
- openstack baremetal node passthrough list

I did this because 'passthrough' is more English than 'passthru' and I thought 
that was the 'way to go' in osc. But some folks wanted it to be 'passthru' 
because in ironic, we've been calling them 'passthru' since day 2. To make 
everyone happy, I also proposed (as aliases):

- openstack baremetal node passthru call
- openstack baremetal node passthru list

Unfortunately, I wasn't able to make everyone happy because someone else thinks 
that we shouldn't be providing two different openstack commands that provide 
the same functionality. (They're fine with either one, just not both.)

What do the rest of the folks think? Some guidance from the OpenStackClient 
folks would be greatly appreciated.

--ruby


[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [os-vif] [neutron] Race in setting up linux bridge

2016-08-29 Thread Kevin Benton
Right, I was just pointing out that solving this in os-vif isn't enough in
this particular case since neutron can't use it, so it's slightly different
from the os-brick situation. We have to reason about what two different
code paths will do.

On Aug 29, 2016 6:41 AM, "Sean Dague"  wrote:

> On 08/29/2016 08:29 AM, Kevin Benton wrote:
> > Sort of. The neutron agent code doesn't use os-vif because the os-vif
> > devs indicated that neutron's vif plugging code wasn't a use case they
> > cared about [1].
> >
> > So if we did generalize os-vif to work with the neutron agents then it
> > would be two calling the same locked code. But at this point it's just
> > two versions of similar logic trying to do the same thing.
> >
> > 1. https://review.openstack.org/#/c/284209/
>
> Calling the same locked code wouldn't help, these are different services
> that in *almost* all real deployments are running under different user
> ids. Which means shared locks between them are basically not possible.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-29 Thread Thomas Herve
On Mon, Aug 29, 2016 at 3:16 PM, Steven Hardy  wrote:
> On Mon, Aug 29, 2016 at 07:07:09AM +0200, Thomas Herve wrote:
>> dict($.groupBy($.keys().toList()[0], $.values().toList()[0][0]))
>>
>> ought to work, I believe?
>
> So, as it turns out, my example above was bad, and groupBy only works if
> you have a list of maps with exactly one key, we actually need this:
>
>   # Example of tripleo format
>   # We need an output of
>   # "gnocchi_metricd_node_names": ["overcloud-controller-0"]
>   # "tripleo_packages_node_names": ["overcloud-controller-0", 
> "overcloud-compute-0"]
>   # "nova_compute_node_names": ["overcloud-compute-0"]
>   debug_tripleo:
> value:
>   yaql:
> expression: dict($.data.l.groupBy($.keys().toList()[0], 
> $.values().toList()[0][0]))
> data:
>   l:
> - "gnocchi_metricd_node_names": ["overcloud-controller-0"]
>   "tripleo_packages_node_names": ["overcloud-controller-0"]
> - "nova_compute_node_names": ["overcloud-compute-0"]
>   "tripleo_packages_node_names": ["overcloud-compute-0"]
>
> So, I'm back to wondering how we make the intermediate assignment of 
> tripleo_packages_node_names

Well I didn't know all the constraints :).

$.selectMany($.items()).groupBy($[0], $[1][0])

is another attempt. It won't work if you have more than one value per
key in the original data, but I think it will handle multiple keys.
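
For reference, dropping that expression into your debug_tripleo output (and
wrapping it in dict() as in the earlier examples, so we get a mapping back)
would look roughly like this -- untested, so treat it as a sketch:

  debug_tripleo:
    value:
      yaql:
        expression: dict($.data.l.selectMany($.items()).groupBy($[0], $[1][0]))
        data:
          l:
            - "gnocchi_metricd_node_names": ["overcloud-controller-0"]
              "tripleo_packages_node_names": ["overcloud-controller-0"]
            - "nova_compute_node_names": ["overcloud-compute-0"]
              "tripleo_packages_node_names": ["overcloud-compute-0"]

If it behaves as expected, that should give the merged mapping you listed at
the top, with tripleo_packages_node_names containing both node names, as long
as each key maps to a single-element list in the input.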

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Kostiantyn.Volenbovskyi
> -Original Message-
> From: Sylvain Bauza [mailto:sba...@redhat.com]
> Sent: Monday, August 29, 2016 2:43 PM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] Reconciling flavors and block device
> mappings
> >> While :
> >> #1 the first change about setting root_gb equals 0 in RequestSpec for
> >> making sure BFV instances are correctly using the DiskFilter is fine
> >> by me having it merged with a TODO/FIXME saying that the code would
> >> be expired once the scheduler uses the resource providers, #2, then
> >> the second patch about trying to look at the BDMs for DiskFilter is
> >> very wrong by me, because the Compute API shouldn't accept IMHO to
> >> ask for a flavor *and* a BDM with a related disk size different from
> >> the flavor. AFAIT, we should return a 40x (probably 409 or 400) for
> >> that instead of accepting it silently.
> >
> > Well, a flavor is always required when launching an instance. I wasn't
> > aware until recently that one could "override" the root GB (or
> > eph/swap) sizes in the API. Apparently, the API allows it, though,
> > even though the code ignores whatever was passed as the override
> > value. If the API supports it, I think the code should probably be
> > changed to override the size values to whatever the user entered, no?
> >
> I'm very sad to see this possible. FWIW, I think that the flavor should 
> always be
> the source of truth for knowing the asked disk size (if no, imagine all the
> conditionals in code...) and we should somehow reconcile what the flavor
> provided and what the user explicitly asked for.
> 
> Having a possibility to trample the flavor size seems to be a very bad UX (I 
> guess
> most of the operators don't think about that when understanding why this
> instance could have a different size from the related flavor) hence me 
> thinking
> we should rather discuss on a possible new microversion for returning a 400
> when both sizes are not identical.
Hi,

In the short term, I think many of the comments from
John/Tim/Andrew/Jay/Sylvain are not against the patches getting merged.

However, I somewhat disagree with Sylvain about 'returning 400 in case both
sizes are not identical' as the mid-term direction.

That would break scheduling for those who now use root disk = 0, and using
root disk = 0 for such instances should be not only an acceptable workaround
but also acceptable in the mid to long term, IMHO.
I disagree because [1] defines the root disk as 'Virtual root disk size in
gigabytes. This is an ephemeral disk that the base image is copied into. When
booting from a persistent volume it is not used'.
So it is written there that the root disk is an ephemeral disk and that it is
not used when booting from a persistent volume.
Booting from a volume (which, as I learnt in this thread, is more precisely
referred to as 'using a BDM') therefore shouldn't have any relation to the
root disk.
And the not-yet-merged [2] points in the same direction: "Therefore 0 should
only be used for volume booted instances or for testing purposes."
So operators should not wonder [Sylvain] why an instance's disk sizes can
differ from the related flavor.

I have no objection to improving flavors (/flavor specs) or to having another
mechanism to control the size of the (Cinder) volume used with a VM. That
sounds to me more like a Cinder area than a Nova one, but maybe I am wrong. I
would say it would complement the volume quota(s).
Introducing something like max.instances could be part of addressing that use
case, possibly together with other use cases.
That, however, sounds like a new blueprint.

BR, 
Konstantin

[1] http://docs.openstack.org/admin-guide/compute-flavors.html
[2] https://review.openstack.org/#/c/339034/ 


> The other option I could see could be to have the nested flavor in the
> RequestSpec be reconciling those two objects being different from the user-
> provided flavor (eg. say a flavor with swap=10G and a BDM with swap=1G, then
> RequestSpec.flavor would have swap=1G) but given we don't expose the
> RequestSpec objects on the API level, that still leaves operators possibly
> confused.
> 
> -Sylvain
> 
> 
> > -jay
> >
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Andrew Laski



On Mon, Aug 29, 2016, at 09:06 AM, Jordan Pittier wrote:
>
>
> On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng
>  wrote:
>> Hi, all
>>
>> Currently we have customer demands about adding parameter
>> "volume_type" to --block-device to provide the support of specified
>> storage backend to boot instance. And I find one newly drafted
>> Blueprint that aiming to address the same feature:
>> https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type
>> ;
>>
>> As I know this is kind of "proxy" feature for cinder and we don't
>> like it in general, but as the boot from volume functional was
>> already there, so maybe it is OK to support another parameter?
>>
>> So, my question is that what are your opinions about this in general?
>> Do you like it or it will not be able to got approved at all?
>>
>> Thanks,
>>
>> Kevin Zheng
>
> Hi,
> I think it's not a great idea. Not only for the reason you mention,
> but also because the "nova boot" command is already way to complicated
> with way to many options. IMO we should only add support for new
> features, not "features" we can have by other means, just for
> convenience.

I completely agree with this. However I have some memory of us
saying (in Austin?) that adding volume_type would be acceptable since
it's a clear oversight in the list of parameters for specifying a block
device. So while I greatly dislike Nova creating volumes and would
rather users pass in pre-created volume ids I would support adding this
parameter. I do not support continuing to add parameters if Cinder adds
parameters though.
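
For concreteness, a hypothetical invocation could look something like the
following -- note that volume_type here is purely the *proposed* addition and
does not exist in nova boot today, the other keys are the existing
--block-device syntax, and "fast-ssd" is just a made-up volume type name:

  nova boot my-server --flavor m1.small \
    --block-device source=image,id=<image-uuid>,dest=volume,size=20,bootindex=0,volume_type=fast-ssd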


>
>
> -
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Matt Riedemann

On 8/29/2016 8:29 AM, Andrew Laski wrote:


I completely agree with this. However I have some memory of us saying(in
Austin?) that adding volume_type would be acceptable since it's a clear
oversight in the list of parameters for specifying a block device. So
while I greatly dislike Nova creating volumes and would rather users
pass in pre-created volume ids I would support adding this parameter. I
do not support continuing to add parameters if Cinder adds parameters
though.




Correct, Chet Burgess brought this up on Friday in Austin and we agreed 
to let this in. I don't think a spec ever showed up in Newton for it though.


The notes were in this etherpad:

https://etherpad.openstack.org/p/nova-bfv-volume-type

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Maxim Nestratov

29-Aug-16 09:50, Zhenyu Zheng wrote:


Hi, all

Currently we have customer demand for adding a "volume_type" parameter to --block-device, to support specifying 
the storage backend when booting an instance. And I found one newly drafted Blueprint that aims to address the same 
feature: https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type


As I understand it, this is kind of a "proxy" feature for Cinder and we don't like those in general, but since the boot-from-volume 
functionality is already there, maybe it is OK to support another parameter?


So, my question is: what are your opinions about this in general? Do you like it, or will it not be able to get 
approved at all?


Thanks,

Kevin Zheng


Hi,

Do you need to boot from different volume types using a single image? If not, you can just set the image metadata property 
"image_cinder_img_volume_type=your_volume_type" to associate a certain image with a specific volume type.

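As a concrete sketch -- assuming the property Cinder reads from the image is
cinder_img_volume_type (the exact key is worth double-checking against the
Cinder docs for your release) -- setting it would look something like:

  openstack image set --property cinder_img_volume_type=<your_volume_type> <image-id>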

Best,
Maxim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] No meeting today - 08/29/2016

2016-08-29 Thread Ренат Ахмеров
Hi, we are cancelling the meeting today. Some people are on vacation, some are too 
busy.

Renat
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-29 Thread Steven Hardy
On Mon, Aug 29, 2016 at 07:07:09AM +0200, Thomas Herve wrote:
> On Sun, Aug 28, 2016 at 11:58 PM, Steven Hardy  wrote:
> > Hi all,
> >
> > I have a need to merge a list of maps of lists:
> >
> > heat_template_version: 2016-10-14
> >
> > outputs:
> >   debug:
> > value:
> >   yaql:
> > # dict(vms=>dict($.vms.select([$.name, $])))
> > expression: dict($.data.l.select([$.keys().toList()[0],
> > $.values().toList()[0]]))
> > data:
> >   l:
> > - a: [123]
> > - b: [123]
> > - a: [456]
> >
> >
> >
> > I want to end up with debug as:
> >
> >   a: [123, 456]
> >   b: [123]
> >
> > Perhaps we need a map_deep_merge function, but can this be done with yaql?
> >
> > I suspect it can, but can't currently figure out how the assignment to the
> > intermediate "a" value is supposed to work, any ideas on the cleanest
> > approach appreciated!
> 
> I believe you don't need the intermediate value, and can rely on what
> you'd do in Python with setdefault:
> 
> dict($.groupBy($.keys().toList()[0], $.values().toList()[0][0]))
> 
> ought to work, I believe?

So, as it turns out, my example above was bad, and groupBy only works if
you have a list of maps with exactly one key; we actually need this:

  # Example of tripleo format
  # We need an output of
  # "gnocchi_metricd_node_names": ["overcloud-controller-0"]
  # "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-compute-0"]
  # "nova_compute_node_names": ["overcloud-compute-0"]
  debug_tripleo:
value:
  yaql:
expression: dict($.data.l.groupBy($.keys().toList()[0], 
$.values().toList()[0][0]))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0"]
  "tripleo_packages_node_names": ["overcloud-controller-0"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]

So, I'm back to wondering how we make the intermediate assignment of 
tripleo_packages_node_names.

Thanks for the ideas - I think this probably does highlight the value of a 
map_deep_merge function
so perhaps that's something we can consider for ocata :)

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Jordan Pittier
On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng 
wrote:

> Hi, all
>
> Currently we have customer demands about adding parameter "volume_type" to
> --block-device to provide the support of specified storage backend to boot
> instance. And I find one newly drafted Blueprint that aiming to address the
> same feature: https://blueprints.launchpad.net/nova/+spec/support
> -boot-instance-set-store-type ;
>
> As I know this is kind of "proxy" feature for cinder and we don't like it
> in general, but as the boot from volume functional was already there, so
> maybe it is OK to support another parameter?
>
> So, my question is that what are your opinions about this in general? Do
> you like it or it will not be able to got approved at all?
>
> Thanks,
>
> Kevin Zheng
>

Hi,
I think it's not a great idea. Not only for the reason you mention, but
also because the "nova boot" command is already way too complicated with way
too many options. IMO we should only add support for new features, not
"features" we can have by other means, just for convenience.

-- 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Andrew Laski


On Mon, Aug 29, 2016, at 08:42 AM, Sylvain Bauza wrote:
> 
> 
> Le 29/08/2016 14:20, Jay Pipes a écrit :
> > On 08/29/2016 05:11 AM, Sylvain Bauza wrote:
> >> Le 29/08/2016 13:25, Jay Pipes a écrit :
> >>> On 08/26/2016 09:20 AM, Ed Leafe wrote:
>  On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:
> 
> > One other thing to note is that while a flavor constrains how much
> > local
> > disk is used it does not constrain volume size at all. So a user can
> > specify an ephemeral/swap disk <= to what the flavor provides but can
> > have an arbitrary sized root disk if it's a remote volume.
> 
>  This kind of goes to the heart of the argument against flavors being
>  the sole source of truth for a request. As cloud evolves, we keep
>  packing more and more stuff into a concept that was originally meant
>  to only divide up resources that came bundled together (CPU, RAM, and
>  local disk). This hasn’t been a good solution for years, and the
>  sooner we start accepting that a request can be much more complex
>  than a flavor can adequately express, the better.
> 
>  If we have decided that remote volumes are a good thing (I don’t
>  think there’s any argument there), then we should treat that part of
>  the request as being as fundamental as a flavor. We need to make the
>  scheduler smarter so that it doesn’t rely on flavor as being the only
>  source of truth.
> 
>  The first step to improving Nova is admitting we have a problem. :)
> >>>
> >>> FWIW, I agree with you on the above. The issue I had with the proposed
> >>> patches was that they would essentially be a hack for a short period
> >>> of time until the resource providers work standardized the way that
> >>> DISK_GB resources were tracked -- including for providers of shared
> >>> disk storage.
> >>>
> >>> I've long felt that flavors as a concept should be, as Chris so
> >>> adeptly wrote, "UI furniture" and should be decomposed into their
> >>> requisite lists of resource amounts, required traits and preferred
> >>> traits and that those decomposed parts are what should be passed to
> >>> the Compute API, not a flavor ID.
> >>>
> >>> But again, we're actively changing all this code in the resource
> >>> providers and qualitative traits patches so I warned about adding more
> >>> code that was essentially just a short-lived hack. I'd be OK adding
> >>> the hack code if there were some big bright WARNINGs placed in there
> >>> that likely the code would be removed in Ocata.
> >>>
> >>
> >> While :
> >> #1 the first change about setting root_gb equals 0 in RequestSpec for
> >> making sure BFV instances are correctly using the DiskFilter is fine by
> >> me having it merged with a TODO/FIXME saying that the code would be
> >> expired once the scheduler uses the resource providers,
> >> #2, then the second patch about trying to look at the BDMs for
> >> DiskFilter is very wrong by me, because the Compute API shouldn't accept
> >> IMHO to ask for a flavor *and* a BDM with a related disk size different
> >> from the flavor. AFAIT, we should return a 40x (probably 409 or 400) for
> >> that instead of accepting it silently.
> >
> > Well, a flavor is always required when launching an instance. I wasn't 
> > aware until recently that one could "override" the root GB (or 
> > eph/swap) sizes in the API. Apparently, the API allows it, though, 
> > even though the code ignores whatever was passed as the override 
> > value. If the API supports it, I think the code should probably be 
> > changed to override the size values to whatever the user entered, no?
> >
> I'm very sad to see this possible. FWIW, I think that the flavor should 
> always be the source of truth for knowing the asked disk size (if no, 
> imagine all the conditionals in code...) and we should somehow reconcile 
> what the flavor provided and what the user explicitly asked for.

At the heart of all of this is the fact that we've allowed people to
believe two different things: flavors are the source of truth, bdms can
be used to override flavors. But bdms only half override flavors because
they don't affect scheduling so people who believe the latter are
understandably trying to fix that. But we can't have it both ways, so we
need to have consensus about what we're supporting and then make it work
fully.

Personally I believe the cat is out of the bag on bdms overriding
flavors and we should just commit to that path and make it work well.
And for deployers who rely on flavors being the source of truth maybe we
provide them a policy check or some other mechanism to control
acceptable bdm values so that they can rely on flavor packing.
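
To make that last idea concrete: such a knob could, hypothetically, be one
more rule in nova's policy.json -- no such rule exists today, and the name
below is made up purely for illustration:

  "os_compute_api:servers:create:bdm_size_override": "rule:admin_api"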


> 
> Having a possibility to trample the flavor size seems to be a very bad 
> UX (I guess most of the operators don't think about that when 
> understanding why this instance could have a different size from the 
> related flavor) hence me thinking we 

Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Andrew Laski


On Mon, Aug 29, 2016, at 07:25 AM, Jay Pipes wrote:
> On 08/26/2016 09:20 AM, Ed Leafe wrote:
> > On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:
> >
> >> One other thing to note is that while a flavor constrains how much local
> >> disk is used it does not constrain volume size at all. So a user can
> >> specify an ephemeral/swap disk <= to what the flavor provides but can
> >> have an arbitrary sized root disk if it's a remote volume.
> >
> > This kind of goes to the heart of the argument against flavors being the 
> > sole source of truth for a request. As cloud evolves, we keep packing more 
> > and more stuff into a concept that was originally meant to only divide up 
> > resources that came bundled together (CPU, RAM, and local disk). This 
> > hasn’t been a good solution for years, and the sooner we start accepting 
> > that a request can be much more complex than a flavor can adequately 
> > express, the better.
> >
> > If we have decided that remote volumes are a good thing (I don’t think 
> > there’s any argument there), then we should treat that part of the request 
> > as being as fundamental as a flavor. We need to make the scheduler smarter 
> > so that it doesn’t rely on flavor as being the only source of truth.
> >
> > The first step to improving Nova is admitting we have a problem. :)
> 
> FWIW, I agree with you on the above. The issue I had with the proposed 
> patches was that they would essentially be a hack for a short period of 
> time until the resource providers work standardized the way that DISK_GB 
> resources were tracked -- including for providers of shared disk storage.
> 
> I've long felt that flavors as a concept should be, as Chris so adeptly 
> wrote, "UI furniture" and should be decomposed into their requisite 
> lists of resource amounts, required traits and preferred traits and that 
> those decomposed parts are what should be passed to the Compute API, not 
> a flavor ID.
> 
> But again, we're actively changing all this code in the resource 
> providers and qualitative traits patches so I warned about adding more 
> code that was essentially just a short-lived hack. I'd be OK adding the 
> hack code if there were some big bright WARNINGs placed in there that 
> likely the code would be removed in Ocata.

I'd like to clarify that there are two parts to the patches proposed.
One part is about bdm overrides influencing the scheduler, and the other
part is about proper resource tracking. I've attempted to punt on the
resource tracking part in this thread because I agree that we have a
proper solution on the way and the current proposals are workarounds.
There is some value in the workarounds though as they could be
backported to Mitaka.

> 
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Sylvain Bauza



Le 29/08/2016 14:43, Andrew Laski a écrit :


On Mon, Aug 29, 2016, at 08:20 AM, Jay Pipes wrote:

On 08/29/2016 05:11 AM, Sylvain Bauza wrote:

Le 29/08/2016 13:25, Jay Pipes a écrit :

On 08/26/2016 09:20 AM, Ed Leafe wrote:

On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:


One other thing to note is that while a flavor constrains how much
local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.

This kind of goes to the heart of the argument against flavors being
the sole source of truth for a request. As cloud evolves, we keep
packing more and more stuff into a concept that was originally meant
to only divide up resources that came bundled together (CPU, RAM, and
local disk). This hasn’t been a good solution for years, and the
sooner we start accepting that a request can be much more complex
than a flavor can adequately express, the better.

If we have decided that remote volumes are a good thing (I don’t
think there’s any argument there), then we should treat that part of
the request as being as fundamental as a flavor. We need to make the
scheduler smarter so that it doesn’t rely on flavor as being the only
source of truth.

The first step to improving Nova is admitting we have a problem. :)

FWIW, I agree with you on the above. The issue I had with the proposed
patches was that they would essentially be a hack for a short period
of time until the resource providers work standardized the way that
DISK_GB resources were tracked -- including for providers of shared
disk storage.

I've long felt that flavors as a concept should be, as Chris so
adeptly wrote, "UI furniture" and should be decomposed into their
requisite lists of resource amounts, required traits and preferred
traits and that those decomposed parts are what should be passed to
the Compute API, not a flavor ID.

But again, we're actively changing all this code in the resource
providers and qualitative traits patches so I warned about adding more
code that was essentially just a short-lived hack. I'd be OK adding
the hack code if there were some big bright WARNINGs placed in there
that likely the code would be removed in Ocata.


While :
#1 the first change about setting root_gb equals 0 in RequestSpec for
making sure BFV instances are correctly using the DiskFilter is fine by
me having it merged with a TODO/FIXME saying that the code would be
expired once the scheduler uses the resource providers,
#2, then the second patch about trying to look at the BDMs for
DiskFilter is very wrong by me, because the Compute API shouldn't accept
IMHO to ask for a flavor *and* a BDM with a related disk size different
from the flavor. AFAIT, we should return a 40x (probably 409 or 400) for
that instead of accepting it silently.

Well, a flavor is always required when launching an instance. I wasn't
aware until recently that one could "override" the root GB (or eph/swap)
sizes in the API. Apparently, the API allows it, though, even though the
code ignores whatever was passed as the override value. If the API
supports it, I think the code should probably be changed to override the
size values to whatever the user entered, no?

That's the question here. There's a lot of desire to have the overrides
be the values used in scheduling, since currently the flavor values are
used, but making that change potentially impacts how flavors are packed
onto hosts for some deployers. The main thing I want to get out of this
thread is if that's okay.

The sense I get so far is that it's okay to make the change to have bdm
values be passed to the scheduler, via RequestSpec modifications, and
deployers can rely upon CPU/RAM constraints to determine packing. So the
main thing to do is have good release notes about the change.



Like I said just before, that could be an option, but given we don't 
expose an instance-related RequestSpec, it means that operators showing 
an instance would literally wonder why its disk sizes could be 
different from the related flavor.


-Sylvain


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Andrew Laski


On Mon, Aug 29, 2016, at 08:20 AM, Jay Pipes wrote:
> On 08/29/2016 05:11 AM, Sylvain Bauza wrote:
> > Le 29/08/2016 13:25, Jay Pipes a écrit :
> >> On 08/26/2016 09:20 AM, Ed Leafe wrote:
> >>> On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:
> >>>
>  One other thing to note is that while a flavor constrains how much
>  local
>  disk is used it does not constrain volume size at all. So a user can
>  specify an ephemeral/swap disk <= to what the flavor provides but can
>  have an arbitrary sized root disk if it's a remote volume.
> >>>
> >>> This kind of goes to the heart of the argument against flavors being
> >>> the sole source of truth for a request. As cloud evolves, we keep
> >>> packing more and more stuff into a concept that was originally meant
> >>> to only divide up resources that came bundled together (CPU, RAM, and
> >>> local disk). This hasn’t been a good solution for years, and the
> >>> sooner we start accepting that a request can be much more complex
> >>> than a flavor can adequately express, the better.
> >>>
> >>> If we have decided that remote volumes are a good thing (I don’t
> >>> think there’s any argument there), then we should treat that part of
> >>> the request as being as fundamental as a flavor. We need to make the
> >>> scheduler smarter so that it doesn’t rely on flavor as being the only
> >>> source of truth.
> >>>
> >>> The first step to improving Nova is admitting we have a problem. :)
> >>
> >> FWIW, I agree with you on the above. The issue I had with the proposed
> >> patches was that they would essentially be a hack for a short period
> >> of time until the resource providers work standardized the way that
> >> DISK_GB resources were tracked -- including for providers of shared
> >> disk storage.
> >>
> >> I've long felt that flavors as a concept should be, as Chris so
> >> adeptly wrote, "UI furniture" and should be decomposed into their
> >> requisite lists of resource amounts, required traits and preferred
> >> traits and that those decomposed parts are what should be passed to
> >> the Compute API, not a flavor ID.
> >>
> >> But again, we're actively changing all this code in the resource
> >> providers and qualitative traits patches so I warned about adding more
> >> code that was essentially just a short-lived hack. I'd be OK adding
> >> the hack code if there were some big bright WARNINGs placed in there
> >> that likely the code would be removed in Ocata.
> >>
> >
> > While :
> > #1 the first change about setting root_gb equals 0 in RequestSpec for
> > making sure BFV instances are correctly using the DiskFilter is fine by
> > me having it merged with a TODO/FIXME saying that the code would be
> > expired once the scheduler uses the resource providers,
> > #2, then the second patch about trying to look at the BDMs for
> > DiskFilter is very wrong by me, because the Compute API shouldn't accept
> > IMHO to ask for a flavor *and* a BDM with a related disk size different
> > from the flavor. AFAIT, we should return a 40x (probably 409 or 400) for
> > that instead of accepting it silently.
> 
> Well, a flavor is always required when launching an instance. I wasn't 
> aware until recently that one could "override" the root GB (or eph/swap) 
> sizes in the API. Apparently, the API allows it, though, even though the 
> code ignores whatever was passed as the override value. If the API 
> supports it, I think the code should probably be changed to override the 
> size values to whatever the user entered, no?

That's the question here. There's a lot of desire to have the overrides
be the values used in scheduling, since currently the flavor values are
used, but making that change potentially impacts how flavors are packed
onto hosts for some deployers. The main thing I want to get out of this
thread is if that's okay.

The sense I get so far is that it's okay to make the change to have bdm
values be passed to the scheduler, via RequestSpec modifications, and
deployers can rely upon CPU/RAM constraints to determine packing. So the
main thing to do is have good release notes about the change.

> 
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Sylvain Bauza



Le 29/08/2016 14:20, Jay Pipes a écrit :

On 08/29/2016 05:11 AM, Sylvain Bauza wrote:

Le 29/08/2016 13:25, Jay Pipes a écrit :

On 08/26/2016 09:20 AM, Ed Leafe wrote:

On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:


One other thing to note is that while a flavor constrains how much
local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.


This kind of goes to the heart of the argument against flavors being
the sole source of truth for a request. As cloud evolves, we keep
packing more and more stuff into a concept that was originally meant
to only divide up resources that came bundled together (CPU, RAM, and
local disk). This hasn’t been a good solution for years, and the
sooner we start accepting that a request can be much more complex
than a flavor can adequately express, the better.

If we have decided that remote volumes are a good thing (I don’t
think there’s any argument there), then we should treat that part of
the request as being as fundamental as a flavor. We need to make the
scheduler smarter so that it doesn’t rely on flavor as being the only
source of truth.

The first step to improving Nova is admitting we have a problem. :)


FWIW, I agree with you on the above. The issue I had with the proposed
patches was that they would essentially be a hack for a short period
of time until the resource providers work standardized the way that
DISK_GB resources were tracked -- including for providers of shared
disk storage.

I've long felt that flavors as a concept should be, as Chris so
adeptly wrote, "UI furniture" and should be decomposed into their
requisite lists of resource amounts, required traits and preferred
traits and that those decomposed parts are what should be passed to
the Compute API, not a flavor ID.

But again, we're actively changing all this code in the resource
providers and qualitative traits patches so I warned about adding more
code that was essentially just a short-lived hack. I'd be OK adding
the hack code if there were some big bright WARNINGs placed in there
that likely the code would be removed in Ocata.



While :
#1 the first change about setting root_gb equals 0 in RequestSpec for
making sure BFV instances are correctly using the DiskFilter is fine by
me having it merged with a TODO/FIXME saying that the code would be
expired once the scheduler uses the resource providers,
#2, then the second patch about trying to look at the BDMs for
DiskFilter is very wrong by me, because the Compute API shouldn't accept
IMHO to ask for a flavor *and* a BDM with a related disk size different
from the flavor. AFAIT, we should return a 40x (probably 409 or 400) for
that instead of accepting it silently.


Well, a flavor is always required when launching an instance. I wasn't 
aware until recently that one could "override" the root GB (or 
eph/swap) sizes in the API. Apparently, the API allows it, though, 
even though the code ignores whatever was passed as the override 
value. If the API supports it, I think the code should probably be 
changed to override the size values to whatever the user entered, no?


I'm very sad to see this is possible. FWIW, I think that the flavor should 
always be the source of truth for knowing the requested disk size (if not, 
imagine all the conditionals in the code...) and we should somehow reconcile 
what the flavor provided and what the user explicitly asked for.


Having the possibility to trample the flavor size seems to be very bad 
UX (I guess most operators don't think about that when trying to 
understand why an instance has a different size from the related 
flavor), hence my thinking that we should instead discuss a possible 
new microversion for returning a 400 when both sizes are not identical.


The other option I could see would be to have the nested flavor in the 
RequestSpec reconcile the two, making it different from the 
user-provided flavor (e.g. for a flavor with swap=10G and a BDM with 
swap=1G, RequestSpec.flavor would have swap=1G), but given we don't 
expose the RequestSpec objects at the API level, that still leaves 
operators possibly confused.


-Sylvain



-jay

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [os-vif] [neutron] Race in setting up linux bridge

2016-08-29 Thread Sean Dague
On 08/29/2016 08:29 AM, Kevin Benton wrote:
> Sort of. The neutron agent code doesn't use os-vif because the os-vif
> devs indicated that neutron's vif plugging code wasn't a use case they
> cared about [1].
> 
> So if we did generalize os-vif to work with the neutron agents then it
> would be two calling the same locked code. But at this point it's just
> two versions of similar logic trying to do the same thing.
> 
> 1. https://review.openstack.org/#/c/284209/

Calling the same locked code wouldn't help, these are different services
that in *almost* all real deployments are running under different user
ids. Which means shared locks between them are basically not possible.
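
Which points back at Armando's suggestion of simply making the operation
tolerant of the race. A rough shell sketch of that idea (not the actual
os-vif or Neutron code; brq-example is a made-up bridge name):

  ip link add name brq-example type bridge \
    || [ -d /sys/class/net/brq-example/bridge ]  # lost the race: bridge already exists, fine
  ip link set brq-example up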

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [os-vif] [neutron] Race in setting up linux bridge

2016-08-29 Thread Kevin Benton
Sort of. The neutron agent code doesn't use os-vif because the os-vif devs
indicated that neutron's vif plugging code wasn't a use case they cared
about [1].

So if we did generalize os-vif to work with the neutron agents then it
would be two services calling the same locked code. But at this point it's just two
versions of similar logic trying to do the same thing.

1. https://review.openstack.org/#/c/284209/

On Aug 29, 2016 05:56, "Sean Dague"  wrote:

> On 08/26/2016 05:23 PM, Armando M. wrote:
> > Folks,
> >
> > Today I spotted [1]. It turns out Neutron and Nova might be racing
> > trying to set up the bridge to provide VM with connectivity/dhcp. In the
> > observed failure mode, os-vif fails in [2].
> >
> > I suppose we might need to protect the bridge creation and make it
> > handle the potential exception. We would need a similar fix for Neutron
> > in [3].
> >
> > That said, knowing there is a looming deadline [4], I'd invite folks to
> > keep an eye on the bug.
> >
> > Many thanks,
> > Armando
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1617447
> > [2] https://github.com/openstack/os-vif/blob/master/vif_plug_
> linux_bridge/linux_net.py#L125
> > [3] http://git.openstack.org/cgit/openstack/neutron/tree/
> neutron/agent/linux/bridge_lib.py#n58
> > [4] http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/102339.html
>
> Is this another issue where 2 processes are calling "locked" code, but
> from different projects, so the locks have no impact? We just went
> through this with os-brick and had to mitigate with a retry block.
>
> We probably need to make some strong rules about what locking can look
> like in these common libraries, because they have a lot of ported code
> that doesn't really work when communicating between 2 services.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing upalltheinterfaces

2016-08-29 Thread Abhilash Goyal
The default cirros image that is added at the time of installation.
On 29 Aug 2016 16:25, "Zhi Chang"  wrote:

> My cirros image works ok. What version about your cirros image?
>
>
> -- Original --
> *From: * "Abhilash Goyal";
> *Date: * Mon, Aug 29, 2016 06:43 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] tacker vnf-create is not bringing
> upalltheinterfaces
>
> Hello Chang,
> thanks a lot, this image worked. Could you guide me on doing the same for the
> cirros image?
>
>
> On Mon, Aug 29, 2016 at 2:54 PM, Zhi Chang 
> wrote:
>
>> The OpenWRT image needs DHCP enabled on its first NIC.
>>
>>
>> -- Original --
>> *From: * "Abhilash Goyal";
>> *Date: * Mon, Aug 29, 2016 05:35 PM
>> *To: * "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] tacker vnf-create is not bringing up
>> alltheinterfaces
>>
>> Hi Chang,
>>
>> I am using https://downloads.openwrt.org/chaos_calmer/15.05/x86/
>> kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz image of
>> openWRT.
>> This feature is not working for Cirros either.
>>
>> On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang 
>> wrote:
>>
>>> Hi, Goyal.
>>>
>>> What version about your OpenWRT image? You can get OpenWRT image
>>> from  this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3J
>>> LRWhnLTA
>>>
>>>
>>> Thanks
>>> Zhi Chang
>>>
>>> -- Original --
>>> *From: * "Abhilash Goyal";
>>> *Date: * Mon, Aug 29, 2016 05:18 PM
>>> *To: * "openstack-dev";
>>> *Subject: * [openstack-dev] tacker vnf-create is not bringing up all
>>> theinterfaces
>>>
>>> [Tacker]
>>> Hello team,
>>> I am trying to make an OpenWRT VNF through tacker using this vnf-d
>>> .
>>> VNF is spawning successfully, but expected VNF should have 3 connecting
>>> points with first one in management network, but this is not happening. It
>>> is getting up with default network configuration because of this, IPs are
>>> not getting assigned to it automatically.
>>> Guidance would be appreciated.
>>>
>>> --
>>> Abhilash Goyal
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Abhilash Goyal
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Abhilash Goyal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-29 Thread Denis Egorenko
Thank you all! I will do my best! :)

2016-08-29 15:21 GMT+03:00 Maksim Malchuk :

> Although today is not the 31 Aug and we have a lack of core-reviewers in
> the fuel-library I would like to speed up the process a little.
>
> And as there are no objections so, please welcome Denis as he's just
> joined fuel-library Core Team!
>
> On Mon, Aug 29, 2016 at 3:15 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Although I am not a core in fuel-library, I am voting +1.
>>
>> Vladimir Kozhukalov
>>
>> On Fri, Aug 26, 2016 at 1:27 PM, Ivan Berezovskiy <
>> iberezovs...@mirantis.com> wrote:
>>
>>> +1, great job!
>>>
>>> 2016-08-26 10:33 GMT+03:00 Bogdan Dobrelya :
>>>
 +1

 On 25.08.2016 21:08, Stanislaw Bogatkin wrote:
 > +1
 >
 > On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko
 > > wrote:
 >
 > +1
 >
 > On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko
 > > wrote:
 >
 > +1
 >
 >
 > /sv
 >
 >
 > ___
 ___
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > 
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/opensta
 ck-dev 
 >
 >
 >
 > ___
 ___
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > 
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 
 >
 >
 >
 >
 > --
 > with best regards,
 > Stan.
 >
 >
 > 
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 --
 Best regards,
 Bogdan Dobrelya,
 Irc #bogdando

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Thanks, Ivan Berezovskiy
>>> MOS Puppet Team Lead
>>> at Mirantis 
>>>
>>> slack: iberezovskiy
>>> skype: bouhforever
>>> phone: + 7-960-343-42-46
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Maksim Malchuk,
> Senior DevOps Engineer,
> MOS: Product Engineering,
> Mirantis, Inc
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-29 Thread Maksim Malchuk
Although today is not yet the 31st of August, we have a lack of core reviewers in
fuel-library, so I would like to speed up the process a little.

And as there are no objections, please welcome Denis, who has just joined the
fuel-library Core Team!

On Mon, Aug 29, 2016 at 3:15 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Although I am not a core in fuel-library, I am voting +1.
>
> Vladimir Kozhukalov
>
> On Fri, Aug 26, 2016 at 1:27 PM, Ivan Berezovskiy <
> iberezovs...@mirantis.com> wrote:
>
>> +1, great job!
>>
>> 2016-08-26 10:33 GMT+03:00 Bogdan Dobrelya :
>>
>>> +1
>>>
>>> On 25.08.2016 21:08, Stanislaw Bogatkin wrote:
>>> > +1
>>> >
>>> > On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko
>>> > > wrote:
>>> >
>>> > +1
>>> >
>>> > On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko
>>> > > wrote:
>>> >
>>> > +1
>>> >
>>> >
>>> > /sv
>>> >
>>> >
>>> > ___
>>> ___
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > >> unsubscribe>
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/opensta
>>> ck-dev >> ck-dev>
>>> >
>>> >
>>> >
>>> > ___
>>> ___
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > >> unsubscribe>
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> > >> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > with best regards,
>>> > Stan.
>>> >
>>> >
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Thanks, Ivan Berezovskiy
>> MOS Puppet Team Lead
>> at Mirantis 
>>
>> slack: iberezovskiy
>> skype: bouhforever
>> phone: + 7-960-343-42-46
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Jay Pipes

On 08/29/2016 05:11 AM, Sylvain Bauza wrote:

Le 29/08/2016 13:25, Jay Pipes a écrit :

On 08/26/2016 09:20 AM, Ed Leafe wrote:

On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:


One other thing to note is that while a flavor constrains how much
local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.


This kind of goes to the heart of the argument against flavors being
the sole source of truth for a request. As cloud evolves, we keep
packing more and more stuff into a concept that was originally meant
to only divide up resources that came bundled together (CPU, RAM, and
local disk). This hasn’t been a good solution for years, and the
sooner we start accepting that a request can be much more complex
than a flavor can adequately express, the better.

If we have decided that remote volumes are a good thing (I don’t
think there’s any argument there), then we should treat that part of
the request as being as fundamental as a flavor. We need to make the
scheduler smarter so that it doesn’t rely on flavor as being the only
source of truth.

The first step to improving Nova is admitting we have a problem. :)


FWIW, I agree with you on the above. The issue I had with the proposed
patches was that they would essentially be a hack for a short period
of time until the resource providers work standardized the way that
DISK_GB resources were tracked -- including for providers of shared
disk storage.

I've long felt that flavors as a concept should be, as Chris so
adeptly wrote, "UI furniture" and should be decomposed into their
requisite lists of resource amounts, required traits and preferred
traits and that those decomposed parts are what should be passed to
the Compute API, not a flavor ID.

But again, we're actively changing all this code in the resource
providers and qualitative traits patches so I warned about adding more
code that was essentially just a short-lived hack. I'd be OK adding
the hack code if there were some big bright WARNINGs placed in there
that likely the code would be removed in Ocata.



While:
#1 the first change, setting root_gb to 0 in the RequestSpec to make sure
BFV instances are correctly handled by the DiskFilter, is fine by me to
merge with a TODO/FIXME saying that the code would be removed once the
scheduler uses the resource providers,
#2 the second patch, which tries to look at the BDMs for the DiskFilter,
seems very wrong to me, because IMHO the Compute API shouldn't accept a
request for a flavor *and* a BDM with a disk size different from the
flavor. AFAICT, we should return a 40x (probably 409 or 400) for that
instead of accepting it silently.


Well, a flavor is always required when launching an instance. I wasn't 
aware until recently that one could "override" the root GB (or eph/swap) 
sizes in the API. Apparently, the API allows it, though, even though the 
code ignores whatever was passed as the override value. If the API 
supports it, I think the code should probably be changed to override the 
size values to whatever the user entered, no?
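
Either way, the decision point boils down to something like this (just a
sketch to make the two options concrete -- not actual Nova code, and the
names are illustrative):

    # Sketch only -- not actual Nova code; names are illustrative.
    def pick_root_disk_gb(flavor_root_gb, bdm_root_gb, honor_override=False):
        """Decide what root disk size the scheduler should account for."""
        if bdm_root_gb is None or bdm_root_gb == flavor_root_gb:
            return flavor_root_gb
        if honor_override:
            # honor whatever the user entered in the BDM override
            return bdm_root_gb
        # Sylvain's suggestion: reject the inconsistent request with a 400/409
        raise ValueError("BDM root disk %dGB conflicts with flavor root_gb %dGB"
                         % (bdm_root_gb, flavor_root_gb))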


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Barcelona Design Summit space needs

2016-08-29 Thread Emilien Macchi
On Mon, Aug 22, 2016 at 1:26 PM, Cody Herriges  wrote:
>
> On August 18, 2016 at 05:12:56, Emilien Macchi (emil...@redhat.com) wrote:
>
> 
>>
>> Note:
>> - Ops summit on Tuesday morning until 4pm
>> - Cross-project workshops from Tuesday 4pm to Wednesday 4pm
>>
>> As a reminder, here's what we asked for Austin:
>> Fishbowl slots (Wed-Thu): 2
>> Workroom slots (Tue-Thu): 3
>> Contributors meetup (Fri): 1
>>
>> Those who were here can also remember we didn't need all those rooms.
>> I suggest this time we ask for 2 Workroom slots (max 3) and that's it.
>> I'm not sure we actually need Fishbowl and Contributor meetup slots,
>> but feel free to propose if I'm wrong.
>>
>
> I agree, we had many more sessions than we needed in Austin. A single
> fishbowl is the only other one I'd think would be of use.
>
>>
>> I created an etherpad for topic ideas, feel free to start thinking about
>> it:
>> https://etherpad.openstack.org/p/ocata-puppet
>
>
> --
> Cody Herriges

I'm going to request 2 workrooms and 1 fishbowl by the end of the day.
Feel free to comment / give feedback.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-29 Thread Steven Hardy
On Mon, Aug 29, 2016 at 12:18:17PM +0300, Juan Antonio Osorio wrote:
>On Fri, Aug 26, 2016 at 7:14 PM, Steven Hardy  wrote:
> 
>  Hi all,
> 
>  There have been some discussions on $subject recently, so I wanted to
>  give
>  a status update.
> 
>  Next week we will tag our newton-3 release, and we're currently working
>  to
>  either land or defer the remaining features tracked here:
> 
>  https://launchpad.net/tripleo/+milestone/newton-3
> 
>  We need to land as many of the "Needs Code Review" features as we can
>  before cutting the release next week, so please help by prioritizing
>  your
>  reviews.
> 
>  --
>  Feature Freeze
>  --
> 
>  After newton-3 is released, we'll have reached Feature Freeze for
>  Newton,
>  and any features landed after this point must be agreed as feature
>  freeze
>  exceptions (everything else will be deferred to Ocata) - the process is
>  to
>  mail this list with a justification, and details of which patches need
>  to
>  land, then we'll reach consensus over if it will be accepted or not
>  based
>  on the level of risk, and the status of the patches.
> 
>  Currently there are three potential FFEs which I'm aware of:
> 
>  1. Mistral API
> 
>  We've made good progress on this over recent weeks, but several patches
>  remain - this is the umbrella BP, and it links several dependent BPs
>  which
>  are mostly posted but need code reviews, please help by testing and
>  reviewing these:
> 
>  https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library
> 
>  2. Composable Roles
> 
>  There are two parts to this, some remaining cleanups related to
>  per-service
>  configuration (such as bind_ip's) which need to land, and the related
>  custom-roles feature:
> 
>  https://bugs.launchpad.net/tripleo/+bug/1604414
> 
>  https://blueprints.launchpad.net/tripleo/+spec/custom-roles
> 
>  Some patches still need to be fixed or written to enable custom-roles -
>  it's a stretch but I'd say a FFE may be reasonable provided we can get
>  the
>  remaining patches functional in the next few days (I'm planning to do
>  this)
> 
>  3. Contrail integration
> 
>  There are patches posted for this, but they need work - Carlos is
>  helping
>  so I'd suggest it should be possible to land these as a FFE (should be
>  low
>  risk as it's all disabled by default)
> 
>  https://blueprints.launchpad.net/tripleo/+spec/contrail-services
> 
>  These are the main features I'm aware of that are targetted to newton-3
>  but will probably slip, are there others folks want to raise?
> 
>Well, https://review.openstack.org/#/c/282307/ is still in progress and
>will need an FFE. What's the official process to apply for the exception?

Just mail this list, either on this thread or a new one, providing details
of the justification, risk/benefit, and a detailed list of the outstanding
patches. If folks agree a FFE is reasonable (they can reply to the ML
and/or we'll discuss it in IRC), I'll move the BP to our RC1 milestone in
Launchpad.

I've not yet done that for custom-roles so here's my FFE request for
consideration, you can reply here in a similar format for tls-via-certmonger

FFE request for custom-roles:

Blueprint link: https://blueprints.launchpad.net/tripleo/+spec/custom-roles

Status:

It's about 90% complete; a few of the posted patches need some work, and
the final jinja2 refactor of overcloud.yaml isn't finished. I expect all
remaining patches can be posted by the end of this week.

This has been blocked firstly on the composable services refactoring, and
more recently on the Mistral API work, both landed much later than
anticipated, and like everyone I've been struggling with pretty slow review
progress vs a long series of patches.

Justification/Benefit:

A lot of folks have been asking for this, and without the ability to spin
up arbitrary role types, a lot of the value of the effort we put into
composable services is diminished (because you still can't independently
scale anything).

Risk:

This is moderately risky, because some significant refactoring of
allNodesConfig and VipConfig has not yet landed.  I feel OK about this
other than the risk around anything we're not currently testing in CI; e.g.
we disabled IPv6 testing, so that will definitely need to be carefully
verified (and/or a CI test reinstated) prior to the final Newton release.

Detailed patch status:

Waiting on CI but ready for review:
https://review.openstack.org/#/c/361108/
https://review.openstack.org/#/c/355067
https://review.openstack.org/#/c/355068/
https://review.openstack.org/#/c/361731
https://review.openstack.org/#/c/361730/

WIP (I'm planning to update these later as there's some remaining issues):
https://review.openstack.org/#/c/361777/

Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-29 Thread Vladimir Kozhukalov
Although I am not a core in fuel-library, I am voting +1.

Vladimir Kozhukalov

On Fri, Aug 26, 2016 at 1:27 PM, Ivan Berezovskiy  wrote:

> +1, great job!
>
> 2016-08-26 10:33 GMT+03:00 Bogdan Dobrelya :
>
>> +1
>>
>> On 25.08.2016 21:08, Stanislaw Bogatkin wrote:
>> > +1
>> >
>> > On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko
>> > > wrote:
>> >
>> > +1
>> >
>> > On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko
>> > > wrote:
>> >
>> > +1
>> >
>> >
>> > /sv
>> >
>> >
>> >
>> >
>> >
>> > 
>> >
>> >
>> >
>> >
>> > --
>> > with best regards,
>> > Stan.
>> >
>> >
>> >
>>
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>>
>
>
>
> --
> Thanks, Ivan Berezovskiy
> MOS Puppet Team Lead
> at Mirantis 
>
> slack: iberezovskiy
> skype: bouhforever
> phone: + 7-960-343-42-46
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Sylvain Bauza



On 29/08/2016 13:25, Jay Pipes wrote:

On 08/26/2016 09:20 AM, Ed Leafe wrote:

On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:

One other thing to note is that while a flavor constrains how much 
local

disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.


This kind of goes to the heart of the argument against flavors being 
the sole source of truth for a request. As cloud evolves, we keep 
packing more and more stuff into a concept that was originally meant 
to only divide up resources that came bundled together (CPU, RAM, and 
local disk). This hasn’t been a good solution for years, and the 
sooner we start accepting that a request can be much more complex 
than a flavor can adequately express, the better.


If we have decided that remote volumes are a good thing (I don’t 
think there’s any argument there), then we should treat that part of 
the request as being as fundamental as a flavor. We need to make the 
scheduler smarter so that it doesn’t rely on flavor as being the only 
source of truth.


The first step to improving Nova is admitting we have a problem. :)


FWIW, I agree with you on the above. The issue I had with the proposed 
patches was that they would essentially be a hack for a short period 
of time until the resource providers work standardized the way that 
DISK_GB resources were tracked -- including for providers of shared 
disk storage.


I've long felt that flavors as a concept should be, as Chris so 
adeptly wrote, "UI furniture" and should be decomposed into their 
requisite lists of resource amounts, required traits and preferred 
traits and that those decomposed parts are what should be passed to 
the Compute API, not a flavor ID.


But again, we're actively changing all this code in the resource 
providers and qualitative traits patches so I warned about adding more 
code that was essentially just a short-lived hack. I'd be OK adding 
the hack code if there were some big bright WARNINGs placed in there 
that likely the code would be removed in Ocata.




While:
#1 the first change, setting root_gb to 0 in the RequestSpec to make sure
BFV instances are correctly handled by the DiskFilter, is fine by me to
merge with a TODO/FIXME saying that the code would be removed once the
scheduler uses the resource providers,
#2 the second patch, which tries to look at the BDMs for the DiskFilter,
seems very wrong to me, because IMHO the Compute API shouldn't accept a
request for a flavor *and* a BDM with a disk size different from the
flavor. AFAICT, we should return a 40x (probably 409 or 400) for that
instead of accepting it silently.


-Sylvain



-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [os-vif] [neutron] Race in setting up linux bridge

2016-08-29 Thread Sean Dague
On 08/26/2016 05:23 PM, Armando M. wrote:
> Folks,
> 
> Today I spotted [1]. It turns out Neutron and Nova might be racing
> trying to set up the bridge to provide VM with connectivity/dhcp. In the
> observed failure mode, os-vif fails in [2].
> 
> I suppose we might need to protect the bridge creation and make it
> handle the potential exception. We would need a similar fix for Neutron
> in [3].
> 
> That said, knowing there is a looming deadline [4], I'd invite folks to
> keep an eye on the bug.
> 
> Many thanks,
> Armando
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1617447
> [2] 
> https://github.com/openstack/os-vif/blob/master/vif_plug_linux_bridge/linux_net.py#L125
> [3] 
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/linux/bridge_lib.py#n58
> [4] http://lists.openstack.org/pipermail/openstack-dev/2016-August/102339.html

Is this another issue where 2 processes are calling "locked" code, but
from different projects, so the locks have no impact? We just went
through this with os-brick and had to mitigate with a retry block.

We probably need to make some strong rules about what locking can look
like in these common libraries, because they have a lot of ported code
that doesn't really work when communicating between 2 services.
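
For cross-service cases like this the lock has to live somewhere both
processes can see it, and callers have to expect contention. A minimal
sketch of that pattern (an external file lock plus a retry loop -- not
what os-vif or os-brick actually ship, just the shape of it):

    # Sketch of the pattern only -- not code from os-vif or os-brick.
    import errno
    import fcntl
    import os
    import time

    def with_bridge_lock(bridge_name, func, retries=10, delay=0.5,
                         lock_dir="/var/run/lock"):
        """Run func() while holding a file lock visible to both services."""
        # Both services have to agree on lock_dir, which is exactly the
        # cross-project coordination problem described above.
        path = os.path.join(lock_dir, "bridge-%s.lock" % bridge_name)
        with open(path, "w") as f:
            for _ in range(retries):
                try:
                    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                except IOError as e:
                    if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                        raise
                    time.sleep(delay)
                    continue
                try:
                    return func()
                finally:
                    fcntl.flock(f, fcntl.LOCK_UN)
            raise RuntimeError("could not lock bridge %s" % bridge_name)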

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Yong Sheng Gong for Tacker core team

2016-08-29 Thread Kanagaraj Manickam
+1.

Congrats yong.

Thanks & Regards
Kanagaraj M

On Aug 27, 2016 2:12 AM, "Sridhar Ramaswamy"  wrote:

We have enough votes to proceed.

Yong - welcome to the Tacker core team!

- Sridhar

On Tue, Aug 23, 2016 at 12:06 PM, Stephen Wong 
wrote:

> +1
>
> On Tue, Aug 23, 2016 at 8:55 AM, Sridhar Ramaswamy 
> wrote:
>
>> Tackers,
>>
>> I'd like to propose Yong Sheng Gong to join the Tacker core team. Yong is
>> a seasoned OpenStacker and has been contributing to Tacker project since
>> Nov 2015 (early Mitaka). He has been the major force in helping Tacker to
>> shed its *Neutronisms*. He has low tolerance on unevenness in the code
>> base and he fixes them as he goes. Yong also participated in the Enhanced
>> Placement Awareness (EPA) blueprint in the Mitaka cycle. For Newton he took
>> up himself cleaning up the DB schema and in numerous reviews to keep the
>> project going. He has been a dependable member of the Tacker community [1].
>>
>> Please chime in with your +1 / -1 votes.
>>
>> thanks,
>> Sridhar
>>
>> [1] http://stackalytics.com/report/contribution/tacker/90
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-08-29 Thread tie...@vn.fujitsu.com
Hi Matt, Dan, Andrew,

@Matt: Hope you had a nice vacation.

For the Nova serial console support for Ironic feature [1][2], there are some
good updates from the Ironic side. All of our Ironic-side work [3][4][5] has
been done; currently only the Nova patch still needs review.

Last week I contacted Andrew and Dan about reviewing it, but both of them said
they hadn't noticed you had removed the -2 from the patch. So, Matt, can you
notify the Nova core team about that so Andrew and Dan can review it again?

[1] https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-support
[2] https://review.openstack.org/#/c/328157/  (Nova patch, in review)
[3] https://review.openstack.org/#/c/319505/  (Ironic spec, merged)
[4] https://review.openstack.org/#/c/328168/  (Ironic patch, merged)
[5] https://review.openstack.org/#/c/293873/  (Ironic patch, merged)

Thanks and Regards
TienDC

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Thursday, July 07, 2016 3:16 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Non-priority feature freeze and FFEs

On 7/5/2016 2:14 AM, tie...@vn.fujitsu.com wrote:
> Hi folks,
>
> I want to give more information about our nova patch for bp 
> ironic-serial-console-support. The whole feature needs work to be done in 
> Nova and Ironic. The nova bp [1] has been approved, and the Ironic spec [2] 
> has been merged.
>
> This nova patch [3] is simple, we got some reviews by some Nova and Ironic 
> core reviewers. The depended patches in Ironic are [4][5] which [4] will get 
> merged soon and [5] is in review progress.
>
> Hope Nova core team considers adding this case to the exception list.
>
> [1] 
> https://blueprints.launchpad.net/nova/+spec/ironic-serial-console-supp
> ort  (Nova bp, approved by dansmith) [2] 
> https://review.openstack.org/#/c/319505/  (Ironic spec, merged)
>
> [3] https://review.openstack.org/#/c/328157/  (Nova patch, in review) 
> [4] https://review.openstack.org/#/c/328168/  (Ironic patch 1st, got 
> two +2, will get merged soon) [5] 
> https://review.openstack.org/#/c/293873/  (Ironic patch 2nd, in 
> review)
>
> Thanks and Regards
> Dao Cong Tien
>
>

When I looked last week the nova change was dependent on multiple ironic
patches which weren't merged yet, so it wasn't ready to go for the non-priority
feature freeze. The ironic changes weren't all merged yet either when we were
going over FFE candidates. So this is going to have to wait for Ocata.

-- 

Thanks,

Matt Riedemann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Jay Pipes

On 08/26/2016 09:20 AM, Ed Leafe wrote:

On Aug 25, 2016, at 3:19 PM, Andrew Laski  wrote:


One other thing to note is that while a flavor constrains how much local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.


This kind of goes to the heart of the argument against flavors being the sole 
source of truth for a request. As cloud evolves, we keep packing more and more 
stuff into a concept that was originally meant to only divide up resources that 
came bundled together (CPU, RAM, and local disk). This hasn’t been a good 
solution for years, and the sooner we start accepting that a request can be 
much more complex than a flavor can adequately express, the better.

If we have decided that remote volumes are a good thing (I don’t think there’s 
any argument there), then we should treat that part of the request as being as 
fundamental as a flavor. We need to make the scheduler smarter so that it 
doesn’t rely on flavor as being the only source of truth.

The first step to improving Nova is admitting we have a problem. :)


FWIW, I agree with you on the above. The issue I had with the proposed 
patches was that they would essentially be a hack for a short period of 
time until the resource providers work standardized the way that DISK_GB 
resources were tracked -- including for providers of shared disk storage.


I've long felt that flavors as a concept should be, as Chris so adeptly 
wrote, "UI furniture" and should be decomposed into their requisite 
lists of resource amounts, required traits and preferred traits and that 
those decomposed parts are what should be passed to the Compute API, not 
a flavor ID.
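
For illustration only, such a decomposed request might look something like
the following -- the layout, resource class names and trait names below are
made up for the example, not an existing Nova API:

    # Illustrative only -- not an existing Nova API; resource classes and
    # trait names are placeholders for the example.
    boot_request = {
        "resources": {
            "VCPU": 4,
            "MEMORY_MB": 8192,
            "DISK_GB": 0,  # no local disk; root disk comes from a volume
        },
        "required_traits": ["HW_CPU_X86_AVX2"],
        "preferred_traits": ["STORAGE_DISK_SSD"],
    }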


But again, we're actively changing all this code in the resource 
providers and qualitative traits patches so I warned about adding more 
code that was essentially just a short-lived hack. I'd be OK adding the 
hack code if there were some big bright WARNINGs placed in there that 
likely the code would be removed in Ocata.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Zhi Chang
My cirros image works OK. What version is your cirros image?
 
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 06:43 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] tacker vnf-create is not bringing 
upalltheinterfaces

 
Hello Chang, thanks a lot, this image worked. Could you guide me the same for 
cirros image.




On Mon, Aug 29, 2016 at 2:54 PM, Zhi Chang  wrote:
OpenWRT image should be enabled first nic's DHCP. 
 
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 05:35 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] tacker vnf-create is not bringing up 
alltheinterfaces

 
Hi Chang,

I am using 
https://downloads.openwrt.org/chaos_calmer/15.05/x86/kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz
 image of openWRT. 
This feature is not working for Cirros either.


On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang  wrote:
Hi, Goyal.


What version about your OpenWRT image? You can get OpenWRT image from  
this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA
 


Thanks
Zhi Chang
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 05:18 PM
To:  "openstack-dev"; 

Subject:  [openstack-dev] tacker vnf-create is not bringing up all theinterfaces

 
[Tacker]Hello team,
I am trying to make an OpenWRT VNF through tacker using this vnf-d. VNF is 
spawning successfully, but expected VNF should have 3 connecting points with 
first one in management network, but this is not happening. It is getting up 
with default network configuration because of this, IPs are not getting 
assigned to it automatically. 
Guidance would be appreciated.


-- 
Abhilash Goyal




 
 









-- 
Abhilash Goyal




 
 









-- 
Abhilash Goyal
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Jay Pipes

On 08/27/2016 11:16 AM, HU, BIN wrote:

The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.


No, that's not the challenge for OpenStack.

That's like saying the challenge for gasoline is how to enable the 
innovation of a jet engine.



So telco use cases is not only the innovation built on top of OpenStack. 
Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in 
OpenStack itself. If OpenStack don't address those basic requirements,


That's the thing, Bin, those are *not* "basic" requirements. The Telco 
vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking 
for fundamental architectural and design changes to the foundational 
components of OpenStack. Instead of Nova being designed to manage a 
bunch of hardware in a relatively close location (i.e. a datacenter or 
multiple datacenters), vCPE is asking for Nova to transform itself into 
a micro-agent that can be run on an Apple Watch and do things in 
resource-constrained environments that it was never built to do.


And, honestly, I have no idea what Gluon is trying to do. Ian sent me 
some information a while ago on it. I read it. I still have no idea what 
Gluon is trying to accomplish other than essentially bypassing Neutron 
entirely. That's not "innovation". That's subterfuge.



the innovation will never happen on top of OpenStack.


Sure it will. AT and BT and other Telcos just need to write their own 
software that runs their proprietary vCPE software distribution 
mechanism, that's all. The OpenStack community shouldn't be relied upon 
to create software that isn't applicable to general cloud computing and 
cloud management platforms.



An example is - self-driving car is built on top of many technologies, such as 
sensor/camera, AI, maps, middleware etc. All innovations in each technology 
(sensor/camera, AI, map, etc.) bring together the innovation of self-driving 
car.


Yes, indeed, but the people who created the self-driving car software 
didn't ask the people who created the cameras to write the software for 
them that does the self-driving.



WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.


You are defining "innovation" in an odd way, IMHO. "Innovation" for the 
vCPE use case sounds a whole lot like "rearchitect your entire software 
stack so that we don't have to write much code that runs on set-top boxes."


Just being honest,
-jay


Thanks
Bin
-Original Message-
From: Edward Leafe [mailto:e...@leafe.com]
Sent: Saturday, August 27, 2016 10:49 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Aug 27, 2016, at 12:18 PM, HU, BIN  wrote:


From telco perspective, those are the areas that allow innovation, and provide 
telco customers with new types of services.


We need innovation, starting from not limiting ourselves from bringing new idea 
and new use cases, and bringing those impossibility to reality.


There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-29 Thread Jay Pipes

On 08/28/2016 09:02 PM, joehuang wrote:

Hello, Bin,

I understand your expectation. In the Tricircle big-tent application
(https://review.openstack.org/#/c/338796/), a proposal was also made to add a
plugin mechanism in the Nova/Cinder API layer, just as Neutron supports a
plugin mechanism in its API layer; that boosts innovation by allowing
different backend implementations to be supported, from ODL to OVN to Open
Contrail.

Mobile edge computing, NFV networking, distributed edge cloud, etc. are some
new scenarios for OpenStack. I suggest having at least two successive
dedicated design summit sessions to discuss this face to face; the topics to
be discussed could be:

1. Use cases
2. Requirements in detail
3. Gaps in OpenStack
4. Proposals to be discussed

Architecture-level proposal discussion:
1. Proposals
2. Pros and cons comparison
3. Challenges
4. Next steps


Looking forward to your thoughts.


We could also have a design summit session on how to use a mail user 
agent that doesn't create new mailing list thread when you're responding 
to an existing thread. We could also include a topic about top-posting.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Regarding footer blocks

2016-08-29 Thread Paul Bourke

Kolla,

There seems to be a lot of confusion and thrashing on the customisation 
patches regarding the footer blocks.


To help this I have documented the scheme at 
https://review.openstack.org/#/c/361253/2/doc/CONTRIBUTING.rst.


Please review this and respond with questions if it doesn't make sense. 
Currently there are patches open with +2's that are incorrect.


Thanks!
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Abhilash Goyal
Hello Chang,
Thanks a lot, this image worked. Could you guide me through the same for the
cirros image?


On Mon, Aug 29, 2016 at 2:54 PM, Zhi Chang  wrote:

> OpenWRT image should be enabled first nic's DHCP.
>
>
> -- Original --
> *From: * "Abhilash Goyal";
> *Date: * Mon, Aug 29, 2016 05:35 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] tacker vnf-create is not bringing up
> alltheinterfaces
>
> Hi Chang,
>
> I am using https://downloads.openwrt.org/chaos_calmer/15.
> 05/x86/kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz image
> of openWRT.
> This feature is not working for Cirros either.
>
> On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang 
> wrote:
>
>> Hi, Goyal.
>>
>> What version about your OpenWRT image? You can get OpenWRT image from
>>  this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA
>>
>>
>> Thanks
>> Zhi Chang
>>
>> -- Original --
>> *From: * "Abhilash Goyal";
>> *Date: * Mon, Aug 29, 2016 05:18 PM
>> *To: * "openstack-dev";
>> *Subject: * [openstack-dev] tacker vnf-create is not bringing up all
>> theinterfaces
>>
>> [Tacker]
>> Hello team,
>> I am trying to make an OpenWRT VNF through tacker using this vnf-d
>> .
>> VNF is spawning successfully, but expected VNF should have 3 connecting
>> points with first one in management network, but this is not happening. It
>> is getting up with default network configuration because of this, IPs are
>> not getting assigned to it automatically.
>> Guidance would be appreciated.
>>
>> --
>> Abhilash Goyal
>>
>>
>> 
>>
>>
>
>
> --
> Abhilash Goyal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Abhilash Goyal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Sylvain Bauza



On 26/08/2016 12:33, Chris Dent wrote:

On Thu, 25 Aug 2016, Sylvain Bauza wrote:

Of course, long-term, we could try to see how to have composite 
flavors for helping users to not create a whole handful of flavors 
for quite the same user requests, but that would still be flavors (or 
the name for saying a flavor composition).


long-term flavors should be a piece of UI furniture that is present in a
human-oriented-non-nova UI/API that provides raw information to the
computers-talking-to-computers API that is provided by nova.

But that's very long term.



Here, I didn't want to discuss the long-term strategy about what a
"composite" flavor could be (even if I tend to agree with you on the
above), but rather to explain that the "flavor" (i.e. the user-provided
piece of information that defines the request constraints) should be kept
as the only source of truth.


TBH, I very much dislike the fact that at the API level we can set a BDM
size very different from the one the flavor gave (for the same volume
type). In the CLI it's even worse: we treat "ephemeral" and "swap" as
things totally unrelated to a flavor *facepalm*






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [cinder] [api] [doc] API status report

2016-08-29 Thread Andreas Jaeger
On 2016-08-28 21:57, Sean McGinnis wrote:
> On Sat, Aug 27, 2016 at 07:46:48PM +0200, Andreas Jaeger wrote:
>> On 08/26/2016 11:33 PM, Anne Gentle wrote:
>>> Hi cinder block storage peeps: 
>>>
>>> I haven't heard from you on your comfort level with publishing so I went
>>> ahead and made the publishing job myself with this review:
>>>
>>> https://review.openstack.org/361475
>>>
>>> Please let me know your thoughts there. Is the document ready to
>>> publish? Need anything else to get comfy? Let me know.
>>>
>>
>> The current api-ref does not build at all, let's not merge 361475 yet.
>>
>> I've rebased https://review.openstack.org/#/c/322489 and added
>> https://review.openstack.org/#/c/361616 so that the cinder api-ref
>> follows the same patterns as other repositories - including building and
>> review on docs-draft.
>>
>> Once those two are in, we can merge 361475.
>>
>> Cinder team, could you prioritize these reviews, please?
> 
> Merged.
> 
> Thanks Andreas and Anne!
> 
> I'm sure there will be some things we will need to fix or adjust, but I
> think this is a good time to get it published. That will help as well
> with getting visibility on it and start identifying those issues.
> 
> Sorry for the delay getting back to you Anne. Thanks for moving that
> forward.

And published now at:
http://developer.openstack.org/api-ref/block-storage/

Direct links for V1 and V2:
http://developer.openstack.org/api-ref/block-storage/v1/index.html
http://developer.openstack.org/api-ref/block-storage/v2/index.html

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Zhi Chang
The OpenWRT image should have DHCP enabled on its first NIC.
 
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 05:35 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] tacker vnf-create is not bringing up 
alltheinterfaces

 
Hi Chang,

I am using 
https://downloads.openwrt.org/chaos_calmer/15.05/x86/kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz
 image of openWRT. 
This feature is not working for Cirros either.


On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang  wrote:
Hi, Goyal.


What version about your OpenWRT image? You can get OpenWRT image from  
this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA
 


Thanks
Zhi Chang
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 05:18 PM
To:  "openstack-dev"; 

Subject:  [openstack-dev] tacker vnf-create is not bringing up all theinterfaces

 
[Tacker]Hello team,
I am trying to make an OpenWRT VNF through tacker using this vnf-d. VNF is 
spawning successfully, but expected VNF should have 3 connecting points with 
first one in management network, but this is not happening. It is getting up 
with default network configuration because of this, IPs are not getting 
assigned to it automatically. 
Guidance would be appreciated.


-- 
Abhilash Goyal




 
 









-- 
Abhilash Goyal
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-29 Thread Juan Antonio Osorio
On Fri, Aug 26, 2016 at 7:14 PM, Steven Hardy  wrote:

> Hi all,
>
> There have been some discussions on $subject recently, so I wanted to give
> a status update.
>
> Next week we will tag our newton-3 release, and we're currently working to
> either land or defer the remaining features tracked here:
>
> https://launchpad.net/tripleo/+milestone/newton-3
>
> We need to land as many of the "Needs Code Review" features as we can
> before cutting the release next week, so please help by prioritizing your
> reviews.
>
> --
> Feature Freeze
> --
>
> After newton-3 is released, we'll have reached Feature Freeze for Newton,
> and any features landed after this point must be agreed as feature freeze
> exceptions (everything else will be deferred to Ocata) - the process is to
> mail this list with a justification, and details of which patches need to
> land, then we'll reach consensus over if it will be accepted or not based
> on the level of risk, and the status of the patches.
>
> Currently there are three potential FFEs which I'm aware of:
>
> 1. Mistral API
>
> We've made good progress on this over recent weeks, but several patches
> remain - this is the umbrella BP, and it links several dependent BPs which
> are mostly posted but need code reviews, please help by testing and
> reviewing these:
>
> https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library
>
> 2. Composable Roles
>
> There are two parts to this, some remaining cleanups related to per-service
> configuration (such as bind_ip's) which need to land, and the related
> custom-roles feature:
>
> https://bugs.launchpad.net/tripleo/+bug/1604414
>
> https://blueprints.launchpad.net/tripleo/+spec/custom-roles
>
> Some patches still need to be fixed or written to enable custom-roles -
> it's a stretch but I'd say a FFE may be reasonable provided we can get the
> remaining patches functional in the next few days (I'm planning to do this)
>
> 3. Contrail integration
>
> There are patches posted for this, but they need work - Carlos is helping
> so I'd suggest it should be possible to land these as a FFE (should be low
> risk as it's all disabled by default)
>
> https://blueprints.launchpad.net/tripleo/+spec/contrail-services
>
> These are the main features I'm aware of that are targetted to newton-3
> but will probably slip, are there others folks want to raise?
>
Well, https://review.openstack.org/#/c/282307/ is still in progress and
will need an FFE. What's the official process to apply for the exception?

>
> 
> Bugs
> 
>
> Any bugs not fixed by newton-3 will be deferred to an RC1 milestone I
> created, so that we can track remaining release-blocker bugs in the weeks
> leading to the final release.  Please ensure all bugs are targetted to this
> milestone so we don't miss them.
>
> https://launchpad.net/tripleo/+milestone/newton-rc1
>
> Please let me know if there are any questions or concerns, and thanks to
> everyone for all the help getting to this point, it's been a tough but
> productive cycle, and I'm looking forward to reaching our final newton
> release! :)
>
> Thanks,
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] Plans re newton-3 release and feature freeze exceptions

2016-08-29 Thread Julie Pichon
On 26 August 2016 at 19:04, James Slagle  wrote:
> On Fri, Aug 26, 2016 at 12:14 PM, Steven Hardy  wrote:
>>
>> 1. Mistral API
>>
>> We've made good progress on this over recent weeks, but several patches
>> remain - this is the umbrella BP, and it links several dependent BPs which
>> are mostly posted but need code reviews, please help by testing and
>> reviewing these:
>>
>> https://blueprints.launchpad.net/tripleo/+spec/mistral-deployment-library
>
> Based on what's linked off of that blueprint, here's what's left:
>
> https://blueprints.launchpad.net/tripleo/+spec/cli-deployment-via-workflow
> topic branch: 
> https://review.openstack.org/#/q/status:open+project:openstack/python-tripleoclient+branch:master+topic:deploy
> 5 patches, 2 are marked WIP, all need reviews
>
> https://blueprints.launchpad.net/tripleo-ui/+spec/tripleo-ui-mistral-refactoring
> topic branch: 
> https://review.openstack.org/#/q/topic:bp/tripleo-ui-mistral-refactoring
> 1 tripleo-ui patch
> 1 tripleo-common patch that is Workflow -1
> 1 tripleoclient patch that I just approved

Thank you for moving the tripleoclient patch forward. The Workflow-1'd
patch can be ignored, I updated the topic name to avoid confusion.
(The approach was initially rejected but there's been small voices of
"maybe it would be handy after all" coming up afterwards; I'm keeping
it around to do some rework and perhaps re-discuss it at a later
point.)

Thanks,

Julie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Abhilash Goyal
Hi Chang,

I am using
https://downloads.openwrt.org/chaos_calmer/15.05/x86/kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz
image of openWRT.
This feature is not working for Cirros either.

On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang  wrote:

> Hi, Goyal.
>
> What version about your OpenWRT image? You can get OpenWRT image from
>  this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA
>
>
> Thanks
> Zhi Chang
>
> -- Original --
> *From: * "Abhilash Goyal";
> *Date: * Mon, Aug 29, 2016 05:18 PM
> *To: * "openstack-dev";
> *Subject: * [openstack-dev] tacker vnf-create is not bringing up all
> theinterfaces
>
> [Tacker]
> Hello team,
> I am trying to make an OpenWRT VNF through tacker using this vnf-d
> .
> VNF is spawning successfully, but expected VNF should have 3 connecting
> points with first one in management network, but this is not happening. It
> is getting up with default network configuration because of this, IPs are
> not getting assigned to it automatically.
> Guidance would be appreciated.
>
> --
> Abhilash Goyal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Abhilash Goyal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-29 Thread Lucas Alvares Gomes
Hi,

I overall agree with the proposed plan. I like the idea of having a
"supported" flag (or another name as per Kurt's email) that makes it
easy mark a driver as "unsupported" indicating it might be removed
soon.

About point #3 I'm indifferent, it's a common approach in the project
to log a warning + release notes to mark something as deprecated and I
don't see the real benefit of having an extra Boolean flag to be able
to enable certain drivers, it just sounds like an extra pain. If a
driver is deprecated in the previous cycle and that
"enable_unsupported_drivers" is set to False (the default) after the
upgrading the Ironic services the conductor will fail to start and
operators will most likely just set it to True straight away. It's not
that trivial to replace the deprecated driver on all nodes that are
using it (specially if they are active) and IMO, only having the
warning message (and release notes) is enough and give people enough
time to replace the drivers when possible.
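
To spell out the difference, the start-up check proposed in point #3 is
roughly the following (a rough sketch of how I read it, not actual ironic
code); with the warning-only approach I'm arguing for, the 'raise' branch
simply goes away:

    # Rough sketch of the proposed behaviour -- not actual ironic code.
    import logging

    LOG = logging.getLogger(__name__)

    def check_enabled_drivers(enabled_drivers, loaded_drivers,
                              enable_unsupported_drivers=False):
        """Fail or warn at conductor start-up based on the 'supported' flag."""
        for name in enabled_drivers:
            driver = loaded_drivers[name]
            if getattr(driver, "supported", True):
                continue
            if not enable_unsupported_drivers:
                raise RuntimeError("driver %s is unsupported and "
                                   "enable_unsupported_drivers is False" % name)
            LOG.warning("driver %s is unsupported and may be removed soon", name)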

Cheers,
Lucas

On Tue, Aug 23, 2016 at 3:55 PM, Vladyslav Drok  wrote:
>
> On Mon, Aug 22, 2016 at 9:48 PM, Loo, Ruby  wrote:
>>
>> Hi,
>>
>>
>>
>> I admit, I didn't read the entire thread [0], but did read the summary
>> [1]. I like this, except that I'm not sure about #3. What's the rationale of
>> adding a new config option 'enable_unsupported_drivers' that defaults to
>> False. Versus not having it, and "just" logging a warning if they are
>> loading an unsupported (in-tree) driver?
>>
>>
>>
>> If I understand this...
>>
>>
>>
>> If we have 'enable_unsupported_drivers': since it defaults to False, the
>> conductor will fail on startup, if an unsupported driver is in the
>> enabled_drivers config. (The conductor will emit a warning in the logs, or
>> maybe it won't?) The operator (if they haven't changed it), will now change
>> it to True if they want to use any unsupported drivers. The conductor will
>> start up and emit a warning in the logs.
>>
>>
>>
>> If we don't have an enable_unsupported_drivers config, will the conductor
>> start up and emit a warning in the logs?
>
>
> We have not added any deprecation warnings to drivers. I think that just
> gives a bit more time to switch to other drivers and it will make the
> removal more visible, as the current spec states: "Third party driver teams
> that do not implement a reliable reporting CI test system by the N release
> feature freeze (see Deliverable milestones above) will be removed from the
> ironic source tree.", IIUC meaning that conductor will fail startup not
> being able to find the removed code.
>
>>
>>
>>
>> I was also wondering, where is the value for the 'supported' flag for each
>> driver going to be kept? Hard-coded in the driver code?
>
>
> Yep, seems like it.
>
>>
>>
>>
>> --ruby
>>
>>
>>
>>
>>
>> On 2016-08-19, 10:15 AM, "Jim Rollenhagen"  wrote:
>>
>>
>>
>> Hi Ironickers,
>>
>>
>>
>> There was a big thread here[0] about Cinder, driver removal, and standard
>>
>> deprecation policy. If you haven't read through it yet, please do before
>>
>> continuing here. :)
>>
>>
>>
>> The outcome of that thread is summarized well here.[1]
>>
>>
>>
>> I know that I previously had a different opinion on this, but I think we
>>
>> should go roughly the same route, for the sake of the users.
>>
>>
>>
>> 1) A ``supported`` flag for each driver that is True if and only if the
>> driver
>>
>>is tested in infra or third-party CI (and meets our third party CI
>>
>>requirements).
>>
>> 2) If the supported flag is False for a driver, deprecation is implied
>> (and
>>
>>a warning is emitted at load time). A driver may be removed per
>> standard
>>
>>deprecation policies, with turning the supported flag False to start
>> the
>>
>>clock.
>>
>> 3) Add a ``enable_unsupported_drivers`` config option that allows enabling
>>
>>drivers marked supported=False. If a driver is in enabled_drivers, has
>>
>>supported=False, and enable_unsupported_drivers=False, ironic-conductor
>>
>>will fail to start. Setting enable_unsupported_drivers=True will allow
>>
>>ironic-conductor to start with warnings emitted.
>>
>>
>>
>> It is important to note that (3) does still technically break the standard
>>
>> deprecation policy (old config may not work with new version of ironic).
>>
>> However, this is a much softer landing than the original plan. FWIW, I do
>>
>> expect (but not hope!) this part will be somewhat contentious.
>>
>>
>>
>> I'd like to hear thoughts and get consensus on this from the rest of the
>>
>> ironic community, so please do reply whether you agree or disagree.
>>
>>
>>
>> I'm happy to do the work required (update spec, code patches, doc updates)
>>
>> when we do come to agreement.
>>
>>
>>
>> // jim
>>
>>
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101898.html
>>

Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-29 Thread Yujun Zhang
Thanks, Alexey. Points 1 and 3 are pretty clear.

As for point 2, if I understand it correctly, you are suggesting modifying
the static_physical.yaml as follows:

entities:
  - type: switch
    name: switch-1
    id: switch-1 # should be same as name
    state: available
    relationships:
      - type: nova.host
        name: host-1
        id: host-1 # should be same as name
        is_source: true # entity is `source` in this relationship
        relation_type: attached
      - type: switch
        name: switch-2
        id: switch-2 # should be same as name
        is_source: false # entity is `target` in this relationship
        relation_type: backup
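
On the transformer side I would expect the new field to be consumed by
something as simple as this (just a sketch of the idea with a hypothetical
helper, not actual vitrage code):

    # Sketch only -- not actual vitrage code; a hypothetical helper showing
    # how 'is_source' could decide the edge direction.
    def resolve_edge(entity_id, neighbor):
        relation_type = neighbor["relation_type"]
        if neighbor.get("is_source", False):
            # neighbor --[relation_type]--> entity
            return neighbor["id"], entity_id, relation_type
        # entity --[relation_type]--> neighbor
        return entity_id, neighbor["id"], relation_type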

But I wonder why the static physical configuration file uses a different
format from the vitrage template definitions [1].

[1]
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst

On Sun, Aug 28, 2016 at 4:14 PM Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi Yujun,
>
>
>
> In order for the static_physical to work for different datasources without
> the restrictions, you need to do the following changes:
>
> Go to the static_physical transformer:
>
> 1.   Remove the methods: _register_relations_direction,
> _find_relation_direction_source.
>
> 2.   Add to the static_physical.yaml for each definition also a field
> for direction which will indicate the source and the destination between
> the datasources.
>
> 3.   In method: _create_neighbor, remove the usage of method
> _find_relation_direction_source, and use the new definition from the yaml
> file here to decide the edge direction.
>
>
>
> Is it ok?
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Friday, August 26, 2016 4:22 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Lost in the code...It seems the datasource just construct the entities and
> send them over event bus to entity graph processor. I need to dig further
> to find out the exact point the "backup" relationship is filtered.
>
>
>
> I think we should some how keep the validation of relationship type. It is
> so easy to make typo when creating the template manually (I did this quite
> often...).
>
>
>
> My idea is to delegate the validation to datasource instead of enumerating
> all constants it in evaluator. I think this will introduce better
> extensibility. Any comments?
>
>
>
> On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> You can find the names of the lables in the constants.py file.
>
>
>
> In addition, the restriction on the physical_static datasource is done in
> it’s driver.py.
>
>
>
> Alexey
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Thursday, August 25, 2016 4:50 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Hi, Ifat,
>
>
>
> I searched for edge_labels in the project. It seems it is validated only
> in `vitrage/evaluator/template_validation/template_syntax_validator.py`.
> Where is such restriction applied in static_datasources?
>
>
>
> --
>
> Yujun
>
>
>
> On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> Indeed, we have some restrictions on the relationship types that can be
> used in the static datasources. I think we should remove these
> restrictions, and allow any kind of relationship type.
>
>
>
> Best regards,
>
> Ifat.
>
>
>
> *From: *Yujun Zhang
> *Date: *Monday, 22 August 2016 at 08:37
>
> I'm following the sample configuration in docs [1] to verify how static
> datasources works.
>
>
>
> It seems `backup` relationship is not displayed in the entity graph view
> and neither is it included in topology show.
>
>
>
> There is an enumeration for edge labels [2]. Should relationship in static
> datasource be limited to it?
>
>
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/static-physical-config.rst
>
> [2]
> https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L49
>

Re: [openstack-dev] [neutron] neutronclient check queue is broken

2016-08-29 Thread Ihar Hrachyshka

Assaf Muller  wrote:

On Thu, Aug 25, 2016 at 6:58 AM, Ihar Hrachyshka   
wrote:

Akihiro Motoki  wrote:


In the neutronclient check queue,
gate-neutronclient-test-dsvm-functional is broken now [1].
Please avoid issuing 'recheck'.

[1] https://bugs.launchpad.net/python-neutronclient/+bug/1616749

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:  
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The proposed fix (removing the tests for lbaasv1) made me wonder why we don’t
gate the client’s master branch against neutron stable branches. Isn’t that a
test matrix gap that could allow a new client to introduce a regression that
would break interactions with older clouds?


Absolutely. Feel free to send a project-config patch :)


Arie Bregman did: https://review.openstack.org/#/c/361603/

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Zhi Chang
Hi, Goyal.


What version is your OpenWRT image? You can get an OpenWRT image from
this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA
 


Thanks
Zhi Chang
 
-- Original --
From:  "Abhilash Goyal";
Date:  Mon, Aug 29, 2016 05:18 PM
To:  "openstack-dev"; 

Subject:  [openstack-dev] tacker vnf-create is not bringing up all the interfaces

 
[Tacker] Hello team,
I am trying to create an OpenWRT VNF through Tacker using this VNFD. The VNF is
spawning successfully, but the expected VNF should have 3 connection points,
with the first one in the management network; this is not happening. It comes
up with the default network configuration, and because of this, IPs are not
getting assigned to it automatically.
Guidance would be appreciated.


-- 
Abhilash Goyal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-29 Thread Abhilash Goyal
[Tacker]
Hello team,
I am trying to create an OpenWRT VNF through Tacker using this VNFD.
The VNF is spawning successfully, but the expected VNF should have 3 connection
points, with the first one in the management network; this is not happening. It
comes up with the default network configuration, and because of this, IPs are
not getting assigned to it automatically.
Guidance would be appreciated.

-- 
Abhilash Goyal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Live-migration subteam meeting

2016-08-29 Thread Timofei Durakov
Hello,

The next meeting will be on August 30 at 14:00 UTC, as usual. The agenda is available
here:
https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration#Agenda_for_next_meeting


If you have topics to discuss, please update the agenda accordingly.

Timofey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-29 Thread hie...@vn.fujitsu.com
Hi,

The Magnum api-ref work can be reviewed at [1]. Please take a look and help get
these patches merged ASAP.

[1]. https://blueprints.launchpad.net/magnum/+spec/magnum-doc-rest-api

Thanks,
Hieu LE.

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: Sunday, August 21, 2016 8:20 AM
To: Shuu Mutou 
Cc: m...@redhat.com; Haruhiko Katou ; 
openstack-dev@lists.openstack.org; openstack-d...@lists.openstack.org; 
kenichi.omi...@necam.com
Subject: Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for 
API docs



On Fri, Aug 19, 2016 at 2:27 AM, Shuu Mutou wrote:
>   AFAIK, the API WG adopted Swagger (OpenAPI) as common tool for API
> docs.
>   Anne, has not the adoption been changed? Otherwise, do we need to
> maintain much RST files also?
>
>
>
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html

Yes. Ken'ichi Omichi said so as well.


> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.

Yes. Correct information is the best information. Accuracy is more important
than the web experience. When I was a user (and SIer), document accuracy was not
maintained, so in the end we had to read the source code. Now, as a developer
(mainly of UI plugins), I don't want to maintain overlapping content in several
places (API source code, API reference, help in the client, help in the web UI,
etc.), so I am putting my effort into spec auto-generation.


> I'm reporting what I'm seeing from a broader viewpoint than a single project.
> I don't have a solution other than RST/YAML for common navigation, and I'm
> asking you to provide ideas for that integration point.
>
> My vision is that even if you choose to publish with OpenAPI, you would
> find a way to make this web experience better. We can do better than this
> scattered approach. I'm asking you to find a way to unify and consider the
> web experience of a consumer of OpenStack services. Can you generate HTML
> that can plug into the openstackdocstheme we are providing as a common tool?

I need to know more about the "common tools". Please let me know: what is the
difference between the HTML built by Lars's patch and the HTML built by the
common tools? Or can fairy-slipper do that from an OpenAPI file?

Sure, sounds like there's some info missing that I can clarify.

All HTML built for OpenStack sites is copied via FTP. There's no difference
except for the CSS and JavaScript provided by openstackdocstheme and built by
os-api-ref.

Fairy-slipper is no longer being worked on as a common solution to serving all 
OpenStack API information. It was used for migration purposes.

Lars's patch could find a way to use the CSS and JS to create a seamless 
experience for end-users.

Anne



Thanks,
Shu


> -Original Message-
> From: Anne Gentle [mailto:annegen...@justwriteclick.com]
> Sent: Wednesday, August 17, 2016 11:55 AM
> To: Mutou Shuu (武藤 周)
> Cc: openstack-dev@lists.openstack.org; m...@redhat.com;
> Katou Haruhiko (加藤 治彦); openstack-d...@lists.openstack.org;
> kenichi.omi...@necam.com
> Subject: Re: [OpenStack-docs] [openstack-dev] [Magnum] Using common
> tooling for API docs
>
>
>
> On Tue, Aug 16, 2016 at 1:05 AM, Shuu Mutou wrote:
>
>
>   Hi Anne,
>
>   AFAIK, the API WG adopted Swagger (OpenAPI) as common tool for API
> docs.
>   Anne, has not the adoption been changed? Otherwise, do we need to
> maintain much RST files also?
>
>
>
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html
>
>
>
>   IMO, so that the reference and the source code don't conflict, they should
> be kept as close to each other as possible, as follows. This decreases the
> maintenance cost of the documents and increases their reliability. So I
> believe our approach is more ideal.
>
>
>
>
> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.
>
>
>   The best: references generated from the source code.
>
>
>
> I don't want to argue, but anything generated from the source code suffers
> if the source code changes in a way that reviewers don't catch; with a
> backwards-incompatible change you can break your contract.
>
>
>   Better: references written in docstrings.
>
>   We know some projects abandoned this approach and then moved to
> RST + YAML.
>   But we 

[openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Zhenyu Zheng
Hi, all

Currently we have customer demand for adding a "volume_type" parameter to
--block-device, to support specifying the storage backend when booting an
instance. I found a newly drafted blueprint aiming to address the same feature:
https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type

As I understand it, this is a kind of "proxy" feature for Cinder, and we don't
like those in general; but since the boot-from-volume functionality is already
there, maybe it is OK to support one more parameter?

So, my question is: what are your opinions about this in general? Do you like
it, or would it not be able to get approved at all?
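
To make the idea concrete, here is a minimal sketch of one
block_device_mapping_v2 entry in a boot request, written as a Python dict; the
volume_type key is only the proposed addition and does not exist in the current
API:

    # One entry of block_device_mapping_v2 in a POST /servers request body.
    bdm = {
        'boot_index': 0,
        'uuid': 'IMAGE_UUID',            # image to create the boot volume from
        'source_type': 'image',
        'destination_type': 'volume',
        'volume_size': 20,
        'delete_on_termination': True,
        'volume_type': 'ssd',            # proposed new key, not in the current API
    }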

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-29 Thread Steven Hardy
On Mon, Aug 29, 2016 at 07:07:09AM +0200, Thomas Herve wrote:
> On Sun, Aug 28, 2016 at 11:58 PM, Steven Hardy  wrote:
> > Hi all,
> >
> > I have a need to merge a list of maps of lists:
> >
> > heat_template_version: 2016-10-14
> >
> > outputs:
> >   debug:
> > value:
> >   yaql:
> > # dict(vms=>dict($.vms.select([$.name, $])))
> > expression: dict($.data.l.select([$.keys().toList()[0],
> > $.values().toList()[0]]))
> > data:
> >   l:
> > - a: [123]
> > - b: [123]
> > - a: [456]
> >
> >
> >
> > I want to end up with debug as:
> >
> >   a: [123, 456]
> >   b: [123]
> >
> > Perhaps we need a map_deep_merge function, but can this be done with yaql?
> >
> > I suspect it can, but can't currently figure out how the assignment to the
> > intermediate "a" value is supposed to work, any ideas on the cleanest
> > approach appreciated!
> 
> I believe you don't need the intermediate value, and can rely on what
> you'd do in Python with setdefault:
> 
> dict($.groupBy($.keys().toList()[0], $.values().toList()[0][0]))
> 
> ought to work, I believe?

Indeed it does work, thanks!  groupBy is what I was missing :)

For anyone following, here's the updated template, and the output it
gives:

heat_template_version: 2016-10-14

outputs:
  debug:
value:
  yaql:
expression: dict($.data.l.groupBy($.keys().toList()[0],
$.values().toList()[0][0]))
data:
  l:
- a: [123]
- b: [123]
- a: [456]

[stack@instack ~]$ heat output-show foo debug
{
  "a": [
123,
456
  ], 
  "b": [
123
  ]
}
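
For reference, the same expression can be evaluated outside Heat with the yaql
Python library (a minimal sketch, assuming yaql is installed; '$' is bound to
the value passed via data=, so '$.l' takes the place of '$.data.l'):

    from yaql import factory

    engine = factory.YaqlFactory().create()
    data = {'l': [{'a': [123]}, {'b': [123]}, {'a': [456]}]}
    expr = engine('dict($.l.groupBy($.keys().toList()[0], $.values().toList()[0][0]))')
    # Expected result: {'a': [123, 456], 'b': [123]}
    print(expr.evaluate(data=data))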

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][FFE]Support a param to specify subnet or fixed IP when creating port

2016-08-29 Thread Kenji Ishii
Hi, horizoners

I'd like to request a feature freeze exception for this feature.
(This is filed as a bug ticket, but its contents describe a new feature.)
https://bugs.launchpad.net/horizon/+bug/1588663

This is implemented by the following patch.
https://review.openstack.org/#/c/325104/

It is useful to be able to create a port using the subnet or IP address that a
user wants to use.
This has already been reviewed by many reviewers, so I think the risk in this
patch is very low.

---
Best regards,
Kenji Ishii

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-29 Thread Kekane, Abhishek
From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: Friday, August 26, 2016 10:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Reconciling flavors and block device 
mappings



On Fri, Aug 26, 2016 at 10:20 AM, Ed Leafe wrote:
On Aug 25, 2016, at 3:19 PM, Andrew Laski wrote:

> One other thing to note is that while a flavor constrains how much local
> disk is used it does not constrain volume size at all. So a user can
> specify an ephemeral/swap disk <= to what the flavor provides but can
> have an arbitrary sized root disk if it's a remote volume.

> This kind of goes to the heart of the argument against flavors being the sole 
> source of truth for a request. As cloud evolves, we keep packing more and 
> more stuff into a concept that was originally meant to only divide up 
> resources that came bundled together (CPU, RAM, and local disk). This hasn’t 
> been a good solution for years, and the sooner we start accepting that a 
> request can be much more complex than a flavor can adequately express, the 
> better.

> If we have decided that remote volumes are a good thing (I don’t think
> there’s any argument there), then we should treat that part of the request as
> being as fundamental as a flavor. We need to make the scheduler smarter so
> that it doesn’t rely on flavor as being the only source of truth.
> +1

We have done extensive testing with patch [1] and ensured that it does not
break anything. IMO this patch is the best solution so far, and there should
not be any issues in accepting it. Please review the patch and share your
opinions so that we can take the appropriate actions to get this resolved.

[1] https://review.openstack.org/#/c/200870/

Thank you,

Abhishek Kekane


The first step to improving Nova is admitting we have a problem. :)


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev