Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-05-31 Thread Joshua Harlow
Out of curiosity, what keeps on changing (breaking?) in Ansible such that 
something working in 2.0 doesn't work in 2.1? Isn't the 
point of minor version numbers that things within the same 
major version still actually work...


Steven Dake (stdake) wrote:

Hey folks,

In case you haven't been watching the review queue, Kolla has been
ported to Ansible 2.0. It does not work with Ansible 2.1, however.

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-31 Thread Shuu Mutou
I found some container-related names and checked whether any other project uses them.

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by any other project on PyPI or Launchpad.

e.g.:
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle
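
For what it's worth, such an availability check can be scripted. A minimal
sketch (assuming PyPI's JSON endpoint and plain Launchpad project URLs, where
an HTTP 404 means the name is unused):

import requests

def name_is_free(name):
    # Sketch only: treat a 404 from both sites as "name unused".
    pypi = requests.get('https://pypi.python.org/pypi/%s/json' % name)
    lp = requests.get('https://launchpad.net/%s' % name)
    return pypi.status_code == 404 and lp.status_code == 404

for candidate in ('straddle', 'suezmax', 'twistlock'):
    print(candidate, name_is_free(candidate))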


However, since the renaming window for the N cycle will be handled by the 
Infra team this Friday, we would not meet the deadline. So I propose that we:

1. use 'Higgins' ('python-higgins' as the package name)
2. consider another name for the next renaming chance (in about half a year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't seem
> to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately "higgins" is used by a media server project on Launchpad
> > and by CI software on PyPI. Currently, we use "python-higgins" for our
> > project on Launchpad.
> >
> > IMO, we should rename the project to keep the number of places needing
> > patches from growing.
> >
> > How about "Gatling"? It's just an association from Magnum. It's not used
> > on either Launchpad or PyPI.
> > Are there any other ideas?
> >
> > The renaming opportunity comes (it seems) only twice a year; the next is
> > Friday, June 3rd. A few projects will rename on this date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And once the project name issue is fixed, I'd like to propose a UI
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Doug Wiegley
Agreed.

doug

> On May 31, 2016, at 12:12 PM, Armando M.  wrote:
> 
> Hi folks,
> 
> Having looked at the recent commit volume that has been going into the *-aas 
> repos, I am considering changing the release model for neutron-vpnaas, 
> neutron-fwaas, neutron-lbaas from release:cycle-with-milestones [1] to 
> release:cycle-with-intermediary [2]. This change will allow us to avoid 
> publishing a release at fixed times when there's nothing worth releasing.
> 
> I'll follow up with a governance change, as I know of the imminent deadline 
> [3].
> 
> Thoughts?
> Armando
> 
> [1] 
> https://governance.openstack.org/reference/tags/release_cycle-with-milestones.html
>  
> 
> [2] 
> https://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html
>  
> 
> [3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095490.html 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Does Murano support version management?

2016-05-31 Thread Jay Lau
Hi,

I have a question about Murano: suppose I want to manage two different
versions of Spark packages. Can Murano let me create one application in the
application catalog but select a different Spark package version to
install?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-31 Thread Ben Pfaff
On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted an NSH support patch set last year, but the OVS
> community told me we have to push the kernel part into the Linux kernel
> tree. We're struggling to do this, but some things have blocked us.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, OVS made some changes to the tunnel protocols that require the
> packet decapsulated by a tunnel to be an Ethernet packet, but the Linux
> kernel (net-next) tree accepted a VxLAN-gpe patch set from Jiri Benc at
> Red Hat that requires the packet decapsulated by a VxLAN-gpe port to be
> an L3 packet, not an L2 Ethernet packet. This has blocked us from making
> progress.
> 
> Simon Horman (at Netronome) has posted a series of patches to remove
> the mandatory requirement from OVS so that the packet from a
> tunnel can be any packet, but so far we haven't seen them merged.

These are slowly working their way through OVS review, but these also
have a prerequisite on kernel patches, so it's not easy to get them in
either.

> I heard the OVS community looks forward to getting the NSH patches merged;
> it would be great if the OVS folks can help progress this.

I do plan to do my part in review (but much of this is kernel review,
which I'm not really involved in anymore).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Yuanying OTSUKA
Just F.Y.I.

When Magnum wanted to become “Container as a Service”,
there were some discussions about API design.

* https://etherpad.openstack.org/p/containers-service-api
* https://etherpad.openstack.org/p/openstack-containers-service-api



June 1, 2016 (Wed) 12:09 Hongbin Lu :

> Sheel,
>
>
>
> Thanks for taking the responsibility. Assigned the BP to you. As
> discussed, please submit a spec for the API design. Feel free to let us
> know if you need any help.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
> *Sent:* May-31-16 9:23 PM
> *To:* Hongbin Lu
> *Cc:* adit...@nectechnologies.in; vivek.jain.openst...@gmail.com;
> flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack
> Development Mailing List (not for usage questions); Chandan Kumar;
> hai...@xr.jp.nec.com; Qi Ming Teng; sitlani.namr...@yahoo.in; Yuanying;
> Kumari, Madhuri; yanya...@cn.ibm.com
> *Subject:* Re: [Higgins] Call for contribution for Higgins API design
>
>
>
> Dear Hongbin,
>
> I am interested in this.
> Thanks!!
>
> Best Regards,
> Sheel Rana
>
> On Jun 1, 2016 3:53 AM, "Hongbin Lu"  wrote:
>
> Hi team,
>
>
>
> As discussed in the last team meeting, we agreed to define core use cases
> for the API design. I have created a blueprint for that. We need an owner
> of the blueprint and it requires a spec to clarify the API design. Please
> let me know if you are interested in this work (it might require a significant
> amount of time to work on the spec).
>
>
>
> https://blueprints.launchpad.net/python-higgins/+spec/api-design
>
>
>
> Best regards,
>
> Hongbin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Ansible 2.0.0 functional

2016-05-31 Thread Steven Dake (stdake)
Hey folks,

In case you haven't been watching the review queue, Kolla has been ported to 
Ansible 2.0.  It does not work with Ansible 2.1, however.

Regards,
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-05-31 Thread Na Zhu
John,

Thanks.

Srilatha (srila...@us.ibm.com) and I want to work together with you; I 
know you have already done some development work.
Can you tell us what you have done and put the latest code in your private 
repo?
Can we work out a plan for the remaining work?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Ryan Moats 
Cc: OpenStack Development Mailing List 
, "disc...@openvswitch.org" 

Date:   2016/06/01 08:58
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC and OVN
Sent by:"discuss" 



Ryan,

More help is always great :-). As far as how to collaborate, whatever is 
easiest for everyone – I am pretty flexible.

Regards

John

From: Ryan Moats 
Date: Tuesday, May 31, 2016 at 1:59 PM
To: John McDowall 
Cc: Ben Pfaff , "disc...@openvswitch.org" <
disc...@openvswitch.org>, Justin Pettit , OpenStack 
Development Mailing List , Russell 
Bryant 
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN

John McDowall  wrote on 05/31/2016 
03:21:30 PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff , "disc...@openvswitch.org" 
> , Justin Pettit , 
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> Date: 05/31/2016 03:22 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> 
> Ryan,
> 
> Let me add the tables to OVN for SFC. That will give us a working 
> system to prototype the flow classifier approach on. Hopefully I can
> get something done by end of week.
> 
> Regards
> 
> John

I've got some internal folks that are willing to help with writing code 
(as
I will be once I clear my current firefights) so the question of how to
collaborate with code now arises...

Are you comfortable with putting the changes on r.o.o as WiP and patchworks
as RFC and working through the review process, or would you rather work via
forks and pull requests on GitHub?

Ryan

> From: Ryan Moats 
> Date: Tuesday, May 31, 2016 at 10:17 AM
> To: John McDowall 
> Cc: Ben Pfaff , "disc...@openvswitch.org" <
> disc...@openvswitch.org>, Justin Pettit , OpenStack
> Development Mailing List , Russell 
Bryant <
> russ...@ovn.org>
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> 
> John McDowall  wrote on 05/26/2016 
> 11:08:43 AM:
> 
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: Ben Pfaff , "disc...@openvswitch.org" 
> > , Justin Pettit , 
> > "OpenStack Development Mailing List"  > d...@lists.openstack.org>, Russell Bryant 
> > Date: 05/26/2016 11:09 AM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> > 
> > Ryan,
> > 
> > My (incomplete) thoughts about the flow-classifier are:
> > 
> > 1)  ACLs are more about denying access, while the flow classifier 
> > is more about steering selected traffic to a path, so we would need 
> > to deny-all except allowed flows.
> > 2)  The networking-sfc team has done a nice job with the drivers so 
> > ovn has its own flow-classifier driver which allows us to align the 
> > flow-classifier with the matches supported in ovs/ovn, which could 
> > be an advantage.
> 
> The ACL table has a very simple flow-classifier structure and I'd
> like to see if that can be re-used for the purpose of the SFC classifier
> (read that I feel the Logical_Flow_Classifier table is too complex).
> My initial thoughts were to look at extending the action column and
> using the external-ids field to differentiate between legacy ACLs and
> those that are used to intercept traffic and route it to an SFC.
> 
> > 
> > What were your thoughts on the schema? It adds a lot of tables and a 
> > lot of commands – I cannot think of any way around it.
> 
> In this case, I think that the other tables are reasonable and I'm 
> uncomfortable trying to stretch the existing tables to cover that
> information...
> 
> Ryan
> 
> > 
> > Regards
> > 
> > John
> > 
> > From: Ryan Moats 
> > Date: Wednesday, May 25, 2016 at 9:12 PM
> > To: John McDowall 
> > Cc: Ben Pfaff , "disc...@openvswitch.org" <
> > disc...@openvswitch.org>, Justin Pettit 

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-05-31 Thread Na Zhu
+ Add Srilatha.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)




[openstack-dev] [TripleO] [diskimage-builder] Howto refactor?

2016-05-31 Thread Andre Florath
Hello!

Currently I'm working on diskimage-builder.
My long-term goal is to add functionality to DIB's block
device layer, such as the ability to use multiple partitions, logical
volumes and mount points.
Initially I created a general partitioning element [1] and, based on
this, an experimental lvm element [2].
During the implementation I realized that it 'works somehow', but
there are a lot of drawbacks to implementing things as separate elements.
Because of this, I started to refactor the block device layer -
i.e. the portions of the DIB where the block devices for the VM image
are created and prepared. [3] [4] [5]  Currently I'm working on the
file system and mount layers.

I have the feeling that patches from other people - like [6] or [7]
- are somewhat blocked because of the uncertainty about how to continue
here (please correct me if I'm wrong).

Because I'm a newbie here, I'm asking you for help, support and
advice on how to continue. I see the following possibilities:
1. Start a discussion about the requirements
   Problem here: the requirements will not change during the
   refactoring phase - but maybe it's a good starting point.
2. Start a discussion about design
3. Continue the implementation and hope that the whole patch set
   gets accepted some time

But maybe there are more possibilities?

Any comment or review is welcome!

Kind regards

Andreas

P.S.: The technical details are described in the documentation of the
  appropriate patches.


[1] "New Element: partitioning" https://review.openstack.org/#/c/313938/
[2] "New Element: lvm" https://review.openstack.org/#/c/316529/
[3] "Refactor: Infrastructure" https://review.openstack.org/#/c/319591/
[4] "Refactor: Image creation" https://review.openstack.org/#/c/322397/
[5] "Refactor: Partitioning" https://review.openstack.org/#/c/322671/
[6] https://review.openstack.org/#/c/252041/
[7] https://review.openstack.org/#/c/287784/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Kumari, Madhuri
+1 for Eli. Great addition to the team.

Regards,
Madhuri

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Wednesday, June 1, 2016 8:11 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

+1 Welcome Eli

-- Dims

On Tue, May 31, 2016 at 9:22 PM, Yanyan Hu  wrote:
> +1, welcome, Eli :)
>
> 2016-06-01 7:07 GMT+08:00 Yuanying OTSUKA :
>>
>> +1, He will become a good contributor!
>>
>>
>>
>> June 1, 2016 (Wed) 7:14 Fei Long Wang :
>>>
>>> +1
>>>
>>>
>>> On 01/06/16 09:39, Hongbin Lu wrote:
>>>
>>> Hi team,
>>>
>>>
>>>
>>> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
>>> Normally, the requirement to join the core team is to consistently 
>>> contribute to the project for a certain period of time. However, 
>>> given the fact that the project is new and the initial core team was 
>>> formed based on a commitment, I am fine to propose a new core based 
>>> on a strong commitment to contribute plus a few useful 
>>> patches/reviews. In addition, Eli Qiao is currently a Magnum core 
>>> and I believe his expertise will be an asset to the Higgins team.
>>>
>>>
>>>
>>> According to the OpenStack Governance process [1], we require a 
>>> minimum of 4 +1 votes from existing Higgins core team within a 1 
>>> week voting window (consider this proposal as a +1 vote from me). A 
>>> vote of -1 is a veto. If we cannot get enough votes or there is a 
>>> veto vote prior to the end of the voting window, Eli is not able to 
>>> join the core team and needs to wait 30 days to reapply.
>>>
>>>
>>>
>>> The voting is open until Tuesday June 7th.
>>>
>>>
>>>
>>> [1] 
>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> --
>>> Cheers & Best regards,
>>> Fei Long Wang (王飞龙)
>>>
>>> 
>>> --
>>> Senior Cloud Software Engineer
>>> Tel: +64-48032246
>>> Email: flw...@catalyst.net.nz
>>> Catalyst IT Limited
>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>
>>> 
>>> --
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best regards,
>
> Yanyan
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-05-31 Thread Tan, Lin
Thanks Devananda for your suggestions. I opened a new bug for it.

I am asking because this is a task from the Newton summit: create a 
new command "for getting nodes out of stuck *ing states".
https://etherpad.openstack.org/p/ironic-newton-summit-ops
And we already have an RFE bug for this [1].

As Dmitry said, there is a big risk in removing the lock on nodes and marking 
them as deploy failed. But if the tool doesn't remove the lock, then 
users still cannot manipulate the node resource. So I want to involve more 
people in discussing the spec [2].

Considering Ironic already has _check_deploying_states() to recover the deploying 
state, should I focus on improving it?
Or is there still a need to create a new command?

B.R

Tan

[1]https://bugs.launchpad.net/ironic/+bug/1580931
[2]https://review.openstack.org/#/c/319812
-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com] 
Sent: Wednesday, June 1, 2016 3:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ironic] Tooling for recovering nodes

On 05/31/2016 01:35 AM, Dmitry Tantsur wrote:
> On 05/31/2016 10:25 AM, Tan, Lin wrote:
>> Hi,
>>
>> Recently, I have been working on a spec [1] to recover nodes which 
>> get stuck in the deploying state, so I would really appreciate feedback from you guys.
>>
>> Ironic nodes can be stuck in
>> deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the 
>> node is reserved by a dead conductor (the exclusive lock was not released).
>> Any further requests will be denied by ironic because it thinks the 
>> node resource is under control of another conductor.
>>
>> To be more clear, let's narrow the scope and focus on the deploying 
>> state first. Currently, people do have several choices to clear the reserved 
>> lock:
>> 1. restart the dead conductor
>> 2. wait up to 2 or 3 minutes and _check_deploying_states() will clear the 
>> lock.
>> 3. The operator touches the DB to manually recover these nodes.
>>
>> Option two looks very promising but there are some weaknesses:
>> 2.1 It won't work if the dead conductor was renamed or deleted.
>> 2.2 It won't work if the node's specific driver was not enabled on 
>> live conductors.
>> 2.3 It won't work if the node is in maintenance. (only a corner case).
> 
> We can and should fix all three cases.

2.1 and 2.2 appear to be a bug in the behavior of _check_deploying_status().

The method claims to do exactly what you suggest in 2.1 and 2.2 -- it gathers a 
list of Nodes reserved by *any* offline conductor and tries to release the lock.
However, it will always fail to update them, because objects.Node.release() 
raises a NodeLocked exception when called on a Node locked by a different 
conductor.

Here's the relevant code path:

ironic/conductor/manager.py:
1259 def _check_deploying_status(self, context):
...
1269 offline_conductors = self.dbapi.get_offline_conductors()
...
1273 node_iter = self.iter_nodes(
1274 fields=['id', 'reservation'],
1275 filters={'provision_state': states.DEPLOYING,
1276  'maintenance': False,
1277  'reserved_by_any_of': offline_conductors})
...
1281 for node_uuid, driver, node_id, conductor_hostname in node_iter:
1285 try:
1286 objects.Node.release(context, conductor_hostname, node_id)
...
1292 except exception.NodeLocked:
1293 LOG.warning(...)
1297 continue


As far as 2.3, I think we should change the query string at the start of this 
method so that it includes nodes in maintenance mode. I think it's both safe 
and reasonable (and, frankly, what an operator will expect) that a node which 
is in maintenance mode, and in DEPLOYING state, whose conductor is offline, 
should have that reservation cleared and be set to DEPLOYFAILED state.
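
For illustration, a minimal sketch of that adjustment against the excerpt
above; dropping the maintenance filter covers case 2.3, and the DEPLOYFAILED
step is an assumption about the eventual fix, not merged Ironic code:

def _check_deploying_status(self, context):
    offline_conductors = self.dbapi.get_offline_conductors()
    node_iter = self.iter_nodes(
        fields=['id', 'reservation'],
        # 'maintenance': False is dropped so nodes in maintenance mode
        # (case 2.3) are recovered as well.
        filters={'provision_state': states.DEPLOYING,
                 'reserved_by_any_of': offline_conductors})
    for node_uuid, driver, node_id, conductor_hostname in node_iter:
        try:
            # Release using the hostname that actually holds the lock.
            objects.Node.release(context, conductor_hostname, node_id)
            # Assumed follow-up: move the node to DEPLOYFAILED so the
            # operator sees a failure instead of a stuck DEPLOYING.
        except exception.NodeLocked:
            LOG.warning('Could not release node %s', node_uuid)
            continue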

--devananda

>>
>> Definitely we should improve option 2, but there could be 
>> more issues I don't know about in a more complicated environment.
>> So my question is: do we still need a new command to recover these 
>> nodes more easily without accessing the DB, like this PoC [2]:
>>   ironic-noderecover --node_uuids=UUID1,UUID2 
>> --config-file=/etc/ironic/ironic.conf
> 
> I'm -1 to anything silently removing the lock until I see a clear use 
> case which is impossible to improve within Ironic itself. Such a utility may 
> and will be abused.
> 
> I'm fine with anything that does not forcibly remove the lock by default.
> 
>>
>> Best Regards,
>>
>> Tan
>>
>>
>> [1] https://review.openstack.org/#/c/319812
>> [2] https://review.openstack.org/#/c/311273/
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Sheel,

Thanks for taking the responsibility. Assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in; vivek.jain.openst...@gmail.com; 
flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack Development 
Mailing List (not for usage questions); Chandan Kumar; hai...@xr.jp.nec.com; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Kumari, Madhuri; 
yanya...@cn.ibm.com
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" 
> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Davanum Srinivas
+1 Welcome Eli

-- Dims

On Tue, May 31, 2016 at 9:22 PM, Yanyan Hu  wrote:
> +1, welcome, Eli :)
>
> 2016-06-01 7:07 GMT+08:00 Yuanying OTSUKA :
>>
>> +1, He will become a good contributor!
>>
>>
>>
>> June 1, 2016 (Wed) 7:14 Fei Long Wang :
>>>
>>> +1
>>>
>>>
>>> On 01/06/16 09:39, Hongbin Lu wrote:
>>>
>>> Hi team,
>>>
>>>
>>>
>>> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
>>> Normally, the requirement to join the core team is to consistently
>>> contribute to the project for a certain period of time. However, given the
>>> fact that the project is new and the initial core team was formed based on a
>>> commitment, I am fine to propose a new core based on a strong commitment to
>>> contribute plus a few useful patches/reviews. In addition, Eli Qiao is
>>> currently a Magnum core and I believe his expertise will be an asset to
>>> the Higgins team.
>>>
>>>
>>>
>>> According to the OpenStack Governance process [1], we require a minimum
>>> of 4 +1 votes from existing Higgins core team within a 1 week voting window
>>> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If we
>>> cannot get enough votes or there is a veto vote prior to the end of the
>>> voting window, Eli is not able to join the core team and needs to wait 30
>>> days to reapply.
>>>
>>>
>>>
>>> The voting is open until Tuesday June 7th.
>>>
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> --
>>> Cheers & Best regards,
>>> Fei Long Wang (王飞龙)
>>>
>>> --
>>> Senior Cloud Software Engineer
>>> Tel: +64-48032246
>>> Email: flw...@catalyst.net.nz
>>> Catalyst IT Limited
>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>
>>> --
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best regards,
>
> Yanyan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-31 Thread Hongbin Lu
Shu,

According to the feedback from the last team meeting, Gatling doesn't seem to 
be a suitable name. Are you able to find an alternative name?

Best regards,
Hongbin

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: May-24-16 4:30 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou
> Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Hi all,
> 
> Unfortunately "higgins" is used by a media server project on Launchpad
> and by CI software on PyPI. Currently, we use "python-higgins" for our
> project on Launchpad.
> 
> IMO, we should rename the project to keep the number of places needing
> patches from growing.
> 
> How about "Gatling"? It's just an association from Magnum. It's not used
> on either Launchpad or PyPI.
> Are there any other ideas?
> 
> The renaming opportunity comes (it seems) only twice a year; the next is
> Friday, June 3rd. A few projects will rename on this date.
> http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
> And once the project name issue is fixed, I'd like to propose a UI
> subproject.
> 
> Thanks,
> Shu
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Sheel Rana Insaan
Eli seems a strong addition to the team; I am happy to have Eli Qiao's expertise
on the core team.

+1 from my side.

Best Regards,
Sheel Rana
On Jun 1, 2016 3:09 AM, "Hongbin Lu"  wrote:

> Hi team,
>
>
>
> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
> Normally, the requirement to join the core team is to consistently
> contribute to the project for a certain period of time. However, given the
> fact that the project is new and the initial core team was formed based on
> a commitment, I am fine to propose a new core based on a strong commitment
> to contribute plus a few useful patches/reviews. In addition, Eli Qiao is
> currently a Magnum core and I believe his expertise will be an asset to
> the Higgins team.
>
>
>
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes from existing Higgins core team within a 1 week voting window
> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If
> we cannot get enough votes or there is a veto vote prior to the end of the
> voting window, Eli is not able to join the core team and needs to wait 30
> days to reapply.
>
>
>
> The voting is open until Tuesday June 7th.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Yanyan Hu
+1, welcome, Eli :)

2016-06-01 7:07 GMT+08:00 Yuanying OTSUKA :

> +1, He will become a good contributor!
>
>
>
> June 1, 2016 (Wed) 7:14 Fei Long Wang :
>
>> +1
>>
>>
>> On 01/06/16 09:39, Hongbin Lu wrote:
>>
>> Hi team,
>>
>>
>>
>> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
>> Normally, the requirement to join the core team is to consistently
>> contribute to the project for a certain period of time. However, given the
>> fact that the project is new and the initial core team was formed based on
>> a commitment, I am fine to propose a new core based on a strong commitment
>> to contribute plus a few useful patches/reviews. In addition, Eli Qiao is
>> currently a Magnum core and I believe his expertise will be an asset to
>> the Higgins team.
>>
>>
>>
>> According to the OpenStack Governance process [1], we require a minimum
>> of 4 +1 votes from existing Higgins core team within a 1 week voting window
>> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If
>> we cannot get enough votes or there is a veto vote prior to the end of the
>> voting window, Eli is not able to join the core team and needs to wait 30
>> days to reapply.
>>
>>
>>
>> The voting is open until Tuesday June 7th.
>>
>>
>>
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> Cheers & Best regards,
>> Fei Long Wang (王飞龙)
>> --
>> Senior Cloud Software Engineer
>> Tel: +64-48032246
>> Email: flw...@catalyst.net.nz
>> Catalyst IT Limited
>> Level 6, Catalyst House, 150 Willis Street, Wellington
>> --
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,

Yanyan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Sheel Rana Insaan
Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu"  wrote:

> Hi team,
>
>
>
> As discussed in the last team meeting, we agreed to define core use cases
> for the API design. I have created a blueprint for that. We need an owner
> of the blueprint and it requires a spec to clarify the API design. Please
> let me know if you are interested in this work (it might require a significant
> amount of time to work on the spec).
>
>
>
> https://blueprints.launchpad.net/python-higgins/+spec/api-design
>
>
>
> Best regards,
>
> Hongbin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread John McDowall
Ryan,

More help is always great :-). As far as how to collaborate, whatever is 
easiest for everyone – I am pretty flexible.

Regards

John

From: Ryan Moats >
Date: Tuesday, May 31, 2016 at 1:59 PM
To: John McDowall 
>
Cc: Ben Pfaff >, 
"disc...@openvswitch.org" 
>, Justin Pettit 
>, OpenStack Development Mailing List 
>, 
Russell Bryant >
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall 
> wrote 
on 05/31/2016 03:21:30 PM:

> From: John McDowall 
> >
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff >, 
> "disc...@openvswitch.org"
> >, Justin Pettit 
> >,
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> >
> Date: 05/31/2016 03:22 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Let me add the tables to OVN for SFC. That will give us a working
> system to prototype the flow classifier approach on. Hopefully I can
> get something done by end of week.
>
> Regards
>
> John

I've got some internal folks that are willing to help with writing code (as
I will be once I clear my current firefights) so the question of how to
collaborate with code now arises...

Are you comfortable with putting the changes on r.o.o as WiP and patchworks
as RFC and working through the review process, or would you rather work via
forks and pull requests on GitHub?

Ryan

> From: Ryan Moats >
> Date: Tuesday, May 31, 2016 at 10:17 AM
> To: John McDowall 
> >
> Cc: Ben Pfaff >, 
> "disc...@openvswitch.org" <
> disc...@openvswitch.org>, Justin Pettit 
> >, OpenStack
> Development Mailing List 
> >,
>  Russell Bryant <
> russ...@ovn.org>
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall 
> > wrote 
> on 05/26/2016
> 11:08:43 AM:
>
> > From: John McDowall 
> > >
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: Ben Pfaff >, 
> > "disc...@openvswitch.org"
> > >, Justin Pettit 
> > >,
> > "OpenStack Development Mailing List"  > d...@lists.openstack.org>, Russell Bryant 
> > >
> > Date: 05/26/2016 11:09 AM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > My (incomplete) thoughts about the flow-classifier are:
> >
> > 1)  ACLs are more about denying access, while the flow classifier
> > is more about steering selected traffic to a path, so we would need
> > to deny-all except allowed flows.
> > 2)  The networking-sfc team has done a nice job with the drivers so
> > ovn has its own flow-classifier driver which allows us to align the
> > flow-classifier with the matches supported in ovs/ovn, which could
> > be an advantage.
>
> The ACL table has a very simple flow-classifier structure and I'd
> like to see if that can be re-used for the purpose of the SFC classifier
> (read that I feel the Logical_Flow_Classifier table is too complex).
> My initial thoughts were to look at extending the action column and
> using the external-ids field to differentiate between legacy ACLs and
> those that are used to intercept traffic and route it to an SFC.
>
> >
> > What were your thoughts on the schema? It adds a lot of tables and a
> > lot of commands – I cannot think of any way around it.
>
> In this case, I think that the other tables are reasonable and I'm
> uncomfortable trying to stretch the existing tables to cover that
> information...
>
> Ryan
>
> >
> > Regards
> >
> > John
> >
> > From: Ryan Moats >
> > Date: Wednesday, May 25, 2016 at 9:12 PM
> > To: John McDowall 
> > 

Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-31 Thread Steven Dake (stdake)


On 5/31/16, 1:42 PM, "Michał Jastrzębski"  wrote:

>I am opposed to this idea as I don't think we need this. We can solve
>many problems by using jinja2 to a greater extent. I'll publish a demo of a
>few improvements soon; please bear with me before we make an
>arch-changing call.

Can you make a specification please as you have asked me to do?

>
>On 29 May 2016 at 14:41, Steven Dake (stdake)  wrote:
>>
>>>On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:
>>>


On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
wrote:

>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake)
>
>wrote:
>> Hey folks,
>>
>> While Swapnil has been busy churning the dockerfile.j2 files to all
>>match
>> the same style, and we also had summit where we declared we would
>>solve
>>the
>> plugin problem, I have decided to begin work on a DSL prototype.
>>
>> Here are the problems I want to solve, in order of importance, by this work:
>>
>> - Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
>> - Provide a programmatic way to manage Dockerfile construction rather than
>>   a manual (with vi or emacs or the like) mechanism
>> - Allow complete overrides of every facet of Dockerfile construction, most
>>   especially repositories per container (rather than in the base container)
>>   to permit the use case of dependencies from one version with dependencies
>>   in another version of a different service
>> - Get out of the business of maintaining 100+ dockerfiles but instead
>>   maintain one master file which defines the data that needs to be used to
>>   construct Dockerfiles
>> - Permit different types of optimizations or Dockerfile building by changing
>>   around the parser implementation – to allow layering of each operation, or
>>   alternatively to merge layers as we do today
>>
>> I don't believe we can proceed with both binary and source plugins given
>> our current implementation of Dockerfiles in any sane way.
>>
>> I further don't believe it is possible to customize repositories & installed
>> files per container, which I receive increasing requests for offline.
>>
>> To that end, I've created a very very rough prototype which builds the base
>> container as well as a mariadb container.  The mariadb container builds and
>> I suspect would work.
>>
>> An example of the DSL usage is here:
>> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>>
>> A very poorly written parser is here:
>> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>>
>> I played around with INI as a format, to take advantage of oslo.config and
>> kolla-build.conf, but that didn't work out.  YML is the way to go.
>>
>> I'd appreciate reviews on the YML implementation especially.
>>
>> How I see this work progressing is as follows:
>>
>> - A yml file describing all docker containers for all distros is placed in
>>   kolla/docker
>> - The build tool adds an option --use-yml which uses the YML file
>> - A parser (such as load.py above) is integrated into build.py to lay down
>>   the Dockerfiles
>> - Wait 4-6 weeks for people to find bugs and complain
>> - Make --use-yml the default for 4-6 weeks
>> - Once we feel confident in the yml implementation, remove all
>>   Dockerfile.j2 files
>> - Remove the --use-yml option
>> - Remove all jinja2-isms from build.py
>>
>> This is similar to the work that took place to convert from raw Dockerfiles
>> to Dockerfile.j2 files.  We are just reusing that pattern.  Hopefully this
>> will be the last major refactor of the dockerfiles unless someone has some
>> significant complaints about the approach.
>> Regards
>> -steve

Hey folks,

I have produced a specification for Kolla's DSL (which I call Elemental).
The spec is ready for review here:
https://review.openstack.org/#/c/323612/


Regards
-steve
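
Purely to make the approach concrete, here is a minimal sketch of a
YAML-driven Dockerfile generator in the spirit of load.py; the YAML keys and
template below are invented for illustration and are not Elemental's actual
schema:

import jinja2
import yaml

# Invented schema -- not the real dsl.yml format.
DSL = """
containers:
  mariadb:
    base: centos
    packages: [mariadb-server, hostname]
    entrypoint: /usr/bin/mysqld_safe
"""

TEMPLATE = jinja2.Template("""\
FROM {{ base }}
RUN yum -y install {{ packages | join(' ') }} && yum clean all
CMD ["{{ entrypoint }}"]
""")

def render_dockerfiles(dsl_text):
    # One rendered Dockerfile body per container described in the DSL.
    spec = yaml.safe_load(dsl_text)
    return {name: TEMPLATE.render(**cfg)
            for name, cfg in spec['containers'].items()}

for name, body in render_dockerfiles(DSL).items():
    print('# Dockerfile for %s\n%s' % (name, body))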

>>
>>
>> On 5/27/16, 3:44 AM, "Britt Houser (bhouser)"  wrote:
>>
>>>I admit I'm not as knowledgable about the Kolla codebase as I'd like to
>>>be, so most of what you're saying is going over my head.  I think mainly
>>>I don't understand the problem statement.  It looks like you're pulling
>>>all the "hard coded" things out of the docker files, and making them
>>>user
>>>replaceable?  So the dockerfiles just become a list of required steps,
>>>and the user can change how each step is implemented?  Would this also
>>>unify the dockefiles so there wouldn't be a huge if statements between
>>>Centos and Ubuntu?
>>>
>>>Thx,
>>>Britt
>>>
>>
>> What is being pulled out is all of 

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-31 Thread Yang, Yi Y
Ben, yes, we submitted an NSH support patch set last year, but the OVS community 
told me we have to push the kernel part into the Linux kernel tree. We're 
struggling to do this, but some things have blocked us.

Recently, OVS made some changes to the tunnel protocols that require the packet 
decapsulated by a tunnel to be an Ethernet packet, but the Linux kernel (net-next) 
tree accepted a VxLAN-gpe patch set from Jiri Benc at Red Hat that requires the 
packet decapsulated by a VxLAN-gpe port to be an L3 packet, not an L2 Ethernet 
packet. This has blocked us from making progress.

Simon Horman (at Netronome) has posted a series of patches to remove the 
mandatory requirement from OVS so that the packet from a tunnel can be 
any packet, but so far we haven't seen them merged.

I heard the OVS community looks forward to getting the NSH patches merged; it 
would be great if the OVS folks can help progress this.

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org] 
Sent: Tuesday, May 31, 2016 10:38 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Mon, May 30, 2016 at 10:12:34PM -0400, Paul Carver wrote:
> I don't know the details of why OvS hasn't added NSH support so I 
> can't judge the validity of the concerns, but one way or another there 
> has to be a production-quality dataplane for networking-sfc to front-end.

It looks like the last time anyone submitted NSH patches to Open vSwitch was 
September 2015.  They got some reviews but no new version has been posted since.

Basically, we can't add NSH support if no one submits patches.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-05-31 Thread Paul Carver

On 5/26/2016 02:50, zhangyali (D) wrote:

I am interested in the VPNaaS project in Neutron. I notice that only the IPsec 
tunnel type has been completed, but other types of VPN, such as MPLS/BGP, have 
not. I'd like to know how the MPLS/BGP VPN work is going. What is the 
mechanism, or what extra work needs to be done?


For MPLS/BGP VPNs refer to the networking-bgpvpn project rather than VPNaaS.

http://docs.openstack.org/developer/networking-bgpvpn



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Yuanying OTSUKA
+1, He will become a good contributor!



June 1, 2016 (Wed) 7:14 Fei Long Wang :

> +1
>
>
> On 01/06/16 09:39, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
> Normally, the requirement to join the core team is to consistently
> contribute to the project for a certain period of time. However, given the
> fact that the project is new and the initial core team was formed based on
> a commitment, I am fine to propose a new core based on a strong commitment
> to contribute plus a few useful patches/reviews. In addition, Eli Qiao is
> currently a Magnum core and I believe his expertise will be an asset to
> the Higgins team.
>
>
>
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes from existing Higgins core team within a 1 week voting window
> (consider this proposal as a +1 vote from me). A vote of -1 is a veto. If
> we cannot get enough votes or there is a veto vote prior to the end of the
> voting window, Eli is not able to join the core team and needs to wait 30
> days to reapply.
>
>
>
> The voting is open until Tuesday June 7th.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Cheers & Best regards,
> Fei Long Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [oslo] Template to follow for policy support?

2016-05-31 Thread Jay Faulkner
Hi all,


During this cycle, on behalf of OSIC, I'll be working on implementing proper 
oslo.policy support for Ironic. The reasons this is needed probably don't need 
to be explained here, so I won't :).


I have two requests for the list regarding this though:


1) Is there a general guideline to follow when designing policy roles? There 
appears to have been some discussion around this already here: 
https://review.openstack.org/#/c/245629/, but it hasn't moved in over a month. 
I want Ironic's implementation of policy to be as 'standard' as possible; but 
I've had trouble finding any kind of standard.


2) A general call for contributors to help make this happen in Ironic. I want, 
in the next week, to finish up the research and start on a spec. Anyone willing 
to help with the design or implementation let me know here or in IRC so we can 
work together.


Thanks in advance,

Jay Faulkner


P.S. Yes, I am aware of 
http://specs.openstack.org/openstack/oslo-specs/specs/newton/policy-in-code.html
 and will ensure whatever Ironic does follows this specification.
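
For reference, the policy-in-code pattern registers rule defaults in code and
enforces them by name. A minimal sketch with oslo.policy; the rule names below
are invented for illustration, not Ironic's final policy:

from oslo_config import cfg
from oslo_policy import policy

rules = [
    policy.RuleDefault('admin_api', 'role:admin or role:administrator'),
    policy.RuleDefault('baremetal:node:get', 'rule:admin_api'),
]

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults(rules)

# creds normally come from the request context; the target identifies
# the object being acted on.
creds = {'roles': ['admin'], 'project_id': 'p1'}
print(enforcer.enforce('baremetal:node:get', {'project_id': 'p1'}, creds))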
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-05-31 Thread Joshua Harlow

Andrew Laski wrote:


On Tue, May 31, 2016, at 04:26 PM, Joshua Harlow wrote:

Timofei Durakov wrote:

Hi team,

there is a blueprint [1] that was approved during Liberty and resubmitted
to Newton (with a spec [2]).
The idea is to define state machines for operations such as live-migration,
resize, etc. and to deal with their operation states.
The spec and PoC patches are overall good. At the same time I think it will
be good to get agreement on the usage of state machines in Nova.
There are 2 options:

   * implement proposed change and use state machines to deal with states
 only

I think this is what could be called the Ironic equivalent, correct?

In ironic @
https://github.com/openstack/ironic/blob/master/ironic/common/states.py
the state machine here is used to ensure proper states are transitioned
over and no invalid/unexpected state transitions happen. The code though
itself still runs in a implicit fashion and afaik only interacts with
the state machine as a side-effect of actions occurring (instead of the
reverse where the state machine itself is 'driving' those actions to
happen/to completion).


Yes. This exists in a limited form already in Nova for instances and
task_states.


Right, I think I remember some attempts by some redhat folks to try to 
extract that information (I think it was via some complex grep scripts) 
into a state-table; don't quite think that ever got anywhere though 
(that I think was trying to create an equivalent of 
http://docs.openstack.org/developer/ironic/dev/states.html if I recall).


Maybe a first step in all this is to try to extract the task_states into 
an official state machine, ending up with something like 
https://github.com/openstack/ironic/blob/master/ironic/common/states.py 
(which is combined into a state machine at 
https://github.com/openstack/ironic/blob/master/ironic/common/states.py#L197 
...). Then associate that machine with an instance, and in all the prior 
locations where something like 'instance.task_state = XYZ' was happening, 
change that to instance.task_transition(new_task_state). That would use 
the state machine to validate allowed transitions (it would also help 
centralize what the valid states are, and as a side-effect nova could 
produce a similar svg diagram to the one ironic has).
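
As a rough sketch of that first step (this uses the automaton library, which 
ironic's states.py builds on; the states/events are an invented subset, not 
nova's real task_states):

    from automaton import machines

    m = machines.FiniteMachine()
    for state in ('none', 'migrating', 'post-migrating'):
        m.add_state(state)
    m.add_transition('none', 'migrating', 'start_migration')
    m.add_transition('migrating', 'post-migrating', 'source_done')
    m.add_transition('post-migrating', 'none', 'complete')

    m.initialize('none')
    m.process_event('start_migration')  # allowed: none -> migrating
    m.process_event('complete')         # raises NotFound: invalid from 'migrating'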


Might be useful to find out some of the pros/cons from the ironic folks, 
as they have gone through option #1 already (not many projects, maybe 
outside of heat, cue, octavia, ...?, have gone with option #2 from the 
start from what I can tell, although I would have liked them to, ha).





   o pros:
   + could be implemented/merged right now
   + cleans up states for migrations
   o cons:
   + the state machine only deals with states, and it will be hard to
 build the task API on top of it, as bp [1] was designed for
 another thing.

   * use state machines in Task API(which I'm going to work on during
 next release):

So this would be the second model described above, where the state
machine (or set of state machines) itself (together could be formed into
an action plan, or action workflow or ...) would be the 'entity'
realizing a given action and ensuring that it is performed until
completed (or tracking where it was paused and such); is that correct?


   o pros:
   + the Task API will orchestrate and deal with long-running tasks
   + using state machines could help with action
 rollbacks/retries/etc.
   o cons:
   + a large amount of work
   + requires time.

I'd like to discuss these options in this thread.

It seems like one could progress from the first model to the second one,
although that kind of progression would still be large (because if my
understanding is correct the control of who runs what has to be given
over to something else in the second model, similar to the control a
taskflow engine or mistral engine has over what it runs); said control
means that certain programming models may not map so well (from what I
have seen).


I think working through this as a progression from the first model to
the second one would be the best plan. Start with formalizing the states
and their allowed transitions and add checking and error handling around
that. Then work towards handing off control to an engine that could
drive the operation.


Timofey

[1] -
https://blueprints.launchpad.net/openstack/?searchtext=migration-state-machine
[2] - https://review.openstack.org/#/c/320849/


[openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of 
time to work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Fei Long Wang
+1

On 01/06/16 09:39, Hongbin Lu wrote:
>
> Hi team,
>
>  
>
> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
> Normally, the requirement to join the core team is to consistently
> contribute to the project for a certain period of time. However, given
> the fact that the project is new and the initial core team was formed
> based on a commitment, I am fine with proposing a new core based on a
> strong commitment to contribute plus a few useful patches/reviews. In
> addition, Eli Qiao is currently a Magnum core and I believe his
> expertise will be an asset to the Higgins team.
>
>  
>
> According to the OpenStack Governance process [1], we require a
> minimum of 4 +1 votes from existing Higgins core team within a 1 week
> voting window (consider this proposal as a +1 vote from me). A vote of
> -1 is a veto. If we cannot get enough votes or there is a veto vote
> prior to the end of the voting window, Eli is not able to join the
> core team and needs to wait 30 days to reapply.
>
>  
>
> The voting is open until Tuesday June 7th.
>
>  
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>  
>
> Best regards,
>
> Hongbin
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 



Re: [openstack-dev] [Nova] State machines in Nova

2016-05-31 Thread Andrew Laski


On Tue, May 31, 2016, at 04:26 PM, Joshua Harlow wrote:
> Timofei Durakov wrote:
> > Hi team,
> >
> > there is a blueprint[1] that was approved during Liberty and resubmitted
> > to Newton (with spec[2]).
> > The idea is to define state machines for operations such as live-migration,
> > resize, etc. and to deal with their operation states.
> > The spec and PoC patches are overall good. At the same time I think it will
> > be good to get agreement on the usage of state machines in Nova.
> > There are 2 options:
> >
> >   * implement proposed change and use state machines to deal with states
> > only
> 
> I think this is what could be called the ironic equivalent, correct?
> 
> In ironic @ 
> https://github.com/openstack/ironic/blob/master/ironic/common/states.py 
> the state machine here is used to ensure proper states are transitioned 
> over and no invalid/unexpected state transitions happen. The code itself 
> though still runs in an implicit fashion and afaik only interacts with 
> the state machine as a side-effect of actions occurring (instead of the 
> reverse where the state machine itself is 'driving' those actions to 
> happen/to completion).

Yes. This exists in a limited form already in Nova for instances and
task_states.

> 
> >   o pros:
> >   + could be implemented/merged right now
> >   + cleans up states for migrations
> >   o cons:
> >   + the state machine only deals with states, and it will be hard to
> > build the task API on top of it, as bp [1] was designed for
> > another thing.
> >
> >   * use state machines in Task API(which I'm going to work on during
> > next release):
> 
> So this would be the second model described above, where the state 
> machine (or set of state machines) itself (together could be formed into 
> an action plan, or action workflow or ...) would be the 'entity' 
> realizing a given action and ensuring that it is performed until 
> completed (or tracking where it was paused and such); is that correct?
> 
> >   o pros:
> >   + the Task API will orchestrate and deal with long-running tasks
> >   + using state machines could help with action
> > rollbacks/retries/etc.
> >   o cons:
> >   + a large amount of work
> >   + requires time.
> >
> > I'd like to discuss these options in this thread.
> 
> It seems like one could progress from the first model to the second one, 
> although that kind of progression would still be large (because if my 
> understanding is correct the control of who runs what has to be given 
> over to something else in the second model, similar to the control a 
> taskflow engine or mistral engine has over what it runs); said control 
> means that certain programming models may not map so well (from what I 
> have seen).

I think working through this as a progression from the first model to
the second one would be the best plan. Start with formalizing the states
and their allowed transitions and add checking and error handling around
that. Then work towards handing off control to an engine that could
drive the operation.
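
Even something as simple as a transition table checked on every state change 
would be a start (the states here are illustrative, not nova's full set):

    # Allowed task_state transitions; None means no task in progress.
    ALLOWED = {
        None: {'migrating', 'rebooting'},
        'migrating': {'post-migrating', None},
        'post-migrating': {None},
        'rebooting': {None},
    }

    def set_task_state(instance, new_state):
        if new_state not in ALLOWED.get(instance.task_state, set()):
            raise ValueError('invalid transition %s -> %s'
                             % (instance.task_state, new_state))
        instance.task_state = new_state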

> 
> >
> > Timofey
> >
> > [1] -
> > https://blueprints.launchpad.net/openstack/?searchtext=migration-state-machine
> > [2] - https://review.openstack.org/#/c/320849/
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Hongbin Lu
Hi team,

I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core. 
Normally, the requirement to join the core team is to consistently contribute 
to the project for a certain period of time. However, given the fact that the 
project is new and the initial core team was formed based on a commitment, I am 
fine with proposing a new core based on a strong commitment to contribute plus a 
few useful patches/reviews. In addition, Eli Qiao is currently a Magnum core 
and I believe his expertise will be an asset to the Higgins team.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from existing Higgins core team within a 1 week voting window (consider 
this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
enough votes or there is a veto vote prior to the end of the voting window, Eli 
is not able to join the core team and needs to wait 30 days to reapply.

The voting is open until Tuesday June 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-31 Thread Nikhil Komawar
Hey,


Thanks for your interest.

Sorry about the confusion. Please consider the same time for Thursday
June 9th.


Thursday June 9th proposed time:
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=9&hour=11&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78&p9=283


Alternate time proposal:
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=9&hour=23&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78&p9=283


Overall time planner:
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160609&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78&p9=283



It will really depend on who is strongly interested in the discussions.
Scheduling across EMEA, Pacific (US), and Australian (esp. Eastern) time zones is
quite difficult. If there's strong interest from San Jose, we may have
to settle for a rather awkward choice below:


http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=9&hour=4&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78&p9=283



A vote of +1, 0, -1 on these times would go a long way.


On 5/31/16 4:35 PM, Belmiro Moreira wrote:
> Hi Nikhil,
> I'm interested in this discussion.
>
> Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
> Are you also suggesting changing the date? Because the new
> timeanddate suggestions are for June 6/7.
>
> Belmiro
>
> On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar wrote:
>
> Hey,
>
>
>
>
>
> Thanks for the feedback. 0800UTC is 4am EDT for some of the US
> Glancers :-)
>
>
>
>
>
> I request this time which may help the folks in Eastern and Central US
>
> time.
>
> 
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=7&hour=11&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
>
>
>
> If it still does not work, I may have to poll the folks in EMEA on how
>
> strong their intentions are for joining this call.  Because
> another time
>
> slot that works for folks in Australia & US might be too inconvenient
>
> for those in EMEA:
>
> 
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=6&hour=23&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
>
>
>
> Here's the map of cities that may be involved:
>
> 
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
>
>
>
> Please let me know which ones are possible and we can try to work
> around
>
> the times.
>
>
>
>
>
> On 5/31/16 2:54 AM, Blair Bethwaite wrote:
>
> > Hi Nikhil,
>
> >
>
> > 2000UTC might catch a few kiwis, but it's 6am everywhere on the east
>
> > coast of Australia, and even earlier out west. 0800UTC, on the other
>
> > hand, would be more sociable.
>
> >
>
> >> On 26 May 2016 at 15:30, Nikhil Komawar wrote:
>
> >> Thanks Sam. We purposefully chose that time to accommodate some
> of our
>
> >> community members from the Pacific. I'm assuming it's just your
> case
>
> >> that's not working out for that time? So, hopefully other
> Australian/NZ
>
> >> friends can join.
>
> >>
>
> >>
>
> >> On 5/26/16 12:59 AM, Sam Morrison wrote:
>
> >>> I’m hoping some people from the Large Deployment Team can come
> along. It’s not a good time for me in Australia but hoping someone
> else can join in.
>
> >>>
>
> >>> Sam
>
> >>>
>
> >>>
>
>  On 26 May 2016, at 2:16 AM, Nikhil Komawar wrote:
>
> 
>
>  Hello,
>
> 
>
> 
>
>  Firstly, I would like to thank Fei Long for bringing up a few
> operator
>
>  centric issues to the Glance team. After chatting with him on
> IRC, we
>
>  realized that there may be more operators who would want to
> contribute
>
>  to the discussions to help us take some informed decisions.
>
> 
>
> 
>
>  So, I would like to call for a 2 hour sync for the Glance
> team along
>
>  with interested operators on Thursday June 9th, 2016 at 2000UTC.
>
> 
>
> 
>
>  If you are interested in participating please RSVP here [1], and
>
>  participate in the poll for the tool you'd prefer. I've also
> added a
>
>  section for Topics and provided a template to document the
> issues clearly.
>
> 
>
> 
>
>  Please be mindful of everyone's time and if you are proposing
> issue(s)
>
>  to be discussed, come prepared with well documented &
> referenced topic(s).
>
> 
>
> 
>
>  If you've feedback that you are not sure if appropriate for the
>
>  etherpad, you can reach me on irc (nick: nikhil).
>
> 
>
> 
>
>  [1]
> https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
>
> 
>
>  --
>
> 
>
>  Thanks,
>
>  Nikhil Komawar
>
>  Newton PTL for OpenStack Glance
>
>

Re: [openstack-dev] [Neutron] Question about service subnets spec

2016-05-31 Thread Brian Haley

Thanks Carl for bringing this up, comments below.

On 05/26/2016 02:04 PM, Carl Baldwin wrote:

Hi folks,

Some (but not all) of you will remember a discussion we had about
service subnets at the last mid-cycle.  We've been iterating a little
bit on a spec [1] and we have just one issue that we'd like to get a
little bit more feedback on.

As a summary:  To me, the idea of this spec is to reserve certain
subnets for certain kinds of ports.  For example, DVR FIP gateway
ports, and router ports.  The goal of this is to be able to use
subnets with private addresses for these kinds of ports instead of
wasting public IP addresses.

The remaining question is how to expose this through the API.  I had
thought about just attaching a list of device_owners to the subnet
resource.  If a list is attached, then only ports with the right
device_owner will be allocated IP addresses from that subnet.  I
thought this would be an easy way to implement it and I thought since
device owner is already exposed through the API, maybe it would be
acceptable.  However, there is some concern that this exposes too much
of the internal implementation.  I understand this concern.

At the mid-cycle we had discussed some enumeration values that
combined several types to avoid having to allow a list of types on a
subnet.  They were going to look like this:

   dvr_gateway -> ["network:floatingip_agent_gateway"]
   router_gateway -> ["network:floatingip_agent_gateway",
"network:router_gateway"]

The idea was that we'd only allow one value for a subnet and the
difference between the two would be whether you wanted router ports to
use private IPs.  I think it would be clearer if we just have simpler
definitions of types and allow a list of them.


Yes, this was the original plan - two values (well, three since None was 
default), each mapping to a set of owners.



At this point the enumeration values map simply to device owners.  For example:

   router_ports -> "network:router_gateway"
   dvr_fip_ports -> "network:floatingip_agent_gateway"

It was at this point that I questioned the need for the abstraction at
all.  Hence the proposal to use the device owners directly.


I would agree; I think having another name to refer to a device_owner makes it 
more confusing.  Using it directly lets us be flexible for deployers, and 
allows for using additional owner values if/when they are added.
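
To make the shape concrete, a subnet create request under the 
device_owner-list approach might look roughly like this (the attribute name 
and payload are illustrative only, not the final API):

    # Hypothetical python-neutronclient usage; 'service_types' is a
    # placeholder name for the list-of-device-owners attribute.
    body = {
        'subnet': {
            'network_id': network_id,
            'ip_version': 4,
            'cidr': '10.1.0.0/24',
            'service_types': ['network:floatingip_agent_gateway',
                              'network:router_gateway'],
        }
    }
    neutron_client.create_subnet(body)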



Armando expressed some concern about using the device owner as a
security issue.  We have the following policy on device_owner:

   "not rule:network_device or rule:context_is_advsvc or
rule:admin_or_network_owner"

At the moment, I don't see this as much of an issue.  Do you?


I don't, since only admins should be able to set device_owner to these values 
(that's the policy we're talking about here, right?).


To be honest, I think Armando's other comment - "Do we want to expose 
device_owner via the API or leave it an implementation detail?" - is important as 
well.  Even though I think an admin should know this level of neutron detail, 
will they really?  It's hard to answer that question being so close to the code.


-Brian


[1] 
https://review.openstack.org/#/c/300207/3/specs/newton/subnet-service-types.rst






Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-31 Thread James Slagle
On Mon, May 30, 2016 at 6:12 PM, Steve Baker  wrote:
> This raises the possibility of an alternative to OVB for trying/developing
> TripleO on a host cloud.
>
> If a vm version of the overcloud-full image is also generated then the host
> cloud can boot these directly. The approach above can then be used to treat
> these nodes as pre-existing nodes to adopt.
>
> I did this for a while configuring the undercloud nova to use the fake virt
> driver, but it sounds like the approach above doesn't interact with nova at
> all.

Correct, the nodes could come from anywhere. They could be prelaunched
instances on an OpenStack cloud, or any cloud for that matter. In fact,
I tested this out on the Rackspace public cloud by just launching 3
vanilla CentOS instances, installing an undercloud on one, and then
using the other 2 for the overcloud.

>
> So I'm +1 on this approach for *some* development environments too. Can you
> provide a list of the changes?

This is the primary patch to tripleo-heat-templates that enables it to work:
https://review.openstack.org/#/c/222772/

And a couple of other patches on the same topic branch:
https://review.openstack.org/#/q/topic:deployed-server

-- 
-- James Slagle
--



Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread Ryan Moats
John McDowall  wrote on 05/31/2016 03:21:30
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> "OpenStack Development Mailing List", Russell Bryant
> Date: 05/31/2016 03:22 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Let me add the tables to OVN for SFC. That will give us a working
> system to prototype the flow classifier approach on. Hopefully I can
> get something done by end of week.
>
> Regards
>
> John

I've got some internal folks that are willing to help with writing code (as
I will be once I clear my current firefights) so the question of how to
collaborate with code now arises...

Are you comfortable with putting the changes on r.o.o as WiP and patchworks
as RFC and work through the review process or would you rather work via
forks and pull requests in github?

Ryan

> From: Ryan Moats
> Date: Tuesday, May 31, 2016 at 10:17 AM
> To: John McDowall
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/26/2016
> 11:08:43 AM:
>
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> > "OpenStack Development Mailing List", Russell Bryant
> > Date: 05/26/2016 11:09 AM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > My (incomplete) throughts about the flow-classifier are:
> >
> > 1)  ACL’s are more about denying access, while the flow classifier
> > is more about steering selected traffic to a path, so we would need
> > to deny-all except allowed flows.
> > 2)  The networking-sfc team has done a nice job with the drivers so
> > ovn has its own flow-classifier driver which allows us to align the
> > flow-classifier with the matches supported in ovs/ovn, which could
> > be an advantage.
>
> The ACL table has a very simple flow-classifier structure and I'd
> like to see if that can be re-used for the purpose of the SFC classifier
> (read that I feel the Logical_Flow_Classifier table is too complex).
> My initial thoughts were to look at extending the action column and
> using the external-ids field to differentiate between legacy ACLs and
> those that are used to intercept traffic and route it to an SFC.
>
> >
> > What were your thoughts on the schema it adds a lot of tables and a
> > lot of commands – cannot think of anyway around it
>
> In this case, I think that the other tables are reasonable and I'm
> uncomfortable trying to stretch the existing tables to cover that
> information...
>
> Ryan
>
> >
> > Regards
> >
> > John
> >
> > From: Ryan Moats
> > Date: Wednesday, May 25, 2016 at 9:12 PM
> > To: John McDowall
> > Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> > Development Mailing List, Russell Bryant
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > John McDowall  wrote on 05/25/2016
> > 07:27:46 PM:
> >
> > > From: John McDowall
> > > To: Ryan Moats/Omaha/IBM@IBMUS
> > > Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List",
> > > Ben Pfaff, Justin Pettit, Russell Bryant
> > > Date: 05/25/2016 07:28 PM
> > > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> > >
> > > Ryan,
> > >
> > > Ok – I will let the experts weigh in on load balancing.
> > >
> > > In the meantime I have attached a couple of files to show where I am
> > > going. The first is sfc_dict.py and is a representation of the dict
> > > I am passing from SFC to OVN. This will then translate to the
> > > attached ovn-nb schema file.
> > >
> > > One of my concerns is that SFC almost doubles the size of the ovn-nb
> > > schema but I could not think of any other way of doing it.
> > >
> > > Thoughts?
> > >
> > > John
> >
> > The dictionary looks fine for a starting point, and the more I look
> > at the classifier, the more I wonder if we can't do something with
> > the current ACL table to avoid duplication in the NB database

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread Ryan Moats
John McDowall  wrote on 05/31/2016 03:19:54
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> "OpenStack Development Mailing List", Russell Bryant
> Date: 05/31/2016 03:20 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Hopefully – just wanted to make sure it was there.
>
> Regards
>
> John

I think having that as one of the tests to make sure is a good idea...

Ryan

>
> From: Ryan Moats
> Date: Tuesday, May 31, 2016 at 10:02 AM
> To: John McDowall
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/26/2016
> 10:59:48 AM:
>
> > From: John McDowall
> > To: Ryan Moats/Omaha/IBM@IBMUS, Ben Pfaff
> > Cc: "disc...@openvswitch.org", Justin Pettit,
> > OpenStack Development Mailing List, Russell Bryant
> > Date: 05/26/2016 11:00 AM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Agree with your description of the problem. The only thing I would
> > add is that in the case of bi-directional chains the return flows
> > need to go through the same VNF(Port-pair).
>
> I'm pretty sure that is caught automagically, isn't it?
>
> Ryan
>
> >
> > Regards
> >
> > John
> >
> > From: Ryan Moats
> > Date: Wednesday, May 25, 2016 at 9:29 PM
> > To: Ben Pfaff
> > Cc: "disc...@openvswitch.org", John McDowall, Justin Pettit,
> > OpenStack Development Mailing List, Russell Bryant
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ben Pfaff  wrote on 05/25/2016 07:44:43 PM:
> >
> > > From: Ben Pfaff
> > > To: Ryan Moats/Omaha/IBM@IBMUS
> > > Cc: John McDowall, "disc...@openvswitch.org", OpenStack
> > > Development Mailing List, Justin Pettit, Russell Bryant
> > > Date: 05/25/2016 07:44 PM
> > > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> > >
> > > On Wed, May 25, 2016 at 09:27:31AM -0500, Ryan Moats wrote:
> > > > As I understand it, Table 0 identifies the logical port and logical
> > > > flow. I'm worried that this means we'll end up with separate bucket
> > > > rules for each ingress port of the port pairs that make up a port
> > > > group, leading to a cardinality product in the number of rules.
> > > > I'm trying to think of a way where Table 0 could identify the
packet
> > > > as being part of a particular port group, and then I'd only need
one
> > > > set of bucket rules to figure out the egress side.  However, the
> > > > amount of free metadata space is limited and so before we go down
> > > > this path, I'm going to pull Justin, Ben and Russell in to see if
> > > > they buy into this idea or if they can think of an alternative.
> > >
> > > I've barely been following the discussion, so a recap of the question
> > > here would help a lot.
> > >
> >
> > Sure (and John gets to correct me where I'm wrong) - the SFC proposal
> > is to carry a chain as a ordered set of port groups, where each group
> > consists of multiple port pairs. Each port pair consists of an ingress
> > port and an egress port, so that traffic is load balanced between
> > the ingress ports of a group. Traffic from the egress port of a group
> > is sent to the ingress port of the next group (ingress and egress here
> > are from the point of view of the thing getting the traffic).
> >
> > I was suggesting to John that from the view of the switch, this would
> > be reversed in the openvswitch rules - the proposed CHAINING stage
> > in the ingress pipeline would apply the classifier for traffic entering
> > a chain and identify traffic coming from an egress SFC port in the
> > midst of a chain. The egress pipeline would identify the next ingress
SFC
> > port that gets the traffic or the final destination for traffic exiting
> > the chain.
> >
> > Further, I pointed him at the select group for how traffic could be
> > load balanced between the different ports that are contained in a port
> > group, but that I was worried that I'd need a cartesian product 
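
For reference, the chain structure recapped above looks roughly like this (my 
paraphrase with illustrative field names, not the actual dict from the 
attachments):

    port_chain = {
        'id': 'chain-1',
        'flow_classifiers': [
            {'logical_source_port': 'p1',
             'destination_ip_prefix': '10.0.0.0/24'},
        ],
        'port_groups': [
            # traffic entering a group is load balanced across the ingress
            # ports of its pairs; a group's egress feeds the next group's
            # ingress
            {'port_pairs': [{'ingress': 'vnf1-in', 'egress': 'vnf1-out'},
                            {'ingress': 'vnf2-in', 'egress': 'vnf2-out'}]},
            {'port_pairs': [{'ingress': 'vnf3-in', 'egress': 'vnf3-out'}]},
        ],
    }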

Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-31 Thread Michał Jastrzębski
I am opposed to this idea as I don't think we need it. We can solve
many problems by using jinja2 to a greater extent. I'll publish a demo of
a few improvements soon; please bear with me before we make an
arch-changing call.

On 29 May 2016 at 14:41, Steven Dake (stdake)  wrote:
>
>>On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:
>>
>>>
>>>
>>>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
>>>wrote:
>>>
On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake) 
wrote:
> Hey folks,
>
> While Swapnil has been busy churning the dockerfile.j2 files to all
>match
> the same style, and we also had summit where we declared we would
>solve
>the
> plugin problem, I have decided to begin work on a DSL prototype.
>
> Here are the problems I want to solve in order of importance by this
>work:
>
> Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
> Provide a programmatic way to manage Dockerfile construction rather
>then a
> manual (with vi or emacs or the like) mechanism
> Allow complete overrides of every facet of Dockerfile construction,
>most
> especially repositories per container (rather than in the base
>container) to
> permit the use case of dependencies from one version with dependencies
>in
> another version of a different service
> Get out of the business of maintaining 100+ dockerfiles but instead
>maintain
> one master file which defines the data that needs to be used to
>construct
> Dockerfiles
> Permit different types of optimizations or Dockerfile building by
>changing
> around the parser implementation ­ to allow layering of each
>operation,
>or
> alternatively to merge layers as we do today
>
> I don't believe we can proceed with both binary and source plugins
>given our
> current implementation of Dockerfiles in any sane way.
>
> I further don't believe it is possible to customize repositories &
>installed
> files per container, which I receive increasing requests for offline.
>
> To that end, I've created a very very rough prototype which builds the
>base
> container as well as a mariadb container.  The mariadb container
>builds
>and
> I suspect would work.
>
> An example of the DSL usage is here:
> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>
> A very poorly written parser is here:
> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>
> I played around with INI as a format, to take advantage of oslo.config
>and
> kolla-build.conf, but that didn't work out.  YML is the way to go.
>
> I'd appreciate reviews on the YML implementation especially.
>
> How I see this work progressing is as follows:
>
> A yml file describing all docker containers for all distros is placed
>in
> kolla/docker
> The build tool adds an option --use-yml which uses the YML file
> A parser (such as load.py above) is integrated into build.py to lay
>down he
> Dockerfiles
> Wait 4-6 weeks for people to find bugs and complain
> Make the --use-yml the default for 4-6 weeks
> Once we feel confident in the yml implementation, remove all
>Dockerfile.j2
> files
> Remove the --use-yml option
> Remove all jinja2-isms from build.py
>
> This is similar to the work that took place to convert from raw
>Dockerfiles
> to Dockerfile.j2 files.  We are just reusing that pattern.  Hopefully
>this
> will be the last major refactor of the dockerfiles unless someone has
>some
> significant complaints about the approach.
>
> Regards
> -steve
>
>
> On 5/27/16, 3:44 AM, "Britt Houser (bhouser)"  wrote:
>
>>I admit I'm not as knowledgable about the Kolla codebase as I'd like to
>>be, so most of what you're saying is going over my head.  I think mainly
>>I don't understand the problem statement.  It looks like you're pulling
>>all the "hard coded" things out of the docker files, and making them user
>>replaceable?  So the dockerfiles just become a list of required steps,
>>and the user can change how each step is implemented?  Would this also
>>unify the dockerfiles so there wouldn't be huge if statements between
>>Centos and Ubuntu?
>>
>>Thx,
>>Britt
>>
>
> What is being pulled out is all of the metadata used by the Dockerfiles or
> Kolla in general.  This metadata, being structured either as a dictionary
> or ordered list, can be manipulated by simple python tools to do things
> like merge sections and override sections or optimize the built images.
> FWIW it looks like, without even trying, the images produced by the parser
> are 50MB smaller.  The jinja2 templates we have today cannot
> be easily overridden.  We have to provide a new key for each type of
> 
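
As a toy example of the kind of merge/override that structured metadata 
enables (the dictionary layout is hypothetical, not the prototype's actual 
schema):

    containers = {'mariadb': {'base': 'centos', 'repos': ['default-repo']}}

    def merge(base, override):
        # Recursively merge override metadata into base metadata.
        out = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)
            else:
                out[key] = value  # lists and scalars replace wholesale
        return out

    # e.g. swap the repo used by just the mariadb container:
    mariadb = merge(containers['mariadb'],
                    {'repos': ['http://example.com/custom-mariadb-repo']})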

Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-31 Thread Belmiro Moreira
Hi Nikhil,
I'm interested in this discussion.

Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
Are you also suggesting changing the date? Because the new timeanddate
suggestions are for June 6/7.

Belmiro

On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar 
wrote:

> Hey,
>
>
> Thanks for the feedback. 0800UTC is 4am EDT for some of the US Glancers :-)
>
>
> I request this time which may help the folks in Eastern and Central US
> time.
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=7&hour=11&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
> If it still does not work, I may have to poll the folks in EMEA on how
> strong their intentions are for joining this call.  Because another time
> slot that works for folks in Australia & US might be too inconvenient
> for those in EMEA:
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=6&hour=23&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
> Here's the map of cities that may be involved:
>
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78
>
>
> Please let me know which ones are possible and we can try to work around
> the times.
>
>
> On 5/31/16 2:54 AM, Blair Bethwaite wrote:
> > Hi Nikhil,
> >
> > 2000UTC might catch a few kiwis, but it's 6am everywhere on the east
> > coast of Australia, and even earlier out west. 0800UTC, on the other
> > hand, would be more sociable.
> >
> > On 26 May 2016 at 15:30, Nikhil Komawar  wrote:
> >> Thanks Sam. We purposefully chose that time to accommodate some of our
> >> community members from the Pacific. I'm assuming it's just your case
> >> that's not working out for that time? So, hopefully other Australian/NZ
> >> friends can join.
> >>
> >>
> >> On 5/26/16 12:59 AM, Sam Morrison wrote:
> >>> I’m hoping some people from the Large Deployment Team can come along.
> It’s not a good time for me in Australia but hoping someone else can join
> in.
> >>>
> >>> Sam
> >>>
> >>>
>  On 26 May 2016, at 2:16 AM, Nikhil Komawar 
> wrote:
> 
>  Hello,
> 
> 
>  Firstly, I would like to thank Fei Long for bringing up a few operator
>  centric issues to the Glance team. After chatting with him on IRC, we
>  realized that there may be more operators who would want to contribute
>  to the discussions to help us take some informed decisions.
> 
> 
>  So, I would like to call for a 2 hour sync for the Glance team along
>  with interested operators on Thursday June 9th, 2016 at 2000UTC.
> 
> 
>  If you are interested in participating please RSVP here [1], and
>  participate in the poll for the tool you'd prefer. I've also added a
>  section for Topics and provided a template to document the issues
> clearly.
> 
> 
>  Please be mindful of everyone's time and if you are proposing issue(s)
>  to be discussed, come prepared with well documented & referenced
> topic(s).
> 
> 
>  If you've feedback that you are not sure if appropriate for the
>  etherpad, you can reach me on irc (nick: nikhil).
> 
> 
>  [1]
> https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
> 
>  --
> 
>  Thanks,
>  Nikhil Komawar
>  Newton PTL for OpenStack Glance
> 
> 
> 
> >> --
> >>
> >> Thanks,
> >> Nikhil
> >>
> >>
> >
> >
>
> --
>
> Thanks,
> Nikhil
>
>


Re: [openstack-dev] [Nova] State machines in Nova

2016-05-31 Thread Joshua Harlow

Timofei Durakov wrote:

Hi team,

there is a blueprint[1] that was approved during Liberty and resubmitted
to Newton (with spec[2]).
The idea is to define state machines for operations such as live-migration,
resize, etc. and to deal with their operation states.
The spec and PoC patches are overall good. At the same time I think it will
be good to get agreement on the usage of state machines in Nova.
There are 2 options:

  * implement proposed change and use state machines to deal with states
only


I think this is what could be called the ironic equivalent, correct?

In ironic @ 
https://github.com/openstack/ironic/blob/master/ironic/common/states.py 
the state machine here is used to ensure proper states are transitioned 
over and no invalid/unexpected state transitions happen. The code itself 
though still runs in an implicit fashion and afaik only interacts with 
the state machine as a side-effect of actions occurring (instead of the 
reverse where the state machine itself is 'driving' those actions to 
happen/to completion).



  o pros:
  + could be implemented/merged right now
  + cleans up states for migrations
  o cons:
  + the state machine only deals with states, and it will be hard to
build the task API on top of it, as bp [1] was designed for
another thing.

  * use state machines in Task API(which I'm going to work on during
next release):


So this would be the second model described above, where the state 
machine (or set of state machines) itself (together could be formed into 
an action plan, or action workflow or ...) would be the 'entity' 
realizing a given action and ensuring that it is performed until 
completed (or tracking where it was paused and such); is that correct?



  o pros:
  + the Task API will orchestrate and deal with long-running tasks
  + using state machines could help with action
rollbacks/retries/etc.
  o cons:
  + a large amount of work
  + requires time.

I'd like to discuss these options in this thread.


It seems like one could progress from the first model to the second one, 
although that kind of progression would still be large (because if my 
understanding is correct the control of who runs what has to be given 
over to something else in the second model, similar to the control a 
taskflow engine or mistral engine has over what it runs); said control 
means that certain programming models may not map so well (from what I 
have seen).




Timofey

[1] -
https://blueprints.launchpad.net/openstack/?searchtext=migration-state-machine
[2] - https://review.openstack.org/#/c/320849/



Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread John McDowall
Ryan,

Let me add the tables to OVN for SFC. That will give us a working system to 
prototype the flow classifier approach on. Hopefully I can get something done 
by end of week.

Regards

John

From: Ryan Moats
Date: Tuesday, May 31, 2016 at 10:17 AM
To: John McDowall
Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
OpenStack Development Mailing List, Russell Bryant
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall wrote on 05/26/2016 11:08:43 AM:

> From: John McDowall
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> "OpenStack Development Mailing List", Russell Bryant
> Date: 05/26/2016 11:09 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> My (incomplete) throughts about the flow-classifier are:
>
> 1)  ACL’s are more about denying access, while the flow classifier
> is more about steering selected traffic to a path, so we would need
> to deny-all except allowed flows.
> 2)  The networking-sfc team has done a nice job with the drivers so
> ovn has its own flow-classifier driver which allows us to align the
> flow-classifier with the matches supported in ovs/ovn, which could
> be an advantage.

The ACL table has a very simple flow-classifier structure and I'd
like to see if that can be re-used for the purpose of the SFC classifier
(read that I feel the Logical_Flow_Classifier table is too complex).
My initial thoughts were to look at extending the action column and
using the external-ids field to differentiate between legacy ACLs and
those that are used to intercept traffic and route it to an SFC.

>
> What were your thoughts on the schema it adds a lot of tables and a
> lot of commands – cannot think of anyway around it

In this case, I think that the other tables are reasonable and I'm
uncomfortable trying to stretch the existing tables to cover that
information...

Ryan

>
> Regards
>
> John
>
> From: Ryan Moats
> Date: Wednesday, May 25, 2016 at 9:12 PM
> To: John McDowall
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall wrote on 05/25/2016 07:27:46 PM:
>
> > From: John McDowall
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List",
> > Ben Pfaff, Justin Pettit, Russell Bryant
> > Date: 05/25/2016 07:28 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Ok – I will let the experts weigh in on load balancing.
> >
> > In the meantime I have attached a couple of files to show where I am
> > going. The first is sfc_dict.py and is a representation of the dict
> > I am passing from SFC to OVN. This will then translate to the
> > attached ovn-nb schema file.
> >
> > One of my concerns is that SFC almost doubles the size of the ovn-nb
> > schema but I could not think of any other way of doing it.
> >
> > Thoughts?
> >
> > John
>
> The dictionary looks fine for a starting point, and the more I look
> at the classifier, the more I wonder if we can't do something with
> the current ACL table to avoid duplication in the NB database
> 

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread John McDowall
Ryan,

Hopefully – just wanted to make sure it was there.

Regards

John

From: Ryan Moats
Date: Tuesday, May 31, 2016 at 10:02 AM
To: John McDowall
Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
OpenStack Development Mailing List, Russell Bryant
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall wrote on 05/26/2016 10:59:48 AM:

> From: John McDowall
> To: Ryan Moats/Omaha/IBM@IBMUS, Ben Pfaff
> Cc: "disc...@openvswitch.org", Justin Pettit,
> OpenStack Development Mailing List, Russell Bryant
> Date: 05/26/2016 11:00 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Agree with your description of the problem. The only thing I would
> add is that in the case of bi-directional chains the return flows
> need to go through the same VNF(Port-pair).

I'm pretty sure that is caught automagically, isn't it?

Ryan

>
> Regards
>
> John
>
> From: Ryan Moats
> Date: Wednesday, May 25, 2016 at 9:29 PM
> To: Ben Pfaff
> Cc: "disc...@openvswitch.org", John McDowall, Justin Pettit,
> OpenStack Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ben Pfaff wrote on 05/25/2016 07:44:43 PM:
>
> > From: Ben Pfaff
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: John McDowall, "disc...@openvswitch.org", OpenStack
> > Development Mailing List, Justin Pettit, Russell Bryant
> > Date: 05/25/2016 07:44 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > On Wed, May 25, 2016 at 09:27:31AM -0500, Ryan Moats wrote:
> > > As I understand it, Table 0 identifies the logical port and logical
> > > flow. I'm worried that this means we'll end up with separate bucket
> > > rules for each ingress port of the port pairs that make up a port
> > > group, leading to a cardinality product in the number of rules.
> > > I'm trying to think of a way where Table 0 could identify the packet
> > > as being part of a particular port group, and then I'd only need one
> > > set of bucket rules to figure out the egress side.  However, the
> > > amount of free metadata space is limited and so before we go down
> > > this path, I'm going to pull Justin, Ben and Russell in to see if
> > > they buy into this idea or if they can think of an alternative.
> >
> > I've barely been following the discussion, so a recap of the question
> > here would help a lot.
> >
>
> Sure (and John gets to correct me where I'm wrong) - the SFC proposal
> is to carry a chain as a ordered set of port groups, where each group
> consists of multiple port pairs. Each port pair consists of an ingress
> port and an egress port, so that traffic is load balanced between
> the ingress ports of a group. Traffic from the egress port of a group
> is sent to the ingress port of the next group (ingress and egress here
> are from the point of view of the thing getting the traffic).
>
> I was suggesting to John that from the view of the switch, this would
> be reversed in the openvswitch rules - the proposed CHAINING stage
> in the ingress pipeline would apply the classifier for traffic entering
> a chain and identify traffic coming from an egress SFC port in the
> midst of a chain. The egress pipeline would identify the next ingress SFC
> port that gets the traffic 

Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Hongbin Lu
I don’t think it is a good to re-invent docker-compose in Higgins. Instead, we 
should leverage existing libraries/tools if we can.

Frankly, I don’t think Higgins should interpret any docker-compose like DSL in 
server, but maybe it is a good idea to have a CLI extension to interpret 
specific DSL and translate it to a set of REST API calls to Higgins server. The 
solution should be generic enough so that we can re-use it to interpret another 
DSL (e.g. pod, TOSCA, etc.) in the future.
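
For example, such an extension could be as thin as this sketch (the endpoint 
and payload shape are pure assumptions here, since Higgins has no such API 
yet):

    import requests
    import yaml

    def compose_up(path, endpoint='http://higgins-host:9517/v1/containers'):
        # Parse a compose-style file and POST one container per service.
        services = yaml.safe_load(open(path)).get('services', {})
        for name, svc in services.items():
            payload = {'name': name,
                       'image': svc['image'],
                       'command': svc.get('command')}
            requests.post(endpoint, json=payload).raise_for_status()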

Best regards,
Hongbin

From: Denis Makogon [mailto:lildee1...@gmail.com]
Sent: May-31-16 3:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Docker-compose support

Hello.

It is hard to tell if the given API will be the final version, but I tried to 
make it similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow:
Cool good to know,

I see 
https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66

Would that be the primary API? Hard to tell what is the API there actually, 
haha. Is it the run() method?

I was thinking more along the line that higgins could be a 'interpreter' of the 
same docker-compose format (or similar format); if the library that is being 
created takes a docker-compose file and turns it into a 'intermediate' 
version/format that'd be cool. The compiled version would then be 'executable' 
(and introspectable to) by say higgins (which could say traverse over that 
intermediate version and activate its own code to turn the intermediate 
versions primitives into reality), or a docker-compose service could or ...

What about TOSCA? From my own perspective the compose format is too limited, so 
it is really worth considering the use of TOSCA in Higgins workflows.


Libcompose also seems to be targeted at a higher level library, from at least 
reading the summary, neither seem to be taking a compose yaml file, turning it 
into a intermediate format, exposing that intermediate format to others for 
introspection/execution (and also likely providing a default execution engine 
that understands that format) but instead both just provide an equivalent of:

That's why I've started this thread; as a community we have use cases for Higgins 
itself and for compose, but most of them are not formalized or even written down. 
Isn't this a good time to define them?

  project = make_project(yaml_file)
  project.run/up()

Which probably isn't the best API for something like a web-service that uses 
that same library to have. IMHO having a long running run() method

Well, compose allows running detached executions for most of its API calls. By 
using events, we can track service/container statuses (but it is not really 
trivial).
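
A bare-bones version of that status tracking via the events stream might look 
like this (docker-py 1.x API, assumed to be the client in use):

    import docker

    client = docker.Client(base_url='unix://var/run/docker.sock')
    for event in client.events(decode=True):
        # each event is a dict like {'status': 'start', 'from': <image>, ...}
        if event.get('status') in ('start', 'die'):
            print('%s -> %s' % (event.get('from'), event['status']))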

exposed, without the necessary state tracking, ability to 
interrupt/pause/resume that run() method and such is not going to end well for 
users of that lib (especially a web-service that needs to periodically be 
`service webservice stop` or restart, or ...).

Yes, agreed. But docker or swarm by itself doesn't provide such an API (can't say 
the same for K8s).

Denis Makogon wrote:
Hello Stackers.


As part of discussions around what Higgins is and what its mission is, there
were a couple of you who mentioned docker-compose [1] and the necessity of
doing the same thing for Higgins but from scratch.

I don't think that going that direction is the best way to spend
development cycles. So, that's why I ask you to take a look at the recent
patchset submitted to docker-compose upstream [2] that makes this tool
(initially designed as a CLI) become a library with a Python API.  The
whole idea is to make docker-compose look similar to libcompose [3]
(written in Go).

If we need to utilize docker-compose features in Higgins I'd recommend
working on this with the Docker community and convincing them to land that
patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon

[openstack-dev] [Nova] State machines in Nova

2016-05-31 Thread Timofei Durakov
Hi team,

there is a blueprint[1] that was approved during Liberty and resubmitted to
Newton (with spec[2]).
The idea is to define state machines for operations such as live-migration,
resize, etc. and to deal with their operation states.
The spec and PoC patches are overall good. At the same time I think it will be
good to get agreement on the usage of state machines in Nova.
There are 2 options:

   - implement proposed change and use state machines to deal with states
     only
      - pros:
         - could be implemented/merged right now
         - cleans up states for migrations
      - cons:
         - the state machine only deals with states, and it will be hard to
           build the task API on top of it, as bp [1] was designed for
           another thing.

   - use state machines in the Task API (which I'm going to work on during
     the next release):
      - pros:
         - the Task API will orchestrate and deal with long-running tasks
         - using state machines could help with action
           rollbacks/retries/etc.
      - cons:
         - a large amount of work
         - requires time.

I'd like to discuss these options in this thread.

Timofey

[1] -
https://blueprints.launchpad.net/openstack/?searchtext=migration-state-machine
[2] - https://review.openstack.org/#/c/320849/
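
To illustrate the difference between the two options: in the first, code sets 
states as a side effect of actions; in the second, something like the toy 
engine below owns control and walks the machine to a terminal state (the 
states and handlers are hypothetical):

    HANDLERS = {
        'queued': lambda ctx: 'pre-migrating',
        'pre-migrating': lambda ctx: 'migrating',
        'migrating': lambda ctx: 'post-migrating',
        'post-migrating': lambda ctx: 'done',
    }

    def run_task(state='queued', ctx=None):
        # The engine, not the handlers, owns control flow; this is what
        # makes pause/resume/retry tractable in the second option.
        while state != 'done':
            state = HANDLERS[state](ctx)
        return state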


Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Joshua Harlow

Denis Makogon wrote:

Hello.

It is hard to tell if the given API will be the final version, but I tried to
make it similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow:

Cool good to know,

I see

https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66

Would that be the primary API? Hard to tell what is the API there
actually, haha. Is it the run() method?

I was thinking more along the line that higgins could be a
'interpreter' of the same docker-compose format (or similar format);
if the library that is being created takes a docker-compose file and
turns it into a 'intermediate' version/format that'd be cool. The
compiled version would then be 'executable' (and introspectable to)
by say higgins (which could say traverse over that intermediate
version and activate its own code to turn the intermediate versions
primitives into reality), or a docker-compose service could or ...


What about TOSCA? From my own perspective the compose format is too limited,
so it is really worth considering the use of TOSCA in Higgins
workflows.


Does anyone in the wider world actually use TOSCA anywhere? Has it 
gained any adoption? I've watched the TOSCA stuff, but have really been 
unable to tell what kind of an impact TOSCA actually has had (everyone 
seems to make their own format, and not care that much about TOSCA in 
general, for better or worse).





Libcompose also seems to be targeted at a higher level library, from
at least reading the summary, neither seem to be taking a compose
yaml file, turning it into a intermediate format, exposing that
intermediate format to others for introspection/execution (and also
likely providing a default execution engine that understands that
format) but instead both just provide an equivalent of:


That's why I've started this thread; as a community we have use cases for
Higgins itself and for compose, but most of them are not formalized or
even written down. Isn't this a good time to define them?

   project = make_project(yaml_file)
   project.run/up()

Which probably isn't the best API for something like a web-service
that uses that same library to have. IMHO having a long running
run() method


Well, compose allows running detached executions for most of its API
calls. By use of events, we can track service/container statuses (but
it is not really trivial).


That's not exactly the same as what I was thinking,

Let's take a compose yaml file, 
https://github.com/DataDog/docker-compose-example/blob/master/docker-compose.yml


At some point this is turned into a set of actions to run (a workflow 
perhaps) to turn that yaml file into an actual running solution. Now, 
likely the creators of libcompose or the python version embedded those 
actions directly into the interpretation and made them inseparable, but 
that doesn't need to be the case.




exposed, without the necessary state tracking, ability to
interrupt/pause/resume that run() method and such is not going to
end well for users of that lib (especially a web-service that needs
to periodically be `service webservice stop` or restart, or ...).


Yes, agreed. But docker or swarm by itself doesn't provide such an API
(can't say the same for K8s).


Meh, that's not such a good excuse not to try to do it (or at least to think 
about it). If we only did what was already done, we probably wouldn't be 
doing things over email or driving cars or... :-P




Denis Makogon wrote:

Hello Stackers.


As part of discussions around what Higgins is and what its mission
is, there were a couple of you who mentioned docker-compose [1] and
the necessity of doing the same thing for Higgins but from scratch.

I don't think that going that direction is the best way to spend
development cycles. So, that's why I ask you to take a look at a
recent patchset submitted to docker-compose upstream [2] that makes
this tool (initially designed as a CLI) become a library with a
Python API. The whole idea is to make docker-compose look similar to
libcompose [3] (written in Go).

If we need to utilize docker-compose features in Higgins I'd
recommend working on this with the Docker community and convincing
them to land that patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-31 Thread Chris Dent

On Tue, 31 May 2016, Jay Pipes wrote:


So this seems rather fragile and pretty user-hostile. We're creating an
opportunity for people to easily replace their existing bad tracking of
disk usage with a different style of bad tracking of disk usage.


I'm not clear why the new way of tracking disk usage would be "bad tracking"? 
The new way is correct -- i.e. the total amount of DISK_GB will be correct 
instead of multiplied by the number of compute nodes using that shared 
storage.


The issue is not with the new way, but rather that unless we protect
against multiple pools of the same class associating with the
same aggregate _or_ teach the scheduler and resource tracker to
choose the right one when recording allocations, then we have pools
being updated unpredictably.

But the solutions below ought to deal with it, so: under control.

Sure, but I'm saying that, for now, this isn't something I think we need to 
be concerned about. Deployers cannot *currently* have multiple shared storage 
pools used for providing VM ephemeral disk resources. So, there is no danger 
-- outside of a deployer deliberately sabotaging things -- for a compute node 
to have >1 DISK_GB inventory record if we have a standard process for 
deployers that use shared storage to create their resource pools for DISK_GB 
and assign compute nodes to that resource pool.


I'm not sure I would categorize "just happened to add an aggregate
to a resource pool" as "deliberately sabotaging things". That's all
I'm getting at with this particular concern.

And if we do this:


Maybe that's fine, for now, but it seems we need to be aware of, not
only for ourselves, but in the documentation when we tell people how
to start using resource pools: Oh, by the way, for now, just
associate one shared disk pool to an aggregate.


Then we get this:


Sure, absolutely.


so it's probably okay enough.

I suppose the alternative would be to "deal" with the multiple resource 
providers by just having the resource tracker pick whichever one appears 
first for a resource class (and order by the resource provider ID...). This 
might actually be a better alternative long-term, since then all we would 
need to do is change the ordering logic to take into account multiple 
resource providers of the same resource class instead of dealing with all 
this messy validation and conversion.


That's kind of what I was thinking. Get the multiple providers, sort
them by arbitrary something now, something smarter later. Can think of
three off the top of my head: least used, most used, random.


In my scribbles when I was thinking this through (that led to the
start of this thread) I had imagined that rather than finding both
the resource pool and compute node resource providers when finding
available disk we'd instead see if there was resource pool, use it
if it was there, and if not, just use the compute node. Therefore if
the resource pool was ever disassociated, we'd be back to where we
were before without needing to reset the state in the artifact
world.


That would work too, yes. And seems simpler to reason about... but has the 
potential of leaving bad inventory records in the inventories table for 
"local" DISK_GB resources that never will be used.


Well, presumably we're still going to need some way for a node to
update its inventory (after an upgrade and reboot) so that
functionality ought to take care of it: if the node hasn't been
rebooted we assume the representation of reality in the Inventory is
correct. If there's a reboot it gets updated?

Dunno, riffing.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Denis Makogon
Hello.

It is hard to tell if the given API will be the final version, but I tried
to make it similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow :

> Cool good to know,
>
> I see
> https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66
>
> Would that be the primary API? Hard to tell what is the API there
> actually, haha. Is it the run() method?
>
> I was thinking more along the lines that higgins could be an 'interpreter'
> of the same docker-compose format (or similar format); if the library that
> is being created takes a docker-compose file and turns it into an
> 'intermediate' version/format that'd be cool. The compiled version would
> then be 'executable' (and introspectable too) by say higgins (which could
> say traverse over that intermediate version and activate its own code to
> turn the intermediate version's primitives into reality), or a
> docker-compose service could, or ...
>

What about TOSCA? From my own perspective the compose format is too limited,
so it is really worth considering the use of TOSCA in Higgins
workflows.


>
> Libcompose also seems to be targeted at being a higher level library, from
> at least reading the summary; neither seems to be taking a compose yaml
> file, turning it into an intermediate format, exposing that intermediate
> format to others for introspection/execution (and also likely providing a
> default execution engine that understands that format) but instead both
> just provide an equivalent of:
>
>
That's why I've started this thread; as a community we have use cases for
Higgins itself and for compose but most of them are not formalized or even
written down. Isn't this a good time to define them?


>   project = make_project(yaml_file)
>   project.run/up()
>
> Which probably isn't the best API for something like a web-service that
> uses that same library to have. IMHO having a long running run() method


Well, compose allows running detached executions for most of its API calls.
By use of events, we can track service/container statuses (but it is not
really trivial).


> exposed, without the necessary state tracking, ability to
> interrupt/pause/resume that run() method and such is not going to end well
> for users of that lib (especially a web-service that needs to periodically
> be `service webservice stop` or restart, or ...).
>
>
Yes, agreed. But docker or swarm by itself doesn't provide such an API
(can't say the same for K8s).


> Denis Makogon wrote:
>
>> Hello Stackers.
>>
>>
>> As part of discussions around what Higgins is and what its mission is,
>> there were a couple of you who mentioned docker-compose [1] and the
>> necessity of doing the same thing for Higgins but from scratch.
>>
>> I don't think that going that direction is the best way to spend
>> development cycles. So, that's why I ask you to take a look at a recent
>> patchset submitted to docker-compose upstream [2] that makes this tool
>> (initially designed as a CLI) become a library with a Python API. The
>> whole idea is to make docker-compose look similar to libcompose [3]
>> (written in Go).
>>
>> If we need to utilize docker-compose features in Higgins I'd recommend
>> working on this with the Docker community and convincing them to land
>> that patch upstream.
>>
>> If you have any questions, please let me know.
>>
>> [1] https://docs.docker.com/compose/
>> [2] https://github.com/docker/compose/pull/3535
>> [3] https://github.com/docker/libcompose
>>
>>
>> Kind regards,
>> Denys Makogon
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-05-31 Thread Devananda van der Veen
On 05/31/2016 01:35 AM, Dmitry Tantsur wrote:
> On 05/31/2016 10:25 AM, Tan, Lin wrote:
>> Hi,
>>
>> Recently, I am working on a spec[1] in order to recover nodes which get stuck
>> in deploying state, so I really expect some feedback from you guys.
>>
>> Ironic nodes can be stuck in
>> deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is
>> reserved by a dead conductor (the exclusive lock was not released).
>> Any further requests will be denied by ironic because it thinks the node
>> resource is under control of another conductor.
>>
>> To be more clear, let's narrow the scope and focus on the deploying state
>> first. Currently, people do have several choices to clear the reserved lock:
>> 1. restart the dead conductor
>> 2. wait up to 2 or 3 minutes and _check_deploying_status() will clear the
>> lock.
>> 3. The operator touches the DB to manually recover these nodes.
>>
>> Option two looks very promising but there are some weakness:
>> 2.1 It won't work if the dead conductor was renamed or deleted.
>> 2.2 It won't work if the node's specific driver was not enabled on live
>> conductors.
>> 2.3 It won't work if the node is in maintenance. (only a corner case).
> 
> We can and should fix all three cases.

2.1 and 2.2 appear to be a bug in the behavior of _check_deploying_status().

The method claims to do exactly what you suggest in 2.1 and 2.2 -- it gathers a
list of Nodes reserved by *any* offline conductor and tries to release the lock.
However, it will always fail to update them, because objects.Node.release()
raises a NodeLocked exception when called on a Node locked by a different 
conductor.

Here's the relevant code path:

ironic/conductor/manager.py:
1259 def _check_deploying_status(self, context):
...
1269 offline_conductors = self.dbapi.get_offline_conductors()
...
1273 node_iter = self.iter_nodes(
1274 fields=['id', 'reservation'],
1275 filters={'provision_state': states.DEPLOYING,
1276  'maintenance': False,
1277  'reserved_by_any_of': offline_conductors})
...
1281 for node_uuid, driver, node_id, conductor_hostname in node_iter:
1285 try:
1286 objects.Node.release(context, conductor_hostname, node_id)
...
1292 except exception.NodeLocked:
1293 LOG.warning(...)
1297 continue


As far as 2.3, I think we should change the query string at the start of this
method so that it includes nodes in maintenance mode. I think it's both safe and
reasonable (and, frankly, what an operator will expect) that a node which is in
maintenance mode, and in DEPLOYING state, whose conductor is offline, should
have that reservation cleared and be set to DEPLOYFAILED state.
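A minimal sketch of that query change, based on the excerpt above (it
just drops the maintenance filter; illustrative, not a tested patch):

    # Same iter_nodes() call as in the excerpt, minus the
    # 'maintenance': False filter, so maintenance-mode nodes stuck in
    # DEPLOYING with an offline conductor are picked up as well.
    node_iter = self.iter_nodes(
        fields=['id', 'reservation'],
        filters={'provision_state': states.DEPLOYING,
                 'reserved_by_any_of': offline_conductors})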

--devananda

>>
>> Definitely we should improve option 2, but there could be more issues
>> I don't know about in a more complicated environment.
>> So my question is: do we still need a new command to recover these nodes
>> more easily without accessing the DB, like this PoC [2]:
>>   ironic-noderecover --node_uuids=UUID1,UUID2 
>> --config-file=/etc/ironic/ironic.conf
> 
> I'm -1 to anything silently removing the lock until I see a clear use case 
> which
> is impossible to improve within Ironic itself. Such a utility may and will
> be abused.
> 
> I'm fine with anything that does not forcibly remove the lock by default.
> 
>>
>> Best Regards,
>>
>> Tan
>>
>>
>> [1] https://review.openstack.org/#/c/319812
>> [2] https://review.openstack.org/#/c/311273/
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Joshua Harlow

Cool good to know,

I see 
https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66


Would that be the primary API? Hard to tell what is the API there 
actually, haha. Is it the run() method?


I was thinking more along the lines that higgins could be an 'interpreter' 
of the same docker-compose format (or similar format); if the library 
that is being created takes a docker-compose file and turns it into an 
'intermediate' version/format that'd be cool. The compiled version would 
then be 'executable' (and introspectable too) by say higgins (which could 
say traverse over that intermediate version and activate its own code to 
turn the intermediate version's primitives into reality), or a 
docker-compose service could, or ...


Libcompose also seems to be targeted at being a higher level library, from 
at least reading the summary; neither seems to be taking a compose yaml 
file, turning it into an intermediate format, exposing that intermediate 
format to others for introspection/execution (and also likely providing 
a default execution engine that understands that format) but instead 
both just provide an equivalent of:


  project = make_project(yaml_file)
  project.run/up()

Which probably isn't the best API for something like a web-service that 
uses that same library to have. IMHO having a long-running run() method 
exposed, without the necessary state tracking and the ability to 
interrupt/pause/resume that run() method, is not going to end 
well for users of that lib (especially a web-service that needs to 
periodically be `service webservice stop` or restart, or ...).
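
To make the 'intermediate' format idea concrete, a hypothetical sketch
(every name here is made up; this is not the compose PR's API):

    import yaml

    def compile_plan(path):
        # Parse a compose-style YAML file into an introspectable list
        # of primitive steps that an engine (higgins, a docker-compose
        # service, ...) can walk, checkpoint, pause and resume.
        with open(path) as f:
            spec = yaml.safe_load(f)
        plan = []
        for name, svc in spec.items():
            plan.append(('pull_image', name, svc.get('image')))
            plan.append(('create_container', name, svc))
            plan.append(('start_container', name))
        return plan

    # The engine then owns the control flow, not a monolithic run():
    #     for step in compile_plan('docker-compose.yml'):
    #         execute(step)  # record state between steps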


Denis Makogon wrote:

Hello Stackers.


As part of discussions around what Higgins is and what its mission is,
there were a couple of you who mentioned docker-compose [1] and the
necessity of doing the same thing for Higgins but from scratch.

I don't think that going that direction is the best way to spend
development cycles. So, that's why I ask you to take a look at a recent
patchset submitted to docker-compose upstream [2] that makes this tool
(initially designed as a CLI) become a library with a Python API. The
whole idea is to make docker-compose look similar to libcompose [3]
(written in Go).

If we need to utilize docker-compose features in Higgins I'd recommend
working on this with the Docker community and convincing them to land
that patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-05-31 Thread Ildikó Váncsa
Hi All,

We skipped the Monday slot this week due to the holiday in the US. __Only this 
week__ we will hold the meeting on __Thursday, 1700UTC__ on the 
__#openstack-meeting-cp__ channel.

Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes 

Thanks and Best Regards,
/Ildikó

> -Original Message-
> From: Ildikó Váncsa [mailto:ildiko.van...@ericsson.com]
> Sent: May 20, 2016 18:31
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly 
> IRC meetings
> 
> Hi All,
> 
> We now have the approved slot for the Cinder-Nova interaction changes
> meeting series. The new slot is __Monday, 1700UTC__, and it will
> be on the channel __#openstack-meeting-cp__.
> 
> Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> Summary about ongoing items: 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html
> 
> We will have one exception, which is May 30, as it is a US holiday; I will 
> announce a temporary slot for that week.
> 
> Thanks,
> /Ildikó
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Ryan Moats



"Armando M."  wrote on 05/31/2016 01:12:32 PM:

> From: "Armando M." 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 05/31/2016 01:13 PM
> Subject: [openstack-dev] [Neutron][Release] Changing release model
> for *-aas services
>
> Hi folks,
>
> Having looked at the recent commit volume that has been going into
> the *-aas repos, I am considering changing the release model for
> neutron-vpnaas, neutron-fwaas, neutron-lbaas from release:cycle-
> with-milestones [1] to release:cycle-with-intermediary [2]. This
> change will allow us to avoid publishing a release at fixed times
> when there's nothing worth releasing.
>
> I'll follow up with a governance change, as I know of the imminent
> deadline [3].
>
> Thoughts?
> Armando
>
> [1] https://governance.openstack.org/reference/tags/release_cycle-
> with-milestones.html
> [2] https://governance.openstack.org/reference/tags/release_cycle-
> with-intermediary.html
>
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095490.html

+1 to this as it makes a *LOT* of sense to me...

Ryan (regXboi)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Kyle Mestery
On Tue, May 31, 2016 at 1:12 PM, Armando M.  wrote:
> Hi folks,
>
> Having looked at the recent commit volume that has been going into the *-aas
> repos, I am considering changing the release model for neutron-vpnaas,
> neutron-fwaas, neutron-lbaas from release:cycle-with-milestones [1] to
> release:cycle-with-intermediary [2]. This change will allow us to avoid
> publishing a release at fixed times when there's nothing worth releasing.
>
> I'll follow up with a governance change, as I know of the imminent deadline
> [3].
>
> Thoughts?
> Armando
>
+1, I've voted as such on the review as well [4].

[4] https://review.openstack.org/#/c/323522/

> [1]
> https://governance.openstack.org/reference/tags/release_cycle-with-milestones.html
> [2]
> https://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html
> [3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095490.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Armando M.
On 31 May 2016 at 11:17, Ihar Hrachyshka  wrote:

>
> > On 31 May 2016, at 20:12, Armando M.  wrote:
> >
> > Hi folks,
> >
> > Having looked at the recent commit volume that has been going into the
> *-aas repos, I am considering changing the release model for
> neutron-vpnaas, neutron-fwaas, neutron-lbaas from
> release:cycle-with-milestones [1] to release:cycle-with-intermediary [2].
> This change will allow us to avoid publishing a release at fixed times when
> there's nothing worth releasing.
>
> VPNaaS and FWaaS are the land of the dead these days. Even LBaaS is not
> that active.
>
> +1 for the change.
>

https://review.openstack.org/#/c/323522/


>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Ihar Hrachyshka

> On 31 May 2016, at 20:12, Armando M.  wrote:
> 
> Hi folks,
> 
> Having looked at the recent commit volume that has been going into the *-aas 
> repos, I am considering changing the release model for neutron-vpnaas, 
> neutron-fwaas, neutron-lbaas from release:cycle-with-milestones [1] to 
> release:cycle-with-intermediary [2]. This change will allow us to avoid 
> publishing a release at fixed times when there's nothing worth releasing.

VPNaaS and FWaaS are the land of the dead these days. Even LBaaS is not that 
active.

+1 for the change.

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-05-31 Thread Armando M.
Hi folks,

Having looked at the recent commit volume that has been going into the
*-aas repos, I am considering changing the release model for
neutron-vpnaas, neutron-fwaas, neutron-lbaas
from release:cycle-with-milestones [1] to release:cycle-with-intermediary
[2]. This change will allow us to avoid publishing a release at fixed times
when there's nothing worth releasing.

I'll follow up with a governance change, as I know of the imminent deadline
[3].

Thoughts?
Armando

[1]
https://governance.openstack.org/reference/tags/release_cycle-with-milestones.html
[2]
https://governance.openstack.org/reference/tags/release_cycle-with-intermediary.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095490.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [Tempest] Abondoned old code reviews

2016-05-31 Thread Andrea Frittoli
On Mon, 30 May 2016, 6:25 p.m., Ken'ichi Ohmichi wrote:

> Hi,
>
> There are many patches which are not updated in Tempest review queue
> even if having gotten negative feedback from reviewers or jenkins.
> The Nova team is abandoning such patches, as in [1].
> I feel it would be nice to abandon such patches which have not been updated
> since the end of 2015.
> Any thoughts?
>

I don't mind either way, if you prefer abandoning them it's ok with me.
I rely on gerrit dashboards and IRC communication to decide which patches I
should review; but I understand it would be nice to remove some clutter.

Andrea


> [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/096112.html
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-31 Thread Jay Pipes

On 05/31/2016 01:06 PM, Chris Dent wrote:

On Tue, 31 May 2016, Jay Pipes wrote:

Kinda. What the compute node needs is an InventoryList object
containing all inventory records for all resource classes both local
to it as well as associated to it via any aggregate-resource-pool
mapping.


Okay, that mostly makes sense. A bit different from what I've proved
out so far, but plenty of room to make it go that way.


Understood, and not a problem. I will provide more in-depth coded 
examples in code review comments.



The SQL for generating this InventoryList is the following:


Presumably this would be a method on the InventoryList object
itself?


InventoryList.get_by_compute_node() would be my suggestion. :)
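
In call-shape terms, something like the following (hypothetical, not
merged code):

    # One call returning both node-local inventory and inventory
    # associated via aggregate/resource-pool mappings.
    inv_list = objects.InventoryList.get_by_compute_node(ctx, node.id)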


We can deal with multiple shared storage pools per aggregate at a
later time. Just take the first resource provider in the list of
inventory records returned from the above SQL query that corresponds
to the DISK_GB resource class, and that is the resource provider you will
deduct from.


So this seems rather fragile and pretty user-hostile. We're creating an
opportunity for people to easily replace their existing bad tracking of
disk usage with a different style of bad tracking of disk usage.


I'm not clear why the new way of tracking disk usage would be "bad 
tracking"? The new way is correct -- i.e. the total amount of DISK_GB 
will be correct instead of multiplied by the number of compute nodes 
using that shared storage.



If we assign two different shared disk resource pools to the same
aggregate we've got a weird situation (unless we explicitly order
the resource providers by something).


Sure, but I'm saying that, for now, this isn't something I think we need 
to be concerned about. Deployers cannot *currently* have multiple shared 
storage pools used for providing VM ephemeral disk resources. So, there 
is no danger -- outside of a deployer deliberately sabotaging things -- 
for a compute node to have >1 DISK_GB inventory record if we have a 
standard process for deployers that use shared storage to create their 
resource pools for DISK_GB and assign compute nodes to that resource pool.



Maybe that's fine, for now, but it seems we need to be aware of, not
only for ourselves, but in the documentation when we tell people how
to start using resource pools: Oh, by the way, for now, just
associate one shared disk pool to an aggregate.


Sure, absolutely.


Assume only a single resource provider of DISK_GB. It will be either a
compute node's resource provider ID or a resource pool's resource
provider ID.


✔


For this initial work, my idea was to have some code that, on creation
of a resource pool and its association with an aggregate, if that
resource pool has an inventory record with resource_class of DISK_GB
then remove any inventory records with DISK_GB resource class for any
compute node's (local) resource provider ID associated with that
aggregate. This way we ensure the existing behaviour that a compute
node either has local disk or it uses shared storage, but not both.


So let me translate that to make sure I get it:

* node X exists, has inventory of DISK_GB
* node X is in aggregate Y
* resource pool A is created
* two possible paths now: first associating the aggregate to the pool or
   first adding inventory to the pool
* in either case, when aggregate Y is associated, if the pool has
   DISK_GB, traverse the nodes in aggregate Y and drop the disk
   inventory


Correct.


So, effectively, any time we associate an aggregate we need to
inspect its nodes?


Yeah good point. :(

I suppose the alternative would be to "deal" with the multiple resource 
providers by just having the resource tracker pick whichever one appears 
first for a resource class (and order by the resource provider ID...). 
This might actually be a better alternative long-term, since then all we 
would need to do is change the ordering logic to take into account 
multiple resource providers of the same resource class instead of 
dealing with all this messy validation and conversion.
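
A sketch of that selection logic (object and field names here are
illustrative, not the actual Nova objects):

    def pick_provider(inventories, resource_class):
        # Choose which provider to deduct from when several providers
        # expose the same resource class: order by provider ID for
        # now, swap in a smarter key (least used, ...) later.
        candidates = [inv for inv in inventories
                      if inv.resource_class == resource_class]
        if not candidates:
            return None
        return min(candidates, key=lambda inv: inv.resource_provider_id)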



What happens if we ever disassociate an aggregate from a resource pool?
Do the nodes in the aggregate have some way to get their local Inventory
back or are we going to assume that the switch to shared is one way?


OK, yeah, you've sold me that my solution isn't good. By just allowing 
multiple providers and picking the "first" that appears, we limit 
ourselves to just needing to do the scrubbing of compute node local 
DISK_GB inventory records -- which we can do in an online data migration 
-- and we don't have to worry about the disassociate/associate aggregate 
problems.



In my scribbles when I was thinking this through (that led to the
start of this thread) I had imagined that rather than finding both
the resource pool and compute node resource providers when finding
available disk we'd instead see if there was a resource pool, use it
if it was there, and if not, just use the compute node. Therefore if
the resource pool was ever disassociated, we'd be back to where we
were before without needing to reset the state in the artifact
world.

[openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Denis Makogon
Hello Stackers.


As part of discussions around what Higgins is and what its mission is,
there were a couple of you who mentioned docker-compose [1] and the
necessity of doing the same thing for Higgins but from scratch.

I don't think that going that direction is the best way to spend
development cycles. So, that's why I ask you to take a look at a recent
patchset submitted to docker-compose upstream [2] that makes this tool
(initially designed as a CLI) become a library with a Python API. The whole
idea is to make docker-compose look similar to libcompose [3] (written in
Go).

If we need to utilize docker-compose features in Higgins I'd recommend
working on this with the Docker community and convincing them to land that
patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread Henry Fourie
Ryan,
   I agree that having rules in the ACL table with actions that would steer 
the packets to SFC processing would be a good approach.

-Louis

From: Ryan Moats [mailto:rmo...@us.ibm.com]
Sent: Tuesday, May 31, 2016 10:18 AM
To: John McDowall
Cc: Justin Pettit; Russell Bryant; Ben Pfaff; OpenStack Development Mailing 
List; disc...@openvswitch.org
Subject: Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall wrote on 05/26/2016 11:08:43 AM:

> From: John McDowall
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> "OpenStack Development Mailing List", Russell Bryant
> Date: 05/26/2016 11:09 AM
> Subject: Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> My (incomplete) throughts about the flow-classifier are:
>
> 1)  ACLs are more about denying access, while the flow classifier
> is more about steering selected traffic to a path, so we would need
> to deny-all except allowed flows.
> 2)  The networking-sfc team has done a nice job with the drivers so
> ovn has its own flow-classifier driver which allows us to align the
> flow-classifier with the matches supported in ovs/ovn, which could
> be an advantage.

The ACL table has a very simple flow-classifier structure and I'd
like to see if that can be re-used for the purpose of the SFC classifier
(read that I feel the Logical_Flow_Classifier table is too complex).
My initial thoughts were to look at extending the action column and
using the external-ids field to differentiate between legacy ACLs and
those that are used to intercept traffic and route it to an SFC.

>
> What were your thoughts on the schema? It adds a lot of tables and a
> lot of commands; I cannot think of any way around it.

In this case, I think that the other tables are reasonable and I'm
uncomfortable trying to stretch the existing tables to cover that
information...

Ryan

>
> Regards
>
> John
>
> From: Ryan Moats
> Date: Wednesday, May 25, 2016 at 9:12 PM
> To: John McDowall
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall wrote on 05/25/2016 07:27:46 PM:
>
> > From: John McDowall
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List",
> > Ben Pfaff, Justin Pettit, Russell Bryant
> > Date: 05/25/2016 07:28 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Ok – I will let the experts weigh in on load balancing.
> >
> > In the meantime I have attached a couple of files to show where I am
> > going. The first is sfc_dict.py and is a representation of the dict
> > I am passing from SFC to OVN. This will then translate to the
> > attached ovn-nb schema file.
> >
> > One of my concerns is that SFC almost doubles the size of the ovn-nb
> > schema but I could not think of any other way of doing it.
> >
> > Thoughts?
> >
> > John
>
> The dictionary looks fine for a starting point, and the more I look
> at the classifier, the more I wonder if we can't do something with
> the current ACL table to avoid duplication in the NB database
> definition...
>
> Ryan
>
> > From: Ryan Moats
> > Date: Wednesday, May 25, 2016 at 7:27 AM
> > To: John McDowall
> > Cc: "disc...@openvswitch.org", OpenStack Development Mailing List

[openstack-dev] [ironic] looking for documentation liaison

2016-05-31 Thread Loo, Ruby
Hi,

We're looking for a documentation liaison [1]. If you love ('like' is also 
acceptable) documentation, care that ironic has great documentation, and would 
love to volunteer, please let us know.

The position would require you to:

- attend the weekly doc team meetings [2] (or biweekly, depending on which 
times work for you), and represent ironic
- attend the weekly ironic meetings[3] and report (via the subteam reports) on 
anything that may impact ironic
- open bugs/whatever to track getting any documentation-related work done. You 
aren't expected to do the work yourself although please do if you'd like!
- know the general status of ironic documentation
- see the expectations mentioned at [1]

Please let me know if you have any questions. Thanks, and may the best 
candidate win :)

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
[2] https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
[3] https://wiki.openstack.org/wiki/Meetings/Ironic





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread Ryan Moats


John McDowall wrote on 05/26/2016 11:08:43 AM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit,
> "OpenStack Development Mailing List", Russell Bryant
> Date: 05/26/2016 11:09 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> My (incomplete) throughts about the flow-classifier are:
>
> 1)  ACLs are more about denying access, while the flow classifier
> is more about steering selected traffic to a path, so we would need
> to deny-all except allowed flows.
> 2)  The networking-sfc team has done a nice job with the drivers so
> ovn has its own flow-classifier driver which allows us to align the
> flow-classifier with the matches supported in ovs/ovn, which could
> be an advantage.

The ACL table has a very simple flow-classifier structure and I'd
like to see if that can be re-used for the purpose of the SFC classifier
(read that I feel the Logical_Flow_Classifier table is too complex).
My initial thoughts were to look at extending the action column and
using the external-ids field to differentiate between legacy ACLs and
those that are used to intercept traffic and route it to an SFC.
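
To make that concrete, a purely hypothetical sketch ('sfc' is not an
existing ACL action; it only illustrates extending the action column
and tagging via external-ids as described above):

    # classify traffic into a chain with an ACL-style rule
    ovn-nbctl acl-add sw0 from-lport 1000 'ip4.src == 10.0.0.0/24' sfc
    # mark it as SFC-related so legacy ACLs stay distinguishable
    ovn-nbctl set ACL <acl-uuid> external_ids:sfc-chain=chain1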

>
> What were your thoughts on the schema? It adds a lot of tables and a
> lot of commands; I cannot think of any way around it.

In this case, I think that the other tables are reasonable and I'm
uncomfortable trying to stretch the existing tables to cover that
information...

Ryan

>
> Regards
>
> John
>
> From: Ryan Moats 
> Date: Wednesday, May 25, 2016 at 9:12 PM
> To: John McDowall 
> Cc: Ben Pfaff, "disc...@openvswitch.org", Justin Pettit, OpenStack
> Development Mailing List, Russell Bryant
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/25/2016
> 07:27:46 PM:
>
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org" , "OpenStack
> > Development Mailing List" , Ben
> > Pfaff , Justin Pettit , Russell Bryant
> > 
> > Date: 05/25/2016 07:28 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Ok – I will let the experts weigh in on load balancing.
> >
> > In the meantime I have attached a couple of files to show where I am
> > going. The first is sfc_dict.py and is a representation of the dict
> > I am passing from SFC to OVN. This will then translate to the
> > attached ovn-nb schema file.
> >
> > One of my concerns is that SFC almost doubles the size of the ovn-nb
> > schema but I could not think of any other way of doing it.
> >
> > Thoughts?
> >
> > John
>
> The dictionary looks fine for a starting point, and the more I look
> at the classifier, the more I wonder if we can't do something with
> the current ACL table to avoid duplication in the NB database
> definition...
>
> Ryan
>
> > From: Ryan Moats 
> > Date: Wednesday, May 25, 2016 at 7:27 AM
> > To: John McDowall 
> > Cc: "disc...@openvswitch.org" , OpenStack
> > Development Mailing List , Ben Pfaff
<
> > b...@ovn.org>, Justin Pettit , Russell Bryant <
> russ...@ovn.org
> > >
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > John McDowall  wrote on 05/24/2016
> > 06:33:05 PM:
> >
> > > From: John McDowall 
> > > To: Ryan Moats/Omaha/IBM@IBMUS
> > > Cc: "disc...@openvswitch.org" , "OpenStack
> > > Development Mailing List" 
> > > Date: 05/24/2016 06:33 PM
> > > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> > >
> > > Ryan,
> > >
> > > Thanks for getting back to me and pointing me in a more OVS like
> > > direction. What you say makes sense, let me hack something together.
> > > I have been a little distracted getting some use cases together. The
> > > other area is how to better map the flow-classifier I have been
> > > thinking about it a little, but I will leave it till after we get
> > > the chains done.
> > >
> > > Your load-balancing comment was very interesting – I saw some
> > > patches for load-balancing a few months ago but nothing since. It
> > > would be great if we could align with load-balancing as that would
> > > make a really powerful solution.
> > >
> > > Regards
> > >
> > > John
> >
> > John-
> >
> > For the load balancing, 

Re: [openstack-dev] [congress] Spec for congress.conf

2016-05-31 Thread Tim Hinrichs
We should add a section to our docs that details the config option names,
their descriptions, and which ones are required.  We should backport that
to mitaka and maybe liberty.
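
For context, oslo.config (which Congress uses, per the reply below) is
where both pieces of information live: each option declares a default
and, optionally, required=True, so docs could largely be generated from
those declarations. A minimal sketch with made-up option names, not
Congress's actual ones:

    from oslo_config import cfg

    opts = [
        cfg.ListOpt('drivers', required=True,
                    help='Datasource driver classes to load'),
        cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='Address the API server listens on'),
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts)
    CONF(['--config-file', '/etc/congress/congress.conf'])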

Tim

On Mon, May 30, 2016 at 12:49 AM Masahito MUROI <
muroi.masah...@lab.ntt.co.jp> wrote:

> Hi Bryan,
>
>
> On 2016/05/28 2:52, Bryan Sullivan wrote:
> > Masahito,
> >
> > Sorry, I'm not quite clear on the guidance. Sounds like you're saying
> > all options will be defaulted by Oslo.config if not set in the
> > congress.conf file. That's OK, if I understood.
> you're right.
>
> >
> > It's clear to me that some will be deployment-specific.
> >
> > But what I am asking is where is the spec for:
> > - what congress.conf fields are supported i.e. defined for possible
> > setting in a release
> Your generated congress.conf has a list of all supported config fields.
>
> > - which fields are mandatory to be set (or Congress will simply not work)
> > - which fields are not mandatory, but must be set for some specific
> > purpose, which right now is unclear
> Without deployment-specific configs, IIRC the only thing you need to change
> from the defaults is the "drivers" field to run Congress with its default
> setting.
>
> >
> > I'm hoping the answer isn't "go look at the code"! That won't work for
> > end-users, who are looking to use Congress but not decipher the
> > meaning/importance of specific fields from the code.
> I guess your generated config describes the purpose of each config field.
>
> If you expect the spec means documents like [1], unfortunately Congress
> doesn't have these kind of document now.
>
> [1] http://docs.openstack.org/mitaka/config-reference/
>
> best regards,
> Masahito
>
> >
> > Thanks,
> > Bryan Sullivan
> >
> >> From: muroi.masah...@lab.ntt.co.jp
> >> Date: Fri, 27 May 2016 15:40:31 +0900
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [congress] Spec for congress.conf
> >>
> >> Hi Bryan,
> >>
> >> Oslo.config, which Congress uses to manage config, sets each field to its
> >> default value if you don't specify your configured values in
> >> congress.conf. In that sense, every config option is optional rather than
> >> required.
> >>
> >> In my experience, config values that differ per deployment, like IP
> >> addresses and so on, have to be configured, but others only need to be
> >> configured when you want Congress to run with different behaviors.
> >>
> >> best regard,
> >> Masahito
> >>
> >> On 2016/05/27 3:36, SULLIVAN, BRYAN L wrote:
> >> > Hi Congress team,
> >> >
> >> >
> >> >
> >> > Quick question for anyone. Is there a spec for fields in congress.conf
> >> > file? As of Liberty this has to be tox-generated but I need to know
> >> > which conf values are required vs optional. The generated sample
> output
> >> > doesn't clarify that. This is for the Puppet Module and JuJu Charm I
> am
> >> > developing with the help of RedHat and Canonical in OPNFV. I should
> have
> >> > Congress installed by default (for the RDO and JuJu installers) in the
> >> > OPNFV Colorado release in the next couple of weeks, and the
> >> > congress.conf file settings are an open question. The Puppet module
> will
> >> > also be used to create a Fuel plugin for installation.
> >> >
> >> >
> >> >
> >> > Thanks,
> >> >
> >> > Bryan Sullivan | AT&T
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> >> --
> >> 室井 雅仁(Masahito MUROI)
> >> Software Innovation Center, NTT
> >> Tel: +81-422-59-4539
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
> 室井 雅仁(Masahito MUROI)
> Software Innovation Center, NTT
> Tel: +81-422-59-4539
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-31 Thread Chris Dent

On Tue, 31 May 2016, Jay Pipes wrote:

Kinda. What the compute node needs is an InventoryList object containing all 
inventory records for all resource classes both local to it as well as 
associated to it via any aggregate-resource-pool mapping.


Okay, that mostly makes sense. A bit different from what I've proved
out so far, but plenty of room to make it go that way.


The SQL for generating this InventoryList is the following:


Presumably this would be a method on the InventoryList object
itself?

We can deal with multiple shared storage pools per aggregate at a later time. 
Just take the first resource provider in the list of inventory records 
returned from the above SQL query that corresponds to the DISK_GB resource 
class, and that is the resource provider you will deduct from.


So this seems rather fragile and pretty user-hostile. We're creating an
opportunity for people to easily replace their existing bad tracking of
disk usage with a different style of bad tracking of disk usage.

If we assign two different shared disk resource pools to the same
aggregate we've got a weird situation (unless we explicitly order
the resource providers by something).

Maybe that's fine, for now, but it seems we need to be aware of, not
only for ourselves, but in the documentation when we tell people how
to start using resource pools: Oh, by the way, for now, just
associate one shared disk pool to an aggregate.

Assume only a single resource provider of DISK_GB. It will be either a 
compute node's resource provider ID or a resource pool's resource provider 
ID.


✔

For this initial work, my idea was to have some code that, on creation of a 
resource pool and its association with an aggregate, if that resource pool 
has an inventory record with resource_class of DISK_GB then remove any 
inventory records with DISK_GB resource class for any compute node's (local) 
resource provider ID associated with that aggregate. This way we ensure the 
existing behaviour that a compute node either has local disk or it uses 
shared storage, but not both.


So let me translate that to make sure I get it:

* node X exists, has inventory of DISK_GB
* node X is in aggregate Y
* resource pool A is created
* two possible paths now: first associating the aggregate to the pool or
  first adding inventory to the pool
* in either case, when aggregate Y is associated, if the pool has
  DISK_GB, traverse the nodes in aggregate Y and drop the disk
  inventory

So, effectively, any time we associate an aggregate we need to
inspect its nodes?

What happens if we ever disassociate an aggregate from a resource pool?
Do the nodes in the aggregate have some way to get their local Inventory
back or are we going to assume that the switch to shared is one way?

In my scribbles when I was thinking this through (that led to the
start of this thread) I had imagined that rather than finding both
the resource pool and compute node resource providers when finding
available disk we'd instead see if there was resource pool, use it
if it was there, and if not, just use the compute node. Therefore if
the resource pool was ever disassociated, we'd be back to where we
were before without needing to reset the state in the artifact
world.
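
In code terms the scribble is roughly this (hypothetical names, just
to show how small the fallback is):

    def disk_provider(pool_provider, node_provider):
        # Prefer the shared pool's provider for DISK_GB when one is
        # associated; otherwise fall back to the compute node's own
        # provider. Disassociating the pool reverts behaviour for free.
        return pool_provider if pool_provider is not None else node_provider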

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-31 Thread Ryan Moats


John McDowall wrote on 05/26/2016 10:59:48 AM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS, Ben Pfaff 
> Cc: "disc...@openvswitch.org" , Justin
> Pettit , OpenStack Development Mailing List
> , Russell Bryant 
> Date: 05/26/2016 11:00 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Agree with your description of the problem. The only thing I would
> add is that in the case of bi-directional chains the return flows
> need to go through the same VNF (port-pair).

I'm pretty sure that is caught automagically, isn't it?

Ryan

>
> Regards
>
> John
>
> From: Ryan Moats 
> Date: Wednesday, May 25, 2016 at 9:29 PM
> To: Ben Pfaff 
> Cc: "disc...@openvswitch.org" , John McDowall <
> jmcdow...@paloaltonetworks.com>, Justin Pettit ,
> OpenStack Development Mailing List  >, Russell Bryant 
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ben Pfaff  wrote on 05/25/2016 07:44:43 PM:
>
> > From: Ben Pfaff 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: John McDowall ,
> > "disc...@openvswitch.org" , OpenStack
> > Development Mailing List , Justin
> > Pettit , Russell Bryant 
> > Date: 05/25/2016 07:44 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > On Wed, May 25, 2016 at 09:27:31AM -0500, Ryan Moats wrote:
> > > As I understand it, Table 0 identifies the logical port and logical
> > > flow. I'm worried that this means we'll end up with separate bucket
> > > rules for each ingress port of the port pairs that make up a port
> > > group, leading to a cardinality product in the number of rules.
> > > I'm trying to think of a way where Table 0 could identify the packet
> > > as being part of a particular port group, and then I'd only need one
> > > set of bucket rules to figure out the egress side.  However, the
> > > amount of free metadata space is limited and so before we go down
> > > this path, I'm going to pull Justin, Ben and Russell in to see if
> > > they buy into this idea or if they can think of an alternative.
> >
> > I've barely been following the discussion, so a recap of the question
> > here would help a lot.
> >
>
> Sure (and John gets to correct me where I'm wrong) - the SFC proposal
> is to carry a chain as an ordered set of port groups, where each group
> consists of multiple port pairs. Each port pair consists of an ingress
> port and an egress port, so that traffic is load balanced between
> the ingress ports of a group. Traffic from the egress port of a group
> is sent to the ingress port of the next group (ingress and egress here
> are from the point of view of the thing getting the traffic).
>
> I was suggesting to John that from the view of the switch, this would
> be reversed in the openvswitch rules - the proposed CHAINING stage
> in the ingress pipeline would apply the classifier for traffic entering
> a chain and identify traffic coming from an egress SFC port in the
> midst of a chain. The egress pipeline would identify the next ingress SFC
> port that gets the traffic or the final destination for traffic exiting
> the chain.
>
> Further, I pointed him at the select group for how traffic could be
> load balanced between the different ports that are contained in a port
> group, but that I was worried that I'd need a cartesian product of rules
> in the egress chain stage.  Having thought about this some more, I'm
> realizing that I'm confused and the number of rules should not be that
> bad:
>
> - Table 0 will identify the logical port the traffic comes from
> - The CHAINING stage of the ingress pipeline can map that logical
>   port information to the port group the port is part of.
> - The CHAINING stage of the egress pipeline would use that port
>   group information to select the next logical port via a select group.
>
> I believe this requires a total number of rules in the CHAINING stages
> of the order of the number of ports in the service chain.
>
> The above is predicated on carrying the port group information from
> the ingress pipeline to the egress pipeline in metadata, so I would
> be looking to you for ideas on where this data could be carried, since
> I know that we don't have infinite space for said metadata...
>
> Ryan
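
(For reference, the select-group load balancing described above uses a
standard OVS mechanism; an illustrative example with made-up group and
port numbers:

    ovs-ofctl -O OpenFlow13 add-group br-int \
        'group_id=1,type=select,bucket=output:10,bucket=output:11'
    ovs-ofctl -O OpenFlow13 add-flow br-int \
        'table=0,in_port=5,actions=group:1'

Each packet hitting the flow is hashed to one bucket, i.e. one ingress
port of the port-pair group.)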
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-05-31 Thread Jeremy Stanley
On 2016-05-31 19:20:22 +0300 (+0300), Sergey Kraynev wrote:
[...]
> * *Second part related with changes with future repositories and*
> important for Openstack Infra team *
> JFYI, what we plan to do as next steps.
> 
> Murano team will re-create some applications in their repositories using
> name murano-examples, as reference implementation of some of the
> applications which Murano team decides to keep in their project for
> reference. This can be done by Murano team, no external help needed.
> 
> Some of the applications (complicated and big applications like CI/CD
> pipeline or Kubernetes cluster) will have their own repositories in the
> future under openstack/. Actually the CI/CD pipeline already lives in a
> separate repository; probably Kubernetes should also be moved to a
> separate repo going forward. Hopefully this shouldn't be a big deal for
> OpenStack Infra
> team.
> *However* we would like to get confirmation, that *Infra team* is ok with
> it?

Infra hasn't balked in the past at project teams having however many
Git repositories they need to be able to effectively maintain their
software (see configuration management projects for examples of
fairly large sets of repos). Do you have any guesses as to how many
you're talking about creating in, say, the next year?

> The suggestion is to use a common template for the names of repositories
> with Murano applications in the future, namely openstack/murano-app-...
> (openstack/murano-app-kubernetes, openstack/murano-app-docker, ...). We'll
> describe the overall approach in more detail, using
> https://launchpad.net/murano-apps as the entry point.
> 
> Simple applications, or applications where there is no active development,
> will keep being stored in murano-apps until there is a demand to move some
> of them to a separate repository. At that point we'll ask the OpenStack
> Infra team to do it.
[...]

Can you clarify what it is you're going to ask Infra to do? I think
the things you're describing can be done entirely through
configuration (you just need some project-config-core reviewers to
approve the changes you propose), but that might mean I'm
misunderstanding you.
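
For reference, the kind of change being described is usually just a new
entry in project-config's gerrit/projects.yaml, roughly like the sketch
below (the description and ACL path are illustrative assumptions, not a
real proposal):

  - project: openstack/murano-app-kubernetes
    description: Kubernetes application for the Murano app catalog
    acl-config: /home/gerrit2/acls/openstack/murano-apps.config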
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Finishing the job on threat analysis for Kolla

2016-05-31 Thread Chivers, Doug
Thanks for following up Steve, the sessions at the summit were extremely useful.

Both Rob and I have been caught up with the day-job since we got back from the 
summit, but will discuss next steps and agree a plan this week.

Regards

Doug




From: "Steven Dake (stdake)" >
Date: Tuesday, 24 May 2016 at 17:16
To: 
"openstack-dev@lists.openstack.org" 
>
Cc: Doug Chivers >, 
"robcl...@uk.ibm.com" 
>
Subject: [kolla][security] Finishing the job on threat analysis for Kolla

Rob and Doug,

At Summit we had 4 hours of highly productive work producing a list of "things" 
that can be "threatened".  We have about 4 or 5 common patterns where we follow 
the principle of least privilege.  On Friday of Summit we produced a list of 
all the things (in this case deployed containers).  I'm not sure who - I 
think it was Rob - was working on a flow diagram for the least privileged 
case.  From 
there, the Kolla coresec team can produce the rest of the diagrams for 
increasing privileges.

I'd like to get that done, then move on to next steps.  Not sure what the next 
steps are, but let's cover the flow diagrams first since we know we need those.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-05-31 Thread Ihor Dvoretskyi
Sergey,

Great initiative, +1 from me.
On May 31, 2016 7:24 PM, "Sergey Kraynev"  wrote:

> Hi Infra, Murano and App Catalog teams.
>
> We discussed in some more details plan suggested below with App Catalog,
> Murano and (partially) Infra team regarding moving repositories with source
> code of Murano applications out of area of responsibility of Murano core
> team.
>
> *First part: related to changes in the existing Murano repository and
> important for Murano App developers*
> 
> The decision was made to:
> - create a new gerrit group to review/release/test the repository, namely
> murano-apps-core
> - not rename the murano-apps project and repository, just assign the team
> above as owner (murano-apps-core)
> 
> The previous owner of this project (murano-core) will be part of the new
> group to continue sharing expertise in Murano with this new team and help
> them going forward.
> 
> There is a patch on review for it: https://review.openstack.org/#/c/323340/3
> 
> Separating murano-apps from Murano is the first step in separating work
> on Murano applications from work on the Murano core project.
> 
> *Second part: related to changes to future repositories and important
> for the OpenStack Infra team*
> JFYI, this is what we plan to do as next steps.
>
> The Murano team will re-create some applications in their repositories
> using the name murano-examples, as a reference implementation of the
> applications which the Murano team decides to keep in their project for
> reference. This can be done by the Murano team; no external help is needed.
>
> Some of the applications (big, complicated ones like the CI/CD pipeline
> or the Kubernetes cluster) will have their own repositories in the future
> under openstack/. Actually, the CI/CD pipeline already lives in a separate
> repository; the Kubernetes app should probably also be moved to a separate
> repo going forward. Hopefully this shouldn't be a big deal for the
> OpenStack Infra team.
> *However*, we would like to get confirmation that the *Infra team* is OK
> with it.
>
> The suggestion is to use a common template for the names of repositories
> with Murano applications in the future, namely openstack/murano-app-...
> (openstack/murano-app-kubernetes, openstack/murano-app-docker, ...). We'll
> describe the overall approach in more detail, using
> https://launchpad.net/murano-apps as the entry point.
>
> Simple applications, or applications where there is no active development,
> will keep being stored in murano-apps until there is a demand to move some
> of them to a separate repository. At that point we'll ask the OpenStack
> Infra team to do it.
>
> We hope that this will help to clearly identify the area of responsibility
> around development of Murano applications, helping to onboard new
> contributors/teams mostly using the efforts of the Murano Apps team. I.e.,
> in this model, creating a new application in the murano-apps repository
> just means creating a new directory with the new application, which can be
> done by the murano-apps team on their own. In that case we'll need to
> understand how to organize CI for different applications stored in the same
> repository, but I think we'll figure it out; it's not a blocker.
>
> This model allows us to ask for involvement of the OpenStack Infra team
> only in the rare cases when there is a need to create a separate repository
> for an especially big and complicated Murano application which should be
> treated as a dedicated project with its own development team and CI.
>
>
> Any suggestions and questions are welcome.
>
> On 25 May 2016 at 14:37, Igor Marnat  wrote:
>
>> Colleagues,
>> having attended many sessions and talked to many customers, partners
>> and contributors in Austin, I’d like to suggest several improvements to how
>> we develop OpenStack apps and work with the Community App Catalog (
>> https://apps.openstack.org/).
>>
>> Key goals to achieve are:
>> - Provide contributors with an ability to collaborate on OpenStack
>> apps development
>> - Provide contributors and consumers with a transparent workflow to
>> manage their apps
>> - Provide consumers with information about apps - how they were developed
>> and tested
>> - To summarize - introduce a way to build a community working on
>> OpenStack apps
>>
>> *What is an OpenStack application*
>> OpenStack is about 6 years young, and all these years discussions about
>> it have been in progress. The variety of applications is huge, from LAMP
>> stacks and legacy Java apps to telco workloads and VNF apps. There is a
>> working group which works on a definition of "What is an OpenStack
>> application"; hopefully the community will agree on a definition soon.
>>
>> For the sake of our discussion below, let us agree on a simple approach:
>> an OpenStack application is any software asset which 1. can be executed on
>> an OpenStack cloud, and 2. lives on apps.openstack.org.  So far there are
>> Murano applications, Heat templates, Glance images and TOSCA templates.
>>
>> There are many good OpenStack applications in the world which don't live
>> in the OpenStack App Catalog. However, let us for now concentrate on those
>> which do, just for the sake of this discussion.

Re: [openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-05-31 Thread Sergey Kraynev
Hi Infra, Murano and App Catalog teams.

We discussed in some more details plan suggested below with App Catalog,
Murano and (partially) Infra team regarding moving repositories with source
code of Murano applications out of area of responsibility of Murano core
team.

*First part: related to changes in the existing Murano repository and
important for Murano App developers*

The decision was made to:
- create a new gerrit group to review/release/test the repository, namely
murano-apps-core
- not rename the murano-apps project and repository, just assign the team
above as owner (murano-apps-core)

The previous owner of this project (murano-core) will be part of the new
group to continue sharing expertise in Murano with this new team and help
them going forward.

There is a patch on review for it: https://review.openstack.org/#/c/323340/3

Separating murano-apps from Murano is the first step in separating work
on Murano applications from work on the Murano core project.

*Second part: related to changes to future repositories and important
for the OpenStack Infra team*
JFYI, this is what we plan to do as next steps.

The Murano team will re-create some applications in their repositories
using the name murano-examples, as a reference implementation of the
applications which the Murano team decides to keep in their project for
reference. This can be done by the Murano team; no external help is needed.

Some of the applications (big, complicated ones like the CI/CD pipeline
or the Kubernetes cluster) will have their own repositories in the future
under openstack/. Actually, the CI/CD pipeline already lives in a separate
repository; the Kubernetes app should probably also be moved to a separate
repo going forward. Hopefully this shouldn't be a big deal for the
OpenStack Infra team.
*However*, we would like to get confirmation that the *Infra team* is OK
with it.

The suggestion is to use a common template for the names of repositories
with Murano applications in the future, namely openstack/murano-app-...
(openstack/murano-app-kubernetes, openstack/murano-app-docker, ...). We'll
describe the overall approach in more detail, using
https://launchpad.net/murano-apps as the entry point.

Simple applications, or applications where there is no active development,
will keep being stored in murano-apps until there is a demand to move some
of them to a separate repository. At that point we'll ask the OpenStack
Infra team to do it.

We hope that this will help to clearly identify the area of responsibility
around development of Murano applications, helping to onboard new
contributors/teams mostly using the efforts of the Murano Apps team. I.e.,
in this model, creating a new application in the murano-apps repository
just means creating a new directory with the new application, which can be
done by the murano-apps team on their own. In that case we'll need to
understand how to organize CI for different applications stored in the same
repository, but I think we'll figure it out; it's not a blocker.

This model allows us to ask for involvement of the OpenStack Infra team
only in the rare cases when there is a need to create a separate repository
for an especially big and complicated Murano application which should be
treated as a dedicated project with its own development team and CI.


Any suggestions and questions are welcome.

On 25 May 2016 at 14:37, Igor Marnat  wrote:

> Colleagues,
> having attended many sessions and talked to many customers, partners
> and contributors in Austin, I’d like to suggest several improvements to how
> we develop OpenStack apps and work with the Community App Catalog (
> https://apps.openstack.org/).
>
> Key goals to achieve are:
> - Provide contributors with an ability to collaborate on OpenStack
> apps development
> - Provide contributors and consumers with a transparent workflow to
> manage their apps
> - Provide consumers with information about apps - how they were developed
> and tested
> - To summarize - introduce a way to build a community working on
> OpenStack apps
>
> *What is an OpenStack application*
> OpenStack is about 6 years young, and all these years discussions about
> it have been in progress. The variety of applications is huge, from LAMP
> stacks and legacy Java apps to telco workloads and VNF apps. There is a
> working group which works on a definition of "What is an OpenStack
> application"; hopefully the community will agree on a definition soon.
>
> For the sake of our discussion below, let us agree on a simple approach:
> an OpenStack application is any software asset which 1. can be executed on
> an OpenStack cloud, and 2. lives on apps.openstack.org.  So far there are
> Murano applications, Heat templates, Glance images and TOSCA templates.
>
> There are many good OpenStack applications in the world which don't live
> in the OpenStack App Catalog. However, let us for now concentrate on those
> which do, just for the sake of this discussion.
>
> *Introduction to OpenStack development ecosystem*
> OpenStack was introduced about 6 years ago. Over these years
> community grown 

Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-31 Thread Nikhil Komawar
Hey,


Thanks for the feedback. 0800UTC is 4am EDT for some of the US Glancers :-)


I propose this time, which may help the folks in Eastern and Central US
time zones:
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=7&hour=11&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78


If it still does not work, I may have to poll the folks in EMEA on how
strong their intentions are for joining this call, because another time
slot that works for folks in Australia & the US might be too inconvenient
for those in EMEA:
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016&month=6&day=6&hour=23&min=0&sec=0&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78


Here's the map of cities that may be involved:
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607&p1=881&p2=196&p3=47&p4=22&p5=157&p6=87&p7=24&p8=78


Please let me know which ones are possible and we can try to work around
the times.


On 5/31/16 2:54 AM, Blair Bethwaite wrote:
> Hi Nikhil,
>
> 2000UTC might catch a few kiwis, but it's 6am everywhere on the east
> coast of Australia, and even earlier out west. 0800UTC, on the other
> hand, would be more sociable.
>
> On 26 May 2016 at 15:30, Nikhil Komawar  wrote:
>> Thanks Sam. We purposefully chose that time to accommodate some of our
>> community members from the Pacific. I'm assuming it's just your case
>> that's not working out for that time? So, hopefully other Australian/NZ
>> friends can join.
>>
>>
>> On 5/26/16 12:59 AM, Sam Morrison wrote:
>>> I’m hoping some people from the Large Deployment Team can come along. It’s 
>>> not a good time for me in Australia but hoping someone else can join in.
>>>
>>> Sam
>>>
>>>
 On 26 May 2016, at 2:16 AM, Nikhil Komawar  wrote:

 Hello,


 Firstly, I would like to thank Fei Long for bringing up a few operator
 centric issues to the Glance team. After chatting with him on IRC, we
 realized that there may be more operators who would want to contribute
 to the discussions to help us take some informed decisions.


 So, I would like to call for a 2 hour sync for the Glance team along
 with interested operators on Thursday June 9th, 2016 at 2000UTC.


 If you are interested in participating please RSVP here [1], and
 participate in the poll for the tool you'd prefer. I've also added a
 section for Topics and provided a template to document the issues clearly.


 Please be mindful of everyone's time and if you are proposing issue(s)
 to be discussed, come prepared with well documented & referenced topic(s).


 If you've feedback that you are not sure if appropriate for the
 etherpad, you can reach me on irc (nick: nikhil).


 [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync

 --

 Thanks,
 Nikhil Komawar
 Newton PTL for OpenStack Glance


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> --
>>
>> Thanks,
>> Nikhil
>>
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-31 Thread Jay Pipes

On 05/29/2016 06:19 PM, Chris Dent wrote:

This gets a bit complex (to me) but: The idea for step 4 is that the
resource tracker will be modified such that:

* if the compute node being claimed by an instance is a member of some
   aggregates
* and one of those  aggregates is associated with a resource provider

> * and the resource provider has inventory of resource class DISK_GB

Kinda. What the compute node needs is an InventoryList object containing 
all inventory records for all resource classes, both local to it and 
associated to it via any aggregate-resource-pool mapping.


The SQL for generating this InventoryList is the following:

SELECT
  i.resource_provider_id,
  i.resource_class_id,
  i.total,
  i.reserved,
  i.min_unit,
  i.max_unit,
  i.step_size,
  i.allocation_ratio
FROM inventories AS i
 INNER JOIN resource_providers AS rp
  ON i.resource_provider_id = rp.id
WHERE rp.uuid = 'compute-node-1'
OR rp.id IN (
  SELECT rpa1.resource_provider_id
  FROM resource_provider_aggregates rpa1
INNER JOIN resource_provider_aggregates rpa2
 ON rpa1.aggregate_id = rpa2.aggregate_id
INNER JOIN resource_providers rp
 ON rpa2.resource_provider_id = rp.id
  WHERE rp.uuid = 'compute-node-1'
);


then rather than claiming disk on the compute node, claim it on the
resource provider.


Yes.


The first hurdle to overcome when doing this is to trace the path
from compute node, through aggregates, to a resource provider. We
can get a list of aggregates by host, and then we can use those
aggregates to get a list of resource providers by joining across
ResourceProviderAggregates, and we can join further to get just
those ResourceProviders which have Inventory of resource class
DISK_GB.

The issue here is that the result is a list. As far as I can tell
we can end up with >1 ResourceProviders providing DISK_GB for this
host because it is possible for a host to be in more than one
aggregate and it is necessary for an aggregate to be able to associate
with more than one resource provider.


Well, yes, it *is* possible in the future to have >1 resource pool for 
shared storage attached to an aggregate. However, the thinking for this 
first round of code was that currently compute nodes either have local 
disk or they use a (single per availability zone) shared storage 
solution for the VM's ephemeral disks.


So, for deployers that use a shared storage solution, they will need to 
create a resource pool via:


 `openstack resource-pool create`

associate that resource pool with a single aggregate (and create that 
aggregate if it does not already exist) that is associated with ALL 
compute nodes. The compute nodes in these deployments will have *no 
corresponding inventory record for DISK_GB*.


For deployers who use local disk on each compute node, no action is needed.

We can deal with multiple shared storage pools per aggregate at a later 
time. Just take the first resource provider in the list of inventory 
records returned from the above SQL query that corresponds to the 
DISK_GB resource class, and that is the resource provider you will deduct from.
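
As a rough illustration of that rule in plain Python (the attribute names
mirror the SQL columns above; this is not actual Nova code):

  def disk_gb_provider(inventories):
      # inventories: the records returned by the query above, each with a
      # resource class name and a resource provider ID
      for inv in inventories:
          if inv.resource_class == 'DISK_GB':
              # Either the compute node's own provider ID (local disk) or
              # the shared resource pool's provider ID
              return inv.resource_provider_id
      return None  # no DISK_GB inventory visible to this compute node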



If the above is true and we can find two resource providers providing
DISK_GB how does:

* the resource tracker know where (to which provider) to write its
   disk claim?
* the scheduler (the next step in the work items) make choices and
   declarations amongst providers? (Yes, place on that node, but use
disk provider
   X, not Y)


Assume only a single resource provider of DISK_GB. It will be either a 
compute node's resource provider ID or a resource pool's resource 
provider ID.



If the above is not true, why is it not true? (show me the code
please)

If the above is an issue, but we'd like to prevent it, how do we fix it?
Do we need to make it so that when we associate an aggregate with a
resource provider we check to see that it is not already associated with
some other provider of the same resource class? This would be a
troubling approach because as things currently stand we can add Inventory
of any class and aggregates to a provider at any time and the amount of
checking that would need to happen is at least bi-directional if not multi
and that level of complexity is not a great direction to be going.


For this initial work, my idea was to have some code that, on creation 
of a resource pool and its association with an aggregate, checks whether 
that resource pool has an inventory record with a resource_class of 
DISK_GB, and if so removes any DISK_GB inventory records for any 
compute node's (local) resource provider ID associated with that 
aggregate. This way we ensure the existing behaviour that a compute node 
either has local disk or it uses shared storage, but not both.


Best,
-jay


So, yeah, if someone could help me tease this out, that would be
great, thanks.


[1]
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#work-items





Re: [openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for Shade core

2016-05-31 Thread Jeremy Stanley
On 2016-05-31 08:53:22 -0400 (-0400), David Shrewsbury wrote:
> Ricardo has been working with shade for a while now, has been
> great at helping out with reviews, and has offered some quality
> code contributions. He has showed a good understanding of the code
> base and coding guidelines, and has been helping to review (and
> adding to) the new OpenStack Ansible modules that depend so highly
> on shade.
> 
> Shade could use more cores as our user base has grown and I think
> he'd be an awesome addition.

Thanks for the suggestion! I've looked through his review history
for openstack-infra/shade changes, and he seems to be consistently
reviewing and catching bugs there in recent months. Given other
regular reviewers/contributors to that repo have also expressed
interest in having him, I've added Ricardo as a shade-core member
(after confirming he's also cool with the added responsibility).

Thank you for stepping up to help on shade reviews, Ricardo!
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] support of NSH in networking-SFC

2016-05-31 Thread Duarte Cardoso, Igor
Hi Tim,

+1
Focus on the plugin and API while improving the n-sfc<->ODL interaction to 
match that.

In parallel, early (non-merged) support in the OVS driver itself could be 
attempted, based on the unofficial April 2016 NSH patches for OVS [1]. After 
official support gets merged, it would be less troublesome to adapt, since the 
big hurdles of mapping the abstraction to OVS would have been mostly overcome.

[1] 
https://github.com/yyang13/ovs_nsh_patches/tree/98e1d3d6b1ed49d902edaede11820853b0ad5037
 
Best regards,
Igor.


-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: Tuesday, May 31, 2016 4:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hey Paul,
ODL uses OVS as its dataplane (but is also not limited to just OVS), and ODL 
already supports IETF SFC today in the ODL SFC project.  My point was Neutron 
is no longer in scope of managing OVS, since it is managed by ODL.  I think 
your comments echo the 2 sides of this discussion - whether or not OVS is in 
scope of a protocol implementation in Neutron networking-sfc.  In my opinion it 
is if you consider OVS driver support, but it is not if you consider a 
different networking-sfc driver.

You can implement IETF NSH in the networking-sfc API/DB Model, without caring 
if it is actually supported in OVS (when using ODL as a driver) because all 
networking-sfc cares about should be if its driver correctly supports SFC.  To 
that end, if you are using ODL as your SFC driver, then absolutely you should 
verify it is an IETF SFC compliant API/model.  However, outside of that scope, 
it is not networking-sfc's responsibility to care what ODL is using as its 
dataplane backend or, for that matter, its version of OVS.  It is now up to ODL 
to manage that for networking-sfc, and networking-sfc just needs to ensure it 
can talk to ODL.  

I think this is a pragmatic way to go, since networking-sfc doesn't yet support 
an ODL driver and we are in the process of adding one.  We could leave the 
networking-sfc OVS driver alone, add support for NSH to the networking-sfc 
plugin, and then only allow API calls that use NSH to work if the ODL networking 
driver is the backend.  That way we allow for some experimental NSH support in 
networking-sfc without officially supporting it in the OVS driver until it is 
officially supported in OVS.

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Paul Carver" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 30, 2016 10:12:34 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver, however, depends on OVS 
> as the dataplane - and I can see a solid argument for only supporting an 
> official version, with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned an error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting 
its own fork of OvS that has NSH support or is ODL expecting that the user will 
patch OvS themselves?

I don't know the details of why OvS hasn't added NSH support so I can't judge 
the validity of the concerns, but one way or another there has to be a 
production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH capable 
dataplane then it's reasonable to consider that the ODL driver could be the 
first networking-sfc driver to support NSH. However, we still need to make sure 
that the API is an abstraction, not implementation specific.

But if ODL is not supporting its own NSH capable dataplane, instead expecting 
the user to run a patched OvS that doesn't have upstream acceptance then I 
think we would be building a rickety tower by piling networking-sfc on top of 
that unstable base.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #83

2016-05-31 Thread Emilien Macchi
On Mon, May 30, 2016 at 8:34 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting-4.
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160531
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!

We did our meeting and here are the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-05-31-15.00.html

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-31 Thread Tim Rozet
Hey Paul,
ODL uses OVS as its dataplane (but is also not limited to just OVS), and ODL 
already supports IETF SFC today in the ODL SFC project.  My point was Neutron 
is no longer in scope of managing OVS, since it is managed by ODL.  I think 
your comments echo the 2 sides of this discussion - whether or not OVS is in 
scope of a protocol implementation in Neutron networking-sfc.  In my opinion it 
is if you consider OVS driver support, but it is not if you consider a 
different networking-sfc driver.

You can implement IETF NSH in the networking-sfc API/DB Model, without caring 
if it is actually supported in OVS (when using ODL as a driver) because all 
networking-sfc cares about should be if its driver correctly supports SFC.  To 
that end, if you are using ODL as your SFC driver, then absolutely you should 
verify it is an IETF SFC compliant API/model.  However, outside of that scope, 
it is not networking-sfc's responsibility to care what ODL is using as its 
dataplane backend or, for that matter, its version of OVS.  It is now up to ODL 
to manage that for networking-sfc, and networking-sfc just needs to ensure it 
can talk to ODL.  

I think this is a pragmatic way to go, since networking-sfc doesn't yet support 
an ODL driver and we are in the process of adding one.  We could leave the 
networking-sfc OVS driver alone, add support for NSH to the networking-sfc 
plugin, and then only allow API calls that use NSH to work if the ODL networking 
driver is the backend.  That way we allow for some experimental NSH support in 
networking-sfc without officially supporting it in the OVS driver until it is 
officially supported in OVS.
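
As a rough sketch of that guard in Python (all names here are hypothetical,
not the actual networking-sfc plugin code):

  class UnsupportedCorrelation(Exception):
      pass

  def validate_port_pair(port_pair, active_driver):
      # Reject NSH correlation until the enabled backend supports it
      if port_pair.get('correlation') == 'nsh' and active_driver != 'odl':
          raise UnsupportedCorrelation('NSH port pairs require the ODL driver')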

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Paul Carver" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 30, 2016 10:12:34 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver, however, depends on OVS 
> as the dataplane - and I can see a solid argument for only supporting an 
> official version, with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned an error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project 
supporting its own fork of OvS that has NSH support or is ODL expecting 
that the user will patch OvS themselves?

I don't know the details of why OvS hasn't added NSH support so I can't 
judge the validity of the concerns, but one way or another there has to 
be a production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH 
capable dataplane then it's reasonable to consider that the ODL driver 
could be the first networking-sfc driver to support NSH. However, we 
still need to make sure that the API is an abstraction, not 
implementation specific.

But if ODL is not supporting its own NSH capable dataplane, instead 
expecting the user to run a patched OvS that doesn't have upstream 
acceptance then I think we would be building a rickety tower by piling 
networking-sfc on top of that unstable base.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] weekly meeting

2016-05-31 Thread Emilien Macchi
On Mon, May 30, 2016 at 8:45 AM, Emilien Macchi  wrote:
> Hi!
>
> We'll have our weekly meeting tomorrow at 2pm UTC on
> #openstack-meeting-alt.
>
> Here's a first agenda:
> https://wiki.openstack.org/wiki/Meetings/TripleO#Agenda_for_next_meeting
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!

We did our meeting, you can read notes:
http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-05-31-14.00.html

See you on #tripleo,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-31 Thread Ben Pfaff
On Mon, May 30, 2016 at 10:12:34PM -0400, Paul Carver wrote:
> I don't know the details of why OvS hasn't added NSH support so I can't
> judge the validity of the concerns, but one way or another there has to be a
> production-quality dataplane for networking-sfc to front-end.

It looks like the last time anyone submitted NSH patches to Open vSwitch
was September 2015.  They got some reviews but no new version has been
posted since.

Basically, we can't add NSH support if no one submits patches.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-31 Thread Jay Pipes

On 05/30/2016 11:22 PM, Cheng, Yingxin wrote:

Hi, cdent:

This problem arises because the RT (resource tracker) only knows to
consume the DISK resource in its host, but it still doesn’t know
exactly in which resource provider to place the consumption. That is to
say, the RT still needs to *find* the correct resource provider in
step 4. *Step 4* finally causes the explicit problem that
“the RT can find two resource providers providing DISK_GB, but it
doesn’t know which is right”, as you’ve encountered.

The problem is: the RT needs to make a decision to choose a resource
provider when it finds multiple of them according to *step 4*.
However, the scheduler should already know which resource provider to
choose when it is making a decision, and it doesn’t send this
information to the compute nodes, either. That is also to say, there is
a missing step in the bp g-r-p: we should “improve the filter scheduler
so that it can make correct decisions with generic resource pools”, and
the scheduler should tell the compute node RT not only about the
resource consumption in the compute-node resource provider, but also
where to consume shared resources, i.e. the related resource-provider
IDs.


Well, that is the problem with not having the scheduler actually do the 
claiming of resources on a provider. :(


At this time, the compute node (specifically, its resource tracker) is 
the thing that does the actual claim of the resources in a request 
against the resource inventories it understands for itself.


This is why even though the scheduler "makes a placement decision" for 
things like which NUMA cell/node that a workload will be placed on [1], 
that decision is promptly forgotten about and ignored and the compute 
node makes a totally different decision [2] when claiming NUMA topology 
resources after it receives the instance request containing NUMA 
topology requests. :(


Is this silly and should, IMHO, the scheduler *actually* do the claim of 
resources on a provider? Yes, see [3] which still needs a spec pushed.


Is this going to change any time soon? Unfortunately, no.

Unfortunately, a compute node isn't aware that it may be consuming 
resources from a shared storage pool, which is what Step #4 is all 
about: making the compute node aware that it is using a shared storage 
pool if it is indeed using a shared storage pool. I'll answer Chris' 
email directly with more details.


Best,
-jay

[1] 
https://github.com/openstack/nova/blob/83cd67cd89ba58243d85db8e82485bda6fd00fde/nova/scheduler/filters/numa_topology_filter.py#L81
[2] 
https://github.com/openstack/nova/blob/83cd67cd89ba58243d85db8e82485bda6fd00fde/nova/compute/claims.py#L215
[3] 
https://blueprints.launchpad.net/nova/+spec/resource-providers-scheduler-claims



Hope it can help you.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for Shade core

2016-05-31 Thread Robla Mota, Yolanda
Only great words about Ricardo. He has been working hard on shade and Ansible; 
he will be a great addition to the shade-core team.

Yolanda Robla Mota
Cloud automation and Distribution Engineer
yolanda.robla-m...@hp.com
+34 605641639
Spain

From: Monty Taylor
Sent: Tuesday, 31 May 2016 15:00
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for 
Shade core

On 05/31/2016 08:53 AM, David Shrewsbury wrote:
> Ricardo has been working with shade for a while now, has been great at
> helping out with reviews, and has offered some quality code contributions.
> He has showed a good understanding of the code base and coding guidelines,
> and has been helping to review (and adding to) the new OpenStack Ansible
> modules that depend so highly on shade.
>
> Shade could use more cores as our user base has grown and I think he'd be
> an awesome addition.

I wholeheartedly concur. Ricky has been doing a great job. Having him in
shade-core would be great.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Docs] Enhance Docs Landing Page Descriptive Text

2016-05-31 Thread Andreas Jaeger
On 2016-05-14 00:08, Laura Clymer wrote:
> Hi everyone,
> 
> In the current Ubuntu install guide, there is this section:
> http://docs.openstack.org/mitaka/install-guide-ubuntu/common/app_support.html
> 
> It contains a good deal of description on the type of information
> contained in each of the release-level docs. This type of description is
> very helpful to new users in that it helps them understand where to look
> for information. Given the major re-design for the Install Guide coming
> up, I would like to propose that the text in this section is migrated
> (and perhaps enhanced) to the docs landing page.
> 
> I am happy to write up a specification for the suggested text and submit
> it for further review, but I wanted to see if anyone else thinks this is
> a good idea?

The app-support page is part of each document, see for example:

http://docs.openstack.org/admin-guide/common/app_support.html

So, the reorg will not remove this page at all, no worries,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] orchestration and db_sync

2016-05-31 Thread Dolph Mathews
On Tue, May 31, 2016 at 8:41 AM David Stanek  wrote:

> On Fri, May 27, 2016 at 12:08 PM, Ryan Hallisey wrote:
>
> These changes do not all happen at the same time for an OpenStack
> installation.
>
> > - Create the service's users and add a password into the database
>
> Should only happen once during installation.
>
> > - Sync the service with the database
>
> Should happen during installation and for every upgrade.
>
> > - Start the service
> >
> > I was wondering if for some services they could be aware of whether or
> > not they need to sync with the database at startup.  Or maybe the
> > service runs a db_sync every time it starts?  I figured I would start a
> > thread about this because Keystone has some flexibility when running
> > N+1 in a cluster of N. If Keystone has that ability, maybe Keystone
> > could db_sync each time it starts without harming the cluster?
>
> This isn't something I would want to see for a few reasons. The most
> important one is that I think the decision to run db_sync needs to be
> explicit. An operator should run it when they are ready (maybe they
> need to shut something down, ensure up-to-date backups, etc.).
>

+1


>
> Another issue is database modification permissions. The user running
> the application, as well as the DB user the application uses,
> shouldn't have access to DDL for security reasons. Little Bobby
> Tables' mom found this out the hard way[1].
>

+2


>
> 1. https://xkcd.com/327/
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
-Dolph
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Daniel P. Berrange
On Tue, May 31, 2016 at 08:19:33AM -0400, Sean Dague wrote:
> On 05/30/2016 06:25 AM, Kashyap Chamarthy wrote:
> > On Thu, May 26, 2016 at 10:55:47AM -0400, Sean Dague wrote:
> >> On 05/26/2016 05:38 AM, Kashyap Chamarthy wrote:
> >>> On Wed, May 25, 2016 at 05:42:04PM +0200, Kashyap Chamarthy wrote:
> >>>
> >>> [...]
> >>>
>  So, in short, the central issue seems to be this: the custom 'gate64'
>  model is not being translated by libvirt into a model that QEMU can
>  recognize.
> >>>
> >>> An update:
> >>>
> >>> Upstream libvirt points out that this turns to be regression, and
> >>> bisected it to commit (in libvirt Git): 1.2.9-31-g445a09b -- "qemu:
> >>> Don't compare CPU against host for TCG".
> >>>
> >>> So, I expect there's going to be fix pretty soon upstream libvirt.
> >>
> >> Which is good... I wonder how long we'll be waiting for that back in our
> >> distro packages though.
> > 
> > Yeah, until the fix lands, our current options seem to be:
> > 
> >   (a) Revert to a known good version of libvirt
> 
> Downgrading libvirt so dramatically isn't a thing we'll be able to do.
> 
> >   (b) Use nested virt (i.e. ) -- I doubt is possible
> >   on RAX environment, which is using Xen, last I know.
> 
> We turned off nested virt even where it was enabled, because it locks up
> at a non trivial rate. So not really an option.

Hmm, if the guest is using 'qemu' and not 'kvm', then there should be
no dependency between the host CPU and guest CPU whatsoever, i.e. we can
present an arbitrary CPU to the guest, whether the host CPU has matching
features or not.

I wonder if there is a bug in Nova where it is trying to do a host/guest
CPU compatibility check even for 'qemu' guests, when it should only do
them for 'kvm' guests.

If we can avoid the CPU compatibility check with qemu guests, then the
fact that there's a libvirt bug here should be irrelevant, and we could
avoid needing to invent a gate64 CPU model too.
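
For reference, the distinction shows up in the guest's domain XML; a
minimal sketch of the relevant elements (the gate64 model name comes from
the gate discussion; the rest of the domain definition is omitted):

  <domain type='qemu'>   <!-- TCG: the guest CPU is fully emulated -->
    <cpu mode='custom' match='exact'>
      <model>gate64</model>   <!-- should not require host CPU support -->
    </cpu>
  </domain>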


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle development sprint (NOTE: DATE CHANGE!)

2016-05-31 Thread Henry Gessau
Thierry Carrez  wrote:
> Rossella Sblendido wrote:
>> On 05/26/2016 10:47 PM, Henry Gessau wrote:
>>> I am happy to announce that the location logistics for the Neutron mid-cycle
>>> have been finalized. The mid-cycle will take place in Cork, Ireland on 
>>> August
>>> 15-17. I have updated the wiki [1] where you will find a link to an etherpad
>>> with all the details. There you can add yourself if you plan to attend, and
>>> make updates to topics that you would like to work on.
>>
>> Thanks for organizing this! I am happy to see a sprint in Europe :)
>> Unfortunately the 15th is bank holidays in some European countries and
>> at least in Italy most people organize their holidays around those days.
>> I will try to change my plans and do my best to attend.
> 
> For reference, Assumption (Aug 15) is a nationwide public holiday in the 
> following countries in Europe:
> 
> Andorra, Austria, Belgium, Croatia, Cyprus, France, Greece, Italy, 
> Lithuania, Luxembourg, Republic of Macedonia, Malta, Republic of 
> Moldova, Monaco, Poland (Polish Army Day), Portugal, Romania, Slovenia, 
> and Spain.
> 
> Beyond people generally organizing summer vacation around that date, 
> it's also peak-season for European travel, which can make flight prices 
> go up :)
> 
> But then, no date is perfect.
> 

After some discussions I have decided to keep this week but change it slightly
to the end of the week, Wednesday to Friday.

In other words, August 17-19. Same location.
I have updated the wiki and the etherpad.

-- 
Henry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] orchestration and db_sync

2016-05-31 Thread David Stanek
On Fri, May 27, 2016 at 12:08 PM, Ryan Hallisey  wrote:

These changes do not all happen at the same time for an OpenStack
installation.

> - Create the service's users and add a password into the database

Should only happen once during installation.

> - Sync the service with the database

Should happen during installation and for every upgrade.

> - Start the service
>
> I was wondering if for some services they could be aware of whether or not
> they need to sync with the database at startup.  Or maybe the service runs
> a db_sync every time it starts?  I figured I would start a thread about
> this because Keystone has some flexibility when running N+1 in a cluster
> of N. If Keystone has that ability, maybe Keystone could db_sync each time
> it starts without harming the cluster?

This isn't something I would want to see for a few reasons. The most
important one is that I think the decision to run db_sync needs to be
explicit. An operator should run it when they are ready (maybe they
need to shut something down, ensure up-to-date backups, etc.).
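
For illustration, the explicit ordering being argued for looks roughly like
this in an orchestration script (keystone-manage db_sync is the standard
command; running Keystone under Apache, and the unit name, are assumptions):

  # once, at install time: create the service user and DB credentials
  # (site-specific; omitted here)
  # at install time and on every upgrade, run the migration explicitly:
  keystone-manage db_sync
  # only then (re)start the service:
  systemctl restart apache2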

Another issue is database modification permissions. The user running
the application, as well as the DB user the application uses,
shouldn't have access to DDL for security reasons. Little Bobby
Tables' mom found this out the hard way[1].

1. https://xkcd.com/327/

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-31 Thread Duarte Cardoso, Igor
"But if ODL is not supporting its own NSH capable dataplane, instead expecting 
the user to run a patched OvS that doesn't have upstream acceptance then I 
think we would be building a rickety tower by piling networking-sfc on top of 
that unstable base."

ODL requires a patched OvS too [1].

[1] 
https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#Building_Open_vSwitch_with_VxLAN-GPE_and_NSH_support

Best regards,
Igor.

-Original Message-
From: Paul Carver [mailto:pcar...@paulcarver.us] 
Sent: Tuesday, May 31, 2016 3:13 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver, however, depends on OVS 
> as the dataplane - and I can see a solid argument for only supporting an 
> official version, with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned an error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting 
its own fork of OvS that has NSH support or is ODL expecting that the user will 
patch OvS themselves?

I don't know the details of why OvS hasn't added NSH support so I can't judge 
the validity of the concerns, but one way or another there has to be a 
production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH capable 
dataplane then it's reasonable to consider that the ODL driver could be the 
first networking-sfc driver to support NSH. However, we still need to make sure 
that the API is an abstraction, not implementation specific.

But if ODL is not supporting its own NSH capable dataplane, instead expecting 
the user to run a patched OvS that doesn't have upstream acceptance then I 
think we would be building a rickety tower by piling networking-sfc on top of 
that unstable base.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New core reviewers nomination for TOSCA-Parser and or Heat-Translator project [tosca-parser][heat-translator][heat]

2016-05-31 Thread Sahdev P Zala
Hello TOSCA-Parser and Heat-Translator core team,

I would like to nominate the following active contributors to the 
tosca-parser and/or heat-translator projects as core reviewers to speed up 
development. They have been contributing for more than six months and have 
each remained among the top five contributors for the mentioned project(s).

Please reply to this thread or email me with your vote (+1 or -1) by EOD 
June 4th. 

[1] Bob Haddleton: Bob is a lead developer for the TOSCA NFV-specific 
parsing and translation in the tosca-parser and heat-translator projects 
respectively. Bob actively participates in IRC meetings and other 
discussions via email or IRC. He is also a core reviewer in the OpenStack 
Tacker project. I would like to nominate him for a core reviewer position 
for both tosca-parser and heat-translator. 

[2] Miguel Caballar: Miguel has been familiar with TOSCA for a long time. He 
is an asset for the tosca-parser project and has been bringing a lot of new 
use cases to the project. He is the second lead developer overall for the 
project at present. I would like to nominate him for a core reviewer 
position in tosca-parser.

[3] Bharath Thiruveedula: Bharath is actively contributing to the 
heat-translator project. He knows the project well and has implemented 
important blueprints during the Mitaka cycle, including enhancements to the 
OSC plugin, automatic deployment of translated templates and dynamic 
querying of flavors and images. Bharath actively participates in IRC 
meetings and other discussions via email or IRC. I would like to nominate 
him for the core reviewer position in heat-translator. 

[4] Mathieu Velten: Mathieu has been familiar with TOSCA for a long time as 
well. He brings new use cases regularly and is actively working on enhancing 
the heat-translator project with the needed implementation. He also uses the 
translated templates for real-time deployments with Heat in his work on the 
Indigo DataCloud project [5]. He knows the project well and was the second 
lead developer for the project during the Mitaka cycle. I would like to 
nominate him for the core reviewer position in heat-translator. 

[1] 
http://stackalytics.com/?release=all&module=tosca-parser&metric=commits&user_id=bob-haddleton
and 
http://stackalytics.com/?release=all&module=heat-translator&metric=commits&user_id=bob-haddleton
[2] 
http://stackalytics.com/?release=all&module=tosca-parser&metric=commits&user_id=micafer1
[3] 
http://stackalytics.com/?release=all&module=heat-translator&metric=commits&user_id=bharath-ves
[4] 
http://stackalytics.com/?release=all&metric=commits&module=heat-translator&user_id=matmaul
[5] https://www.indigo-datacloud.eu/

Thanks! 

Regards, 
Sahdev Zala
RTP, NC

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for Commiters & Contributors for daisycloud-core project

2016-05-31 Thread jason
Hi Shake,

Kolla is mainly used for containerizing OpenStack components. But before
that, Kolla needs an installer to do node discovery as well as OS
provisioning. So there should be an installer in the first place, which then
calls into Kolla or alternatives such as Packstack to deploy OpenStack.

I agree with you that sometimes we do need the installer to run in the
context of a container, but that is mainly because we want to save a
dedicated jump server.
On May 31, 2016 2:57 PM, "Shake Chen"  wrote:

> Hi Zhijiang
>
> I think you can put Daisy into Docker, then use Ansible or Kolla to deploy
> Daisy.
>
>
>
> On Tue, May 31, 2016 at 9:43 AM,  wrote:
>
>> Hi All,
>>
>> I would like to introduce to you a new OpenStack installer project,
>> Daisy (project name: daisycloud-core). Daisy used to be a closed source
>> project mainly developed by ZTE, but we have now made it an OpenStack
>> related project (http://www.daisycloud.org,
>> https://github.com/openstack/daisycloud-core).
>>
>> Although it is not mature and still under development, Daisy concentrates
>> on deploying OpenStack fast and efficiently for large data centers which
>> have hundreds of nodes. In order to reach that goal, Daisy was born to
>> focus on many features that may not be suitable for small clusters, but
>> are definitely conducive to the deployment of big clusters. Those features
>> include, but are not limited to, the following:
>>
>> 1. Containerized OpenStack Services
>> In order to speed up installation and upgrading as a whole, Daisy decided
>> to use Kolla as the underlying deployment module to support containerized
>> OpenStack services.
>>
>> 2. Multicast
>> Daisy utilizes multicast as much as possible to speed up the imaging
>> workflow during installation. For example, instead of using a centralized
>> Docker registry while adopting Kolla, Daisy multicasts all Docker images to
>> each node of the cluster, then creates and uses local registries on each
>> node during the Kolla deployment process. The same thing can be done for
>> OS imaging too.
>>
>> 3. Automatic Deployment
>> Instead of letting users decide if a node can be provisioned and deserves
>> to join the cluster, Daisy provides a characteristics-matching mechanism
>> to recognize whether a new node has the same capabilities as the current
>> working compute nodes. If it does, Daisy will start deployment on that node
>> right after it is discovered and make it a compute node with the same
>> configuration as the current working compute nodes.
>>
>> 4. Configuration Template
>> Using a precise configuration file to describe a big, dynamic cluster is
>> impractical, and such a file cannot be reused when moving to another,
>> similar environment either. Daisy’s configuration template only describes
>> the common part of the cluster and a representative of the
>> controller/compute nodes. It can be seen as a semi-finished configuration
>> file that can be used in any similar environment. During deployment,
>> users only have to fill in a few specific parameters to turn the
>> configuration template into a final configuration file.
>>
>> 5. Your comments on anything else that can bring unique value to large
>> data center deployments?
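
A minimal illustration of the characteristics-matching idea in item 3
(all field names and values here are assumptions for the sketch, not
Daisy's actual code):

    # Sketch: auto-enroll a discovered node as a compute node when its
    # hardware profile matches the profile of working compute nodes.
    REFERENCE_PROFILE = {"cpu_arch": "x86_64", "nic_count": 4, "disk_gb": 960}

    def matches_reference(node, reference=REFERENCE_PROFILE):
        # Every characteristic of the reference profile must match.
        return all(node.get(key) == value for key, value in reference.items())

    discovered = {"cpu_arch": "x86_64", "nic_count": 4, "disk_gb": 960}
    if matches_reference(discovered):
        print("deploying discovered node as a compute node")
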
>>
>> As the project lead, I would like to get feedback from you about this new
>> project. You are more than welcome to join this project!
>>
>> Thank you
>> Zhijiang
>>
>>
>> 
>> ZTE Information Security Notice: The information contained in this mail (and 
>> any attachment transmitted herewith) is privileged and confidential and is 
>> intended for the exclusive use of the addressee(s).  If you are not an 
>> intended recipient, any disclosure, reproduction, distribution or other 
>> dissemination or use of the information contained is strictly prohibited.  
>> If you have received this mail in error, please delete it and notify us 
>> immediately.
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Shake Chen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to list all the servers of a user across projects

2016-05-31 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/86042/?show=86116#a86116
From: imocha 

http://localhost:8774/v2.1/a90db774c8fb48b290a161f075a862f6/servers/detail?all_tenants=1&user_id=d3d1ffa545c840148aeeffa62b69aa06

I am able to achieve the functionality using the above API. However, it requires the admin role, and if I don't add the user_id filter, it lists all the servers from all projects in all domains.

This is my concern. How do I restrict this to only one domain? How do I create an admin at the domain level?

The default admin in the Mitaka installation is above all domains and can do everything; this would be for the cloud administrator. I would like to keep this, and also have an admin at the domain level, mapped to each organisation, when hosting this solution as a public cloud.

I don't have clear documentation on policy implementation and attribute-based access control: specifically, how the token information would be used to restrict access to the API and to filter the results based on the token used in the API request.
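
For reference, a minimal Python sketch of the call described above (the
token value is a placeholder; the endpoint and IDs are the ones from this
message):

    import requests

    # Assumed: an admin-scoped token already obtained from Keystone.
    ADMIN_TOKEN = "PLACEHOLDER_TOKEN"

    url = ("http://localhost:8774/v2.1/a90db774c8fb48b290a161f075a862f6"
           "/servers/detail")
    # all_tenants=1 needs the admin role; user_id narrows to one user.
    params = {"all_tenants": 1, "user_id": "d3d1ffa545c840148aeeffa62b69aa06"}

    resp = requests.get(url, params=params,
                        headers={"X-Auth-Token": ADMIN_TOKEN})
    for server in resp.json().get("servers", []):
        print(server["id"], server["name"])
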



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Team meeting reminder - 05/30/2016

2016-05-31 Thread Renat Akhmerov
That’s no problem! I knew about the holiday )

Renat Akhmerov
@Nokia

> On 31 May 2016, at 18:05, Dougal Matthews  wrote:
> 
> 
> 
> On 30 May 2016 at 08:40, Renat Akhmerov wrote:
> Hi,
> 
> This is a reminder about the team meeting that we’ll have today at 16.00 UTC 
> at #openstack-meeting.
> 
> Agenda:
> Review action items
> Current status (progress, issues, roadblocks, further plans)
> Newton-2 scope
> Open discussion
> 
> As usual, feel free to bring your own topics.
> 
> Sorry I neglected to make the meeting. Last week, when I said I would
> attend, I forgot it was a public holiday in the UK.
> 
> Dougal
> 
>  
> 
> Renat Akhmerov
> @Nokia
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for Shade core

2016-05-31 Thread Ghe Rivero
Quoting Monty Taylor (2016-05-31 15:00:10)
> On 05/31/2016 08:53 AM, David Shrewsbury wrote:
> > Ricardo has been working with shade for a while now, has been great at
> > helping out with reviews, and has offered some quality code contributions.
> > He has shown a good understanding of the code base and coding guidelines,
> > and has been helping to review (and adding to) the new OpenStack Ansible
> > modules that depend so highly on shade.
> > 
> > Shade could use more cores as our user base has grown and I think he'd be
> > an awesome addition.
> 
> I wholeheartedly concur. Ricky has been doing a great job. Having him in
> shade-core would be great.
 
++ Shade is getting a lot of attention lately, and more cores will help.

Ghe Rivero



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] Selectively publish logs to topics

2016-05-31 Thread Witek Bedyk

Hi Venkat,

thank you for submitting the blueprint [1]. It actually covers two 
topics, both of them valuable functional extensions:


1) submitting additional (apart from dimensions) information with the logs
2) specifying a specific output topic

ad. 1
I think we should keep it generic to allow the operator to add any 
information they need. I like the idea of adding the 'attributes' 
dictionary, but we would need it per message, not only per request (the 
same story as we had with global and local dimensions).


ad. 2
Since we want to change the target where the API writes the data, we could 
perhaps use a path parameter for that. The request could look like:


POST /v3.0/logs/topics/{kafka_topic_name}

I don't think we should send 'retention' with every request; instead, the 
Kafka topic should be configured accordingly. But I understand it was 
just an example, right?
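
To make the two proposals concrete, a hedged sketch of what a request
against such an endpoint might look like (the per-message 'attributes'
field and the topic path come from this discussion; the host, port, and
payload values are illustrative assumptions, not a final API):

    import requests

    payload = {
        "logs": [
            {
                "message": "Transaction failed after 3 retries",
                "dimensions": {"hostname": "node-1"},
                # proposed per-message extras, analogous to local dimensions
                "attributes": {"transaction_id": "abc123"},
            }
        ]
    }

    # proposed topic-specific endpoint:
    # POST /v3.0/logs/topics/{kafka_topic_name}
    resp = requests.post(
        "http://monasca-log-api:5607/v3.0/logs/topics/audit",
        headers={"X-Auth-Token": "PLACEHOLDER_TOKEN"},
        json=payload,
    )
    print(resp.status_code)
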



Cheers
Witek


[1] 
https://blueprints.launchpad.net/monasca/+spec/publish-logs-to-topic-selectively



--
FUJITSU Enabling Software Technology GmbH
Schwanthalerstr. 75a, 80336 München

Phone: +49 89 360908-547
Fax: +49 89 360908-8547
COINS: 7941-6547

Registered office: München
Munich Local Court (AG München), HRB 143325
Managing Directors: Dr. Yuji Takada, Hans-Dieter Gatzka, Christian Menk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for Shade core

2016-05-31 Thread Monty Taylor
On 05/31/2016 08:53 AM, David Shrewsbury wrote:
> Ricardo has been working with shade for a while now, has been great at
> helping out with reviews, and has offered some quality code contributions.
> He has shown a good understanding of the code base and coding guidelines,
> and has been helping to review (and adding to) the new OpenStack Ansible
> modules that depend so highly on shade.
> 
> Shade could use more cores as our user base has grown and I think he'd be
> an awesome addition.

I wholeheartedly concur. Ricky has been doing a great job. Having him in
shade-core would be great.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][shade] Proposing Ricardo Carrillo Cruz for Shade core

2016-05-31 Thread David Shrewsbury
Ricardo has been working with shade for a while now, has been great at
helping out with reviews, and has offered some quality code contributions.
He has shown a good understanding of the code base and coding guidelines,
and has been helping to review (and adding to) the new OpenStack Ansible
modules that depend so highly on shade.

Shade could use more cores as our user base has grown and I think he'd be
an awesome addition.


-Dave
-- 
David Shrewsbury (Shrews)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-31 Thread Sean Dague
On 05/30/2016 10:05 PM, Zhenyu Zheng wrote:
> I think it is good to share code, and a single microversion can make
> life easier during coding.
> Can we approve those specs first and then decide on the details in IRC
> and patch review? Because the non-priority spec deadline is so close.
> 
> Thanks
> 
> On Tue, May 31, 2016 at 1:09 AM, Ken'ichi Ohmichi wrote:
> 
> 2016-05-29 19:25 GMT-07:00 Alex Xu:
> >
> >
> > 2016-05-20 20:05 GMT+08:00 Sean Dague:
> >>
> >> There are a number of changes up for spec reviews that add parameters to
> >> LIST interfaces in Newton:
> >>
> >> * keypairs-pagination (MERGED) -
> >> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
> >> * os-instances-actions - https://review.openstack.org/#/c/240401/
> >> * hypervisors - https://review.openstack.org/#/c/240401/
> >> * os-migrations - https://review.openstack.org/#/c/239869/
> >>
> >> I think that limit / marker is always a legit thing to add, and I almost
> >> wish we just had a single spec which is "add limit / marker to the
> >> following APIs in Newton"
> >>
> >
> > Are you looking for code sharing or one microversion? For code sharing,
> > it sounds ok if people have some co-work. Probably we need a common
> > pagination-supporting model_query function for all of those. For one
> > microversion, I'm a little hesitant; we should keep each change small,
> > or enable everything in one microversion. But if we have some base code
> > for pagination support, we could probably make pagination a default
> > thing supported for all list methods?
> 
> It is nice to share some common code for this; that would also be nice
> for writing the API docs, to know which APIs support them.
> And it would also be nice to do it with a single microversion for the
> above resources, because we can avoid microversion bumping conflicts and
> none of them seems a big change.

There is already common code for limit / marker.

I don't think these all need to be one microversion; they are honestly
easier to review if they are not.

However, in the future we should probably make one spec for all limit /
marker additions during a cycle, just because the answer will be *yes* and
it seems like more work to have everything be a dedicated spec.
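
For reference, a client consuming a limit/marker API typically loops like
this (a sketch; the endpoint, resource name, and field names are
illustrative assumptions):

    import requests

    def list_all_servers(base_url, token, page_size=100):
        # Fetch pages until an empty page signals the end of the listing.
        marker = None
        while True:
            params = {"limit": page_size}
            if marker:
                params["marker"] = marker
            resp = requests.get(base_url + "/servers",
                                headers={"X-Auth-Token": token},
                                params=params)
            servers = resp.json().get("servers", [])
            if not servers:
                break
            for server in servers:
                yield server
            # The next page starts after the last item we saw.
            marker = servers[-1]["id"]
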

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-31 Thread Sean Dague
On 05/30/2016 04:02 PM, Shoham Peller wrote:
> I support Clint's comment, and as an example, only today I was able to
> search for a bug and see that it was reported 2 years ago and hasn't been
> solved since.
> I've commented on the bug saying it happened to me in an up-to-date nova.
> I'm talking about a bug which is on your list -
> https://bugs.launchpad.net/nova/+bug/1298075
> 
> I guess I wouldn't have been able to do so if the bug was closed.

A closed bug still shows up in search, and also when you try to report a
new bug. So you'd still see it in reporting.

That bug is actually a classic instance of something which shouldn't be
in the bug tracker. It's a known issue of all of OpenStack and
Keystone's token architecture. It requires a bunch of Keystone feature
work to be addressed.

Having a more public "Known Issues in OpenStack" googlable page might be
way more appropriate for this so we don't spend a ton of time
duplicating issues into these buckets.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-31 Thread Sean Dague
On 05/30/2016 02:37 PM, Clint Byrum wrote:
> (Top posting as a general reply to the thread)
> 
> Bugs are precious data. As much as it feels like the bug list is full of
> cruft that won't ever get touched, one thing that we might be missing in
> doing this is that the user who encounters the bug and takes the time
> to actually find the bug tracker and report a bug, may be best served
> by finding that somebody else has experienced something similar. If you
> close this bug, that user is now going to be presented with the "I may
> be the first person to report this" flow instead of "yeah I've seen that
> error too!". The former can be a daunting task, but the latter provides
> extra incentive to press forward, since clearly there are others who
> need this, and more data is helpful to triagers and fixers.

I strongly disagree with this sentiment. Bugs are only useful if
actionable. Given the rate of change of the code base an 18 month old
bug without a reasonable reproduce case (which in almost all cases is
not there), is just debt. And more importantly they are sink holes where
well-intentioned developers go off and burn 3 days realizing it's
completely irrelevant to the current project. Energy that could be spent
on relevant work.

> I 100% support those who are managing bugs doing whatever they need
> to do to make sure users' issues are being addressed as well as can be
> done with the resources available. However, I would also urge everyone
> to remember that the bug tracker is not only a way for developers to
> manage the bugs, it is also a way for the community of dedicated users
> to interact with the project as a whole.

Dedicated users reporting bugs that are actionable tend not to exist
longer than the supported window of the project.

I do also suggest that if people feel strongly that bugs shouldn't be
expired like this, they put their money where their mouth is and help on
the Bug Triage and addressing bugs through the system. Because the
alternative to expiring old bugs isn't old bugs getting more eyes, it's
all bugs getting less time by developers because the pile is so
insurmountable no one ever wants to look at it.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Sean Dague
On 05/31/2016 05:39 AM, Daniel P. Berrange wrote:
> On Tue, May 24, 2016 at 01:59:17PM -0400, Sean Dague wrote:
>> The team working on live migration testing started with an experimental
>> job on Ubuntu 16.04 to try to be using the latest and greatest libvirt +
>> qemu under the assumption that a set of issues we were seeing are
>> solved. The short answer is, it doesn't look like this is going to work.
>>
>> We run tests on a bunch of different clouds. Those clouds expose
>> different cpu flags to us. These are not standard things that map to
>> "Haswell". It means live migration in the multinode cases can hit cpus
>> with different flags. So we found the requirement was to come up with a
>> least common denominator of cpu flags, which we call gate64, and push
>> that into the libvirt cpu_map.xml in devstack, and set whenever we are
>> in a multinode scenario.
>> (https://github.com/openstack-dev/devstack/blob/master/tools/cpu_map_update.py)
>>  Not ideal, but with libvirt 1.2.2 it works fine.
>>
>> It turns out it works fine because libvirt *actually* seems to take the
>> data from cpu_map.xml and do a translation to what it believes qemu will
>> understand. On these systems apparently this turns into "-cpu
>> Opteron_G1,-pse36"
>> (http://logs.openstack.org/29/42529/24/check/gate-tempest-dsvm-multinode-full/5f504c5/logs/libvirt/qemu/instance-000b.txt.gz)
>>
>> At some point between libvirt 1.2.2 and 1.3.1, this changed. Now libvirt
>> seems to be passing our cpu_model directly to qemu, and assumes that as
>> a user you will be responsible for writing all the <feature> stanzas to
>> add/remove yourself. When libvirt sends 'gate64' to qemu, this explodes,
>> as qemu has no idea what we are talking about.
>> http://logs.openstack.org/34/319934/2/experimental/gate-tempest-dsvm-multinode-live-migration/b87d689/logs/screen-n-cpu.txt.gz#_2016-05-24_15_59_12_531
>>
>> Unlike libvirt, which has a text file (xml) that configures the cpus
>> that could exist in the world, qemu builds this in statically at compile
>> time:
>> http://git.qemu.org/?p=qemu.git;a=blob;f=target-i386/cpu.c;h=895a386d3b7a94e363ca1bb98821d3251e70c0e0;hb=HEAD#l694
>>
>>
>> So, the existing cpu_map.xml workaround for our testing situation will
>> no longer work.
>>
>> So, we have a number of open questions:
>>
>> * Have our cloud providers standardized enough that we might get away
>> without this custom cpu model? (Have some of them done it and only use
>> those for multinode?)
>> * Is there any way to get this feature back in libvirt to do the cpu
>> computation?
>> * Would we have to build a whole nova feature around setting libvirt xml
>>  to be able to test live migration in our clouds?
>> * Other options?
>> * Do we give up and go herd goats?
> 
> Rather than try to define our own custom CPU models, we can probably
> just use one of the standard CPU models and then explicitly tell
> libvirt which flags to turn off in order to get compatibility with
> our cloud environments.
> 
> This is not currently possible with Nova, since our nova.conf option
> only allows us to specify a bare CPU model. We would have to extend
> nova.conf to allow us to specify a list of CPU features to add or
> remove. Libvirt should then correctly pass these changes through
> to QEMU.

Yes, that's an option. Given that the libvirt team seemed to acknowledge
this as a regression, I'd rather not build a user exposed feature for
all of that just as a workaround for a libvirt regression.
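
As background, the "least common denominator" model mentioned above is
essentially the intersection of the CPU flag sets observed across
providers; a toy sketch (provider names and flag sets are made up):

    # Derive a safe, common CPU feature set from what each cloud exposes.
    provider_flags = {
        "cloud-a": {"sse2", "sse4_1", "aes", "avx"},
        "cloud-b": {"sse2", "sse4_1", "aes"},
        "cloud-c": {"sse2", "sse4_1"},
    }
    gate64_flags = set.intersection(*provider_flags.values())
    print(sorted(gate64_flags))  # flags safe for live migration anywhere
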

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Sean Dague
On 05/30/2016 06:25 AM, Kashyap Chamarthy wrote:
> On Thu, May 26, 2016 at 10:55:47AM -0400, Sean Dague wrote:
>> On 05/26/2016 05:38 AM, Kashyap Chamarthy wrote:
>>> On Wed, May 25, 2016 at 05:42:04PM +0200, Kashyap Chamarthy wrote:
>>>
>>> [...]
>>>
 So, in short, the central issue seems to be this: the custom 'gate64'
 model is not being translated by libvirt into a model that QEMU can
 recognize.
>>>
>>> An update:
>>>
>>> Upstream libvirt points out that this turns out to be a regression, and
>>> bisected it to commit (in libvirt Git): 1.2.9-31-g445a09b -- "qemu:
>>> Don't compare CPU against host for TCG".
>>>
>>> So, I expect there's going to be fix pretty soon upstream libvirt.
>>
>> Which is good... I wonder how long we'll be waiting for that back in our
>> distro packages though.
> 
> Yeah, until the fix lands, our current options seem to be:
> 
>   (a) Revert to a known good version of libvirt

Downgrading libvirt so dramatically isn't a thing we'll be able to do.

>   (b) Use nested virt (i.e. ) -- I doubt it is possible
>   on the RAX environment, which is using Xen, last I knew.

We turned off nested virt even where it was enabled, because it locks up
at a non trivial rate. So not really an option.

>   (c) Or a different CPU model

Right, although it's not super clear what that will be.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [devstack] [smaug] gate-smaug-dsvm-fullstack-nv failed with exit code: 2

2016-05-31 Thread xiangxinyong
Hi Wilson,


Thanks very much. You found the key.
I also got some information from here:


[1] 
https://review.openstack.org/#/q/project:openstack-infra/project-config+novnc


Best Regards,
  xiangxinyong




Wilson Liu wrote on 2016-05-31 at 7:26 PM:

>> Hi xinyong,
>>
>> I remember seeing this error some time ago in my CI.
>>
>> It seems nova didn't need noVNC before, but now it needs noVNC to
>> complete the installation.
>> You need to clone noVNC into /opt/stack/new manually, or add the noVNC
>> project to the $PROJECTS variable in a file like:
>> /home/Jenkins/workspaces/workspace/YOU-JOB-NAME/devstack-gate/devstack-vm-gate-wrap.sh
>>
>> Hope that could help you :)
>>
>> --
>> Wilson Liu

> Hello team,
>
> The gate-smaug-dsvm-fullstack-nv job failed with exit code: 2.
>
> The console.html [1] includes the below information:
>
> Running devstack
> ERROR: the main setup script run by this job failed - exit code: 2
> [...]
>
> Could someone help? Thanks very much.
>
> [1]
> http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/console.html
>
> [2]
> http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/logs/devstacklog.txt.gz
>
> Best Regards,
>   xiangxinyong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-05-31 Thread koshiya maho
Hi, keystone devs,

Thank you for your many opinions about request id mapping.
I fixed this patch [1] following the suggestions that Brant and Cao gave me [2].
Could you review it?

[1] https://review.openstack.org/#/c/261188/
[2] http://paste.openstack.org/show/495040/
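
For reviewers following along, the pattern under discussion wraps the
returned list so it also carries the request ID; a minimal sketch (class
and attribute names are illustrative, not the patch's actual ones):

    # A list subclass that exposes the request ID(s) of the API call
    # that produced it, so callers can log or correlate requests.
    class ListWithMeta(list):
        def __init__(self, values, request_id):
            super(ListWithMeta, self).__init__(values)
            self.request_ids = [request_id]

    users = ListWithMeta(["alice", "bob"], "req-9f2c3b1a")
    print(list(users), users.request_ids)
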

Thank you,

On Wed, 20 Apr 2016 16:37:31 -0700
Morgan Fainberg  wrote:

> On Wed, Apr 13, 2016 at 6:07 AM, David Stanek  wrote:
> 
> > On Wed, Apr 13, 2016 at 3:26 AM koshiya maho wrote:
> >
> >>
> >> My request to all keystone cores to give their suggestions about the same.
> >>
> >>
> > I'll test this a little and see if I can see how it breaks.
> >
> > Overall I'm not really a fan of this design. It's just a hack to add
> > attributes where they don't belong. Long term I think this will be hard to
> > maintain.
> >
> >
> >
> If we want to return a response object we should return a response object.
> Returning a magic list with attributes (or a dict with attributes, etc)
> feels very, very wrong.
> 
> I'm not going to block this design, but I wish we had something a bit
> better.
> 
> --Morgan

--
Maho Koshiya
E-Mail : koshiya.m...@po.ntts.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-31 Thread Sean Dague
On 05/30/2016 10:26 AM, Loo, Ruby wrote:
> Hi,
> 
>> But the issue here is just capacity. Whether or not we keep an instance
>> in a deleting state, or when we release quota, doesn't change the
>> Tempest failures from what I can tell. The suggestions below address
>> that.
>>
>>
>>>
>>
>> I think we should go with #1, but instead of erasing the whole disk
>> for real maybe we should have a "fake" clean step that runs quickly
>> for tests purposes only?
>>
>>>
>>> Disabling the cleaning step (or having a fake one that does nothing) for
>>> the
>>> gate would get around the failures at least. It would make things work
>>> again
>>> because the nodes would be available right after Nova deletes them.
> 
> I lost track of what we are trying to test? If we want to test that an 
> ironic node gets cleaned, then add fake cleaning. If we don't care that the 
> node gets cleaned (because e.g. we have a different test that will test for 
> that), then disable the cleaning. [And if we don't care either way, but one 
> is harder to do than the other, go with the easier ;)]

It seems like cleaning tests are probably something you want to do in a
more dedicated way because of the cost associated with them. We run the
default gate jobs with secure_delete turned off for volumes for the same
reason; it just adds a ton of delay that impacts a lot of other
unrelated code.

So if there is a flag to just disable it, I think that's fine.
Especially given that the fake Ironic nodes are QEMU guests, right? So
killing and rebooting should give you a fresh one.

Just make sure that in an Ironic-specific normal job, cleaning is handled.
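
For illustration, a no-op clean step along the lines discussed above might
look like this (the decorator mirrors ironic's clean-step interface; the
class and its wiring are assumptions for the sketch, not a proposed patch):

    from ironic.drivers import base

    class FakeCleanMixin(object):
        @base.clean_step(priority=10)
        def erase_devices(self, task):
            """Gate-only stand-in: skip the real disk erase and return
            immediately so test nodes become available quickly."""
            return None
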

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Live Migration meeting

2016-05-31 Thread Murray, Paul (HP Cloud)
Sorry for the late update - this weekend was a holiday in the UK.

Agenda: https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [devstack] [smaug] gate-smaug-dsvm-fullstack-nv failed with exit code: 2

2016-05-31 Thread liuxinguo
Hi xinyong,

I remember seeing this error some time ago in my CI.

It seems nova didn't need noVNC before, but now it needs noVNC to complete 
the installation.
You need to clone noVNC into /opt/stack/new manually, or add the noVNC 
project to the $PROJECTS variable in a file like: 
/home/Jenkins/workspaces/workspace/YOU-JOB-NAME/devstack-gate/devstack-vm-gate-wrap.sh

Hope that could help you :)

--
Wilson Liu

From: xiangxinyong [mailto:xiangxingy...@qq.com]
Sent: May 31, 2016 19:05
To: openstack-dev@lists.openstack.o
Subject: [openstack-dev] [infra] [devstack] [smaug] gate-smaug-dsvm-fullstack-nv 
failed with exit code: 2

Hello team,

The gate-smaug-dsvm-fullstack-nv job failed with exit code: 2.

The console.html [1] includes the below information:
Running devstack
ERROR: the main setup script run by this job failed - exit code: 2

The devstacklog.txt.gz [2] includes the below information:

+ functions-common:git_clone:533:   echo 'The /opt/stack/new/noVNC project was not found; if this is a gate job, add'
The /opt/stack/new/noVNC project was not found; if this is a gate job, add
+ functions-common:git_clone:534:   echo 'the project to the $PROJECTS variable in the job definition.'
the project to the $PROJECTS variable in the job definition.
+ functions-common:git_clone:535:   die 535 'Cloning not allowed in this configuration'
+ functions-common:die:186:   local exitcode=0

I guess the problem is related to this file [3].
Could someone help? Thanks very much.
[1] 
http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/console.html
[2] 
http://logs.openstack.org/29/321329/2/check/gate-smaug-dsvm-fullstack-nv/c734eea/logs/devstacklog.txt.gz
[3] 
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/smaug.yaml

Best Regards,
  xiangxinyong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Team meeting reminder - 05/30/2016

2016-05-31 Thread Dougal Matthews
On 30 May 2016 at 08:40, Renat Akhmerov  wrote:

> Hi,
>
> This is a reminder about the team meeting that we’ll have today at 16.00
> UTC at #openstack-meeting.
>
> Agenda:
>
>- Review action items
>- Current status (progress, issues, roadblocks, further plans)
>- Newton-2 scope
>- Open discussion
>
>
> As usual, feel free to bring your own topics.
>

Sorry I neglected to make the meeting. Last week, when I said I would
attend, I forgot it was a public holiday in the UK.

Dougal



>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

